Order statistics and concentration of $\ell_r$ norms for log-concave vectors



Rafał Latała

Abstract. We establish upper bounds for tails of order statistics of isotropic log-concave vectors and apply them to derive a concentration of $\ell_r$ norms of such vectors.

AMS 2010 Mathematics Subject Classification: 60E15; 52A38, 60B11.
Key words and phrases: log-concave measures, order statistics, concentration of volume.
Research partially supported by MNiSW Grant no. N N201 397437 and the Foundation for Polish Science.
Institute of Mathematics, University of Warsaw, Banacha 2, 02-097 Warszawa, Poland and Institute of Mathematics, Polish Academy of Sciences, ul. Śniadeckich 8, 00-956 Warszawa, Poland; e-mail: rlatala@mimuw.edu.pl

1 Introduction and notation

An $n$-dimensional random vector is called log-concave if it has a log-concave distribution, i.e. for any compact nonempty sets $A, B \subset \mathbb{R}^n$ and $\lambda \in [0,1]$,
$$P(X \in \lambda A + (1-\lambda)B) \ge P(X \in A)^{\lambda} P(X \in B)^{1-\lambda},$$
where $\lambda A + (1-\lambda)B = \{\lambda x + (1-\lambda)y \colon x \in A,\ y \in B\}$. By the result of Borell [3], a vector $X$ with full dimensional support is log-concave if and only if it has a density of the form $e^{-f}$, where $f \colon \mathbb{R}^n \to (-\infty, \infty]$ is a convex function. Log-concave vectors are frequently studied in convex geometry, since by the Brunn-Minkowski inequality uniform distributions on convex sets, as well as their lower dimensional marginals, are log-concave.

A random vector $X = (X_1, \ldots, X_n)$ is isotropic if $EX_i = 0$ and $\mathrm{Cov}(X_i, X_j) = \delta_{i,j}$ for all $i, j \le n$. Equivalently, an $n$-dimensional random vector with mean zero is isotropic if $E\langle t, X\rangle^2 = |t|^2$ for any $t \in \mathbb{R}^n$. For any nondegenerate

log-concave vector $X$ there exists an affine transformation $T$ such that $TX$ is isotropic.

In recent years numerous important properties of log-concave vectors have been derived. One such result is the Paouris concentration of mass [10], which states that for any isotropic log-concave vector $X$ in $\mathbb{R}^n$,
$$P(|X| \ge Ct\sqrt{n}) \le \exp(-t\sqrt{n}) \quad \text{for } t \ge 1. \tag{1}$$
One of the purposes of this paper is the extension of the Paouris result to $\ell_r$ norms, that is, deriving upper bounds for $P(\|X\|_r \ge t)$, where $\|x\|_r = (\sum_{i=1}^n |x_i|^r)^{1/r}$. For $r \in [1,2)$ this is an easy consequence of (1) and Hölder's inequality; however, the case $r > 2$ requires, in our opinion, new ideas. We show that
$$P(\|X\|_r \ge C(r)tn^{1/r}) \le \exp(-tn^{1/r}) \quad \text{for } t \ge 1,\ r > 2,$$
where $C(r)$ is a constant depending only on $r$ (see Theorem 8).

Our method is based on suitable tail estimates for order statistics of $X$. For an $n$-dimensional random vector $X$, by $X_1^* \ge X_2^* \ge \ldots \ge X_n^*$ we denote the nonincreasing rearrangement of $|X_1|, \ldots, |X_n|$; in particular $X_1^* = \max\{|X_1|, \ldots, |X_n|\}$ and $X_n^* = \min\{|X_1|, \ldots, |X_n|\}$. The random variables $X_k^*$, $1 \le k \le n$, are called order statistics of $X$. By (1) we immediately get, for isotropic log-concave vectors $X$,
$$P(X_k^* \ge t) \le \exp\Big(-\frac{1}{C}t\sqrt{k}\Big) \quad \text{for } t \ge C\sqrt{n/k}.$$
The main result of the paper is Theorem 3, which states that the above inequality holds for $t \ge C\log(en/k)$; as the example of the exponential distribution shows, this range of $t$ is, for $k \le n/2$, optimal up to a universal constant.

Tail estimates for order statistics can also be applied to provide optimal estimates for $\sup_{\#I = m}\|P_I X\|$, where the supremum is taken over all subsets $I$ of $\{1, \ldots, n\}$ of cardinality $m \in [1, n]$ and $P_I$ denotes the coordinatewise projection. The details will be presented in the forthcoming paper [1].

The organization of the article is as follows. In Section 2 we discuss upper bounds for tails of order statistics and their connections with exponential concentration and the Paouris result. Section 3 is devoted to the derivation of tail estimates of $\ell_r$ norms for log-concave vectors.
Finally, Section 4 contains the proof of Theorem 4, which is a crucial tool used to derive our main result.

Throughout the article, by $C, C_1, \ldots$ we denote universal constants; the value of a constant $C$ may differ at each occurrence. For $x \in \mathbb{R}^n$ we put $|x| = \|x\|_2 = (\sum_{i=1}^n x_i^2)^{1/2}$.
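As a concrete illustration of the notions recalled above, the following minimal numerical sketch (not part of the paper; it assumes NumPy is available) checks isotropy for a simple log-concave distribution, the uniform measure on a cube:

```python
import numpy as np

# Sketch, not from the paper: the uniform distribution on the cube
# [-sqrt(3), sqrt(3)]^n is log-concave, and each coordinate has mean 0
# and variance (2*sqrt(3))**2 / 12 = 1, so the vector is isotropic:
# EX_i = 0 and Cov(X_i, X_j) = delta_{ij}.
rng = np.random.default_rng(0)
n, N = 5, 200_000
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(N, n))

mean = X.mean(axis=0)          # empirical mean, close to the zero vector
cov = np.cov(X, rowvar=False)  # empirical covariance, close to the identity
print(np.abs(mean).max(), np.abs(cov - np.eye(n)).max())
```

Both printed deviations should be small (of order $N^{-1/2}$), consistent with the isotropy normalization $EX_i = 0$, $\mathrm{Cov}(X_i, X_j) = \delta_{i,j}$.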

2 Tail estimates for order statistics

If the coordinates of $X$ are independent symmetric exponential random variables with variance one, then it is not hard to see that $\mathrm{Med}(X_k^*) \ge \frac{1}{C}\log(en/k)$ for any $1 \le k \le n/2$. So we may obtain a reasonable bound for $P(X_k^* \ge t)$, $k \le n/2$, in the case of isotropic log-concave vectors only for $t \ge \frac{1}{C}\log(en/k)$. Using the idea that exponential random vectors are extremal in the class of unconditional log-concave vectors (i.e. such vectors that $(\eta_1 X_1, \ldots, \eta_n X_n)$ has the same distribution as $X$ for any choice of signs $\eta_i \in \{-1, 1\}$), one may easily derive the following fact.

Proposition 1. If $X$ is a log-concave and unconditional $n$-dimensional isotropic random vector, then
$$P(X_k^* \ge t) \le \exp\Big(-\frac{1}{C}tk\Big) \quad \text{for } t \ge C\log\Big(\frac{en}{k}\Big).$$

Proof. The result of Bobkov and Nazarov [2] implies that for any $i_1 < i_2 < \ldots < i_k$ and $t > 0$,
$$P(|X_{i_1}| \ge t, \ldots, |X_{i_k}| \ge t) = 2^k P(X_{i_1} \ge t, \ldots, X_{i_k} \ge t) \le 2^k\exp\Big(-\frac{1}{C}tk\Big).$$
Hence
$$P(X_k^* \ge t) \le \sum_{1 \le i_1 < \ldots < i_k \le n} P(|X_{i_1}| \ge t, \ldots, |X_{i_k}| \ge t) \le \Big(\frac{2en}{k}\Big)^k\exp\Big(-\frac{1}{C}tk\Big) \le \exp\Big(-\frac{1}{2C}tk\Big)$$
if $t \ge C\log(en/k)$.

However, for a general isotropic log-concave vector, without the unconditionality assumption we may bound $P(|X_{i_1}| \ge t, \ldots, |X_{i_k}| \ge t)$ only by $\exp(-t\sqrt{k}/C)$ for $t \ge C\sqrt{k}$. This suggests that we should rather expect a bound $\exp(-t\sqrt{k}/C)$ than $\exp(-tk/C)$. If we try to apply the union bound as in the proof of Proposition 1, it will work only for $t \ge C\sqrt{k}\log(en/k)$.

Another approach may be based on exponential concentration. We say that a vector $X$ satisfies the exponential concentration inequality with a constant $\alpha$ if for any Borel set $A$,
$$P(X \in A + \alpha tB_2^n) \ge 1 - \exp(-t) \quad \text{if } P(X \in A) \ge \frac12 \text{ and } t > 0.$$
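The order of magnitude $\mathrm{Med}(X_k^*) \asymp \log(en/k)$ for independent symmetric exponential coordinates can be checked by simulation. A minimal sketch (not from the paper; it assumes NumPy; a variance-one symmetric exponential variable is a Laplace variable with scale $1/\sqrt{2}$):

```python
import numpy as np

# Sketch, not from the paper: for iid symmetric exponential coordinates
# with variance 1, the median of the k-th order statistic X*_k stays
# within constant factors of log(en/k), the threshold appearing in
# Proposition 1 and Theorem 3.
rng = np.random.default_rng(1)
n, N = 1024, 2000
X = rng.laplace(scale=1 / np.sqrt(2), size=(N, n))
order_stats = np.sort(np.abs(X), axis=1)[:, ::-1]  # X*_1 >= ... >= X*_n

ratios = {}
for k in (1, 16, 256):
    med = np.median(order_stats[:, k - 1])
    ratios[k] = med / np.log(np.e * n / k)
print(ratios)  # ratios of the same order across k, bounded away from 0
```

The ratios remain comparable as $k$ varies over several dyadic scales, which is the content of the extremality heuristic above.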

Proposition 2. If the coordinates of an $n$-dimensional vector $X$ have mean zero and variance one and $X$ satisfies the exponential concentration inequality with a constant $\alpha \ge 1$, then
$$P(X_k^* \ge t) \le \exp\Big(-\frac{1}{3\alpha}t\sqrt{k}\Big) \quad \text{for } t \ge 8\alpha\log\Big(\frac{en}{k}\Big).$$

Proof. Since $\mathrm{Var}(X_i) = 1$ we have $P(|X_i| \le 2) \ge 1/2$, so $P(|X_i| \ge 2 + t) \le \exp(-t/\alpha)$ for $t > 0$. Let $\mu$ be the distribution of $X$. Then the set
$$A(t) = \Big\{x \in \mathbb{R}^n \colon \#\{i \colon |x_i| \ge t\} < \frac{k}{2}\Big\}$$
has measure $\mu$ at least $1/2$ for $t \ge 4\alpha\log(en/k)$; indeed, for such $t$ we have
$$1 - \mu(A(t)) = P\Big(\sum_{i=1}^n 1_{\{|X_i| \ge t\}} \ge \frac{k}{2}\Big) \le \frac{2}{k}E\sum_{i=1}^n 1_{\{|X_i| \ge t\}} \le \frac{2n}{k}\exp\Big(\frac{2-t}{\alpha}\Big) \le \frac{2n}{k}\Big(\frac{k}{en}\Big)^2 \le \frac12.$$
Let $A = A(4\alpha\log(en/k))$. If $z = x + y \in A + sB_2^n$, then fewer than $k/2$ of the $|x_i|$'s are bigger than $4\alpha\log(en/k)$ and fewer than $k/2$ of the $|y_i|$'s are bigger than $2s/\sqrt{k}$, so
$$P\Big(X_k^* \ge 4\alpha\log\frac{en}{k} + \frac{2s}{\sqrt{k}}\Big) \le 1 - \mu(A + sB_2^n) \le \exp\Big(-\frac{s}{\alpha}\Big).$$
Taking $s = \sqrt{k}(t - 4\alpha\log(en/k))/2$ gives the assertion.

For log-concave vectors it is known that the exponential inequality is equivalent to several other functional inequalities, such as Cheeger's and the spectral gap inequality; see [9] for a detailed discussion and recent results. The strong conjecture due to Kannan, Lovász and Simonovits [6] states that every isotropic log-concave vector satisfies Cheeger's, and therefore also the exponential, inequality with a uniform constant. The conjecture, however, is wide open; a recent result of Klartag [7] shows that in the unconditional case the KLS conjecture holds up to a $\log n$ constant (see also [5] for examples of nonproduct distributions that satisfy the spectral gap inequality with uniform constants). The best known upper bound for Cheeger's constant for a general isotropic log-concave measure is $n^{\alpha}$ for some $\alpha \in (1/4, 1/2)$, see [9] and [4].

The main result of this paper states that, although the approach via the union bound or exponential concentration fails, the natural estimate for order statistics is valid. Namely, we have

Theorem 3. Let $X$ be an $n$-dimensional log-concave isotropic vector. Then
$$P(X_k^* \ge t) \le \exp\Big(-\frac{1}{C}t\sqrt{k}\Big) \quad \text{for } t \ge C\log\Big(\frac{en}{k}\Big).$$

Our approach is based on a suitable estimate of the moments of the process $N_X(t)$, where
$$N_X(t) := \sum_{i=1}^n 1_{\{X_i \ge t\}}, \quad t \ge 0.$$

Theorem 4. For any isotropic log-concave vector $X$ and $p \ge 1$ we have
$$E(t^2N_X(t))^p \le (Cp)^{2p} \quad \text{for } t \ge C\log\frac{nt^2}{p^2}.$$

We postpone the long and a bit technical proof till the last section of the paper. Let us only mention at this point that it is based on two ideas. One is the Paouris large deviation inequality (1); the other is the observation that a log-concave distribution restricted to a convex set is still log-concave.

Proof of Theorem 3. Observe that $X_k^* \ge t$ implies that $N_X(t) \ge k/2$ or $N_{-X}(t) \ge k/2$, and the vector $-X$ is also isotropic and log-concave. So by Theorem 4 and Chebyshev's inequality we get
$$P(X_k^* \ge t) \le \Big(\frac{2}{kt^2}\Big)^p\big(E(t^2N_X(t))^p + E(t^2N_{-X}(t))^p\big) \le 2\Big(\frac{2(Cp)^2}{kt^2}\Big)^p,$$
provided that $t \ge C\log(nt^2/p^2)$. So it is enough to take $p = \frac{1}{eC}t\sqrt{k}$ and notice that the restriction on $t$ follows from the assumption that $t \ge C\log(en/k)$.

As we already noticed, one of the main tools in the proof of Theorem 4 is the Paouris concentration of mass. One may, however, also go in the opposite direction and derive large deviations of the Euclidean norm of $X$ from our estimate of the moments of $N_X(t)$ and the observation that the distribution of $UX$ is again log-concave and isotropic for any rotation $U$. More precisely, the following statement holds.

Proposition 5. Suppose that $X$ is a random vector in $\mathbb{R}^n$ such that for some constants $A_1, A_2 < \infty$ and any $U \in O(n)$,
$$E(t^2N_{UX}(t))^l \le (A_1l)^{2l} \quad \text{for } t \ge A_2,\ l \le n.$$
Then
$$P(|X| \ge t\sqrt{n}) \le \exp\Big(-\frac{t\sqrt{n}}{CA_1}\Big) \quad \text{for } t \ge \max\{CA_1, A_2\}.$$

Proof. Let us fix $t \ge A_2$. Hölder's inequality gives, for any $U_1, \ldots, U_l \in O(n)$ and $l \le n$,
$$E\prod_{i=1}^l N_{U_iX}(t) \le \prod_{i=1}^l\big(EN_{U_iX}(t)^l\big)^{1/l} \le \Big(\frac{A_1l}{t}\Big)^{2l}.$$
Now let $U_1, \ldots, U_l$ be independent random rotations in $O(n)$ distributed according to the Haar measure; then for $l \le n$,
$$\Big(\frac{A_1l}{t}\Big)^{2l} \ge E_XE_U\prod_{i=1}^l N_{U_iX}(t) = E_X\big(E_{U_1}N_{U_1X}(t)\big)^l = E_X\big(nP_Y(\langle X, Y\rangle \ge t)\big)^l = n^lE_XP_Y(|X|Y_1 \ge t)^l,$$
where $Y$ is a random vector uniformly distributed on $S^{n-1}$. Since $Y_1$ is symmetric, $EY_1^2 = 1/n$ and $EY_1^4 \le C/n^2$, we get by the Paley-Zygmund inequality that $P(Y_1^2 \ge \frac{1}{4n}) \ge \frac{1}{C_1}$, which gives
$$P(|X| \ge 2t\sqrt{n}) \le E_X\big(C_1P_Y(|X|Y_1 \ge t)\big)^l \le C_1^l\Big(\frac{A_1^2l^2}{t^2n}\Big)^l.$$
To conclude the proof it is enough to take $l = \lfloor\frac{1}{eC_1A_1}t\sqrt{n}\rfloor$.

3 Concentration of $\ell_r$ norms

The aim of this section is to derive Paouris-type estimates for the concentration of $\|X\|_r = (\sum_{i=1}^n |X_i|^r)^{1/r}$. We start by presenting two simple examples.

Example 1. Let the coordinates of $X$ be independent symmetric exponential r.v.'s with variance one. Then
$$(E\|X\|_r^r)^{1/r} = (nE|X_1|^r)^{1/r} \ge \frac{1}{C}rn^{1/r} \quad \text{for } r \in [1, \infty),$$
$$E\|X\|_\infty \ge \frac{1}{C}\log n$$
and
$$(E\|X\|_r^p)^{1/p} \ge (E|X_1|^p)^{1/p} \ge \frac{p}{C} \quad \text{for } p \ge 2,\ r \ge 1.$$

It is also known that in the independent exponential case weak and strong moments are comparable [8]; hence for $r \ge 2$,
$$(E\|X\|_r^p)^{1/p} \le (E\|X\|_r^2)^{1/2} + C\sup_{\|a\|_{r'} \le 1}\Big(E\Big|\sum_i a_iX_i\Big|^p\Big)^{1/p} \le (E\|X\|_r^2)^{1/2} + Cp\sup_{\|a\|_{r'} \le 1}\Big(E\Big|\sum_i a_iX_i\Big|^2\Big)^{1/2} \le (E\|X\|_r^2)^{1/2} + Cp,$$
where $r'$ denotes the conjugate exponent. Therefore
$$(E\|X\|_r^p)^{1/p} \sim (E\|X\|_r^2)^{1/2} \sim rn^{1/r} \quad \text{for } 2 \le p \le rn^{1/r} \text{ and } n \ge C^r.$$

Example 2. For $1 \le r \le 2$, let $X$ be an isotropic random vector such that $Y = (X_1 + \ldots + X_n)/\sqrt{n}$ has the exponential distribution with variance one. Then by Hölder's inequality $\|X\|_r \ge n^{1/r-1}|X_1 + \ldots + X_n| = n^{1/r-1/2}|Y|$, and
$$(E\|X\|_r^p)^{1/p} \ge n^{1/r-1/2}(E|Y|^p)^{1/p} \ge \frac{1}{C}n^{1/r-1/2}p \quad \text{for } p \ge 2,\ 1 \le r \le 2.$$

The examples above show that the best we can hope for is
$$(E\|X\|_r^p)^{1/p} \le C\big(n^{1/r} + n^{1/r-1/2}p\big) \quad \text{for } p \ge 2,\ 1 \le r \le 2, \tag{2}$$
$$(E\|X\|_r^p)^{1/p} \le C\big(rn^{1/r} + p\big) \quad \text{for } p \ge 2,\ r \in [2, \infty) \tag{3}$$
and
$$(E\|X\|_\infty^p)^{1/p} \le C(\log n + p) \quad \text{for } p \ge 2. \tag{4}$$
Or, in terms of tails,
$$P(\|X\|_r \ge t) \le \exp\Big(-\frac{1}{C}tn^{1/2-1/r}\Big) \quad \text{for } t \ge Cn^{1/r},\ r \in [1, 2], \tag{5}$$
$$P(\|X\|_r \ge t) \le \exp\Big(-\frac{1}{C}t\Big) \quad \text{for } t \ge Crn^{1/r},\ r \in [2, \infty) \tag{6}$$
and
$$P(\|X\|_\infty \ge t) \le \exp\Big(-\frac{1}{C}t\Big) \quad \text{for } t \ge C\log n. \tag{7}$$
The case $r \in [1, 2]$ is a simple consequence of the Paouris theorem.
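The growth rate $(E\|X\|_r^r)^{1/r} \asymp rn^{1/r}$ of Example 1 can be made explicit: for a variance-one symmetric exponential variable, $E|X_1|^r = \Gamma(r+1)/2^{r/2}$, so the $r$-th moment of the norm can be evaluated in closed form. A small sketch (not from the paper; the specific values of $n$ and $r$ are an arbitrary toy choice):

```python
from math import gamma

# Sketch, not from the paper: for iid symmetric exponential coordinates with
# variance 1 (Laplace with scale 1/sqrt(2)), E|X_1|^r = Gamma(r+1) / 2^(r/2),
# hence (E||X||_r^r)^(1/r) = (n * Gamma(r+1) / 2^(r/2))^(1/r), which is of
# order r * n^(1/r) by Stirling's formula.
n = 10_000
ratios = {}
for r in (3, 5, 8):
    moment = (n * gamma(r + 1) / 2 ** (r / 2)) ** (1 / r)
    ratios[r] = moment / (r * n ** (1 / r))
print(ratios)  # ratios stay bounded between absolute constants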

Proposition 6. Estimates (2) and (5) hold for all isotropic log-concave vectors $X$.

Proof. We have $\|X\|_r \le n^{1/r-1/2}\|X\|_2$ by Hölder's inequality, hence (2), and therefore also (5), immediately follows from the Paouris result.

The case $r = \infty$ is also very simple.

Proposition 7. Estimates (4) and (7) hold for all isotropic log-concave vectors $X$.

Proof. We have $P(\|X\|_\infty \ge t) \le \sum_{i=1}^n P(|X_i| \ge t) \le n\exp(-t/C)$.

What is left is the case $2 < r < \infty$, where we would like to obtain (6) and (3). We almost get it, except that the constants explode when $r$ approaches 2.

Theorem 8. For any $\delta > 0$ there exist constants $C_1(\delta), C_2(\delta) \le C(1 + \delta^{-1/2})$ such that for any $r \ge 2 + \delta$,
$$P(\|X\|_r \ge t) \le \exp\Big(-\frac{1}{C_1(\delta)}t\Big) \quad \text{for } t \ge C_1(\delta)rn^{1/r}$$
and
$$(E\|X\|_r^p)^{1/p} \le C_2(\delta)\big(rn^{1/r} + p\big) \quad \text{for } p \ge 2.$$

The proof of Theorem 8 is based on the following slightly more precise estimate.

Proposition 9. For $r > 2$ we have
$$P(\|X\|_r \ge t) \le \exp\Big(-\frac{1}{C}\Big(\frac{r-2}{r}\Big)^{1/r}t\Big) \quad \text{for } t \ge C\Big(rn^{1/r} + \Big(\frac{r}{r-2}\Big)^{1/r}\log n\Big),$$
or, in terms of moments,
$$(E\|X\|_r^p)^{1/p} \le C\Big(rn^{1/r} + \Big(\frac{r}{r-2}\Big)^{1/r}(\log n + p)\Big) \quad \text{for } p \ge 2.$$

Proof. Let $s = \lfloor\log_2 n\rfloor$. We have
$$\|X\|_r^r = \sum_{i=1}^n (X_i^*)^r \le \sum_{k=0}^{s}2^k(X_{2^k}^*)^r.$$
Theorem 3 yields
$$P\big(X_{2^k}^* \ge 3C\log(en2^{-k}) + t\big) \le \exp\Big(-\frac{1}{C}2^{k/2}t\Big) \quad \text{for } t > 0.$$
Observe that
$$\sum_{k=0}^{s}2^k\big(3C\log(en2^{-k})\big)^r \le C^rn\sum_{j \ge 1}j^r2^{-j} \le (Cr)^rn.$$
Thus, for $t_0, \ldots, t_s \ge 0$, we get
$$P\Big(\|X\|_r \ge C\Big(rn^{1/r} + \Big(\sum_{k=0}^s t_k^r\Big)^{1/r}\Big)\Big) \le \sum_{k=0}^s P(Y_k \ge t_k),$$
where
$$Y_k := 2^{k/r}\big(X_{2^k}^* - 3C\log(en2^{-k})\big)_+.$$
Hence, by the tail estimate above,
$$P(Y_k \ge t) \le \exp\Big(-\frac{1}{C}2^{k(1/2-1/r)}t\Big) \quad \text{for } t > 0. \tag{8}$$
Fix $t > 0$ and choose $t_k = 2^{-k(r-2)/(2r)}t$, so that $2^{k(1/2-1/r)}t_k = t$ for every $k$. Then
$$\sum_{k=0}^s t_k^r = t^r\sum_{k=0}^s 2^{-k(r-2)/2} \le t^r\big(1 - 2^{-(r-2)/2}\big)^{-1} \le Ct^r\frac{r}{r-2},$$
so by (8) we get
$$P\Big(\|X\|_r \ge C\Big(rn^{1/r} + \Big(\frac{r}{r-2}\Big)^{1/r}t\Big)\Big) \le (\log_2 n + 1)\exp\Big(-\frac{1}{C}t\Big),$$
and Proposition 9 follows.

Proof of Theorem 8. Observe that $(\frac{r}{r-2})^{1/r} \le C(1 + \delta^{-1/2})$ for $r \ge 2 + \delta$ and that $\log n \le rn^{1/r}$, and apply Proposition 9.
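The dyadic estimate that starts the proof above, $\|x\|_r^r \le \sum_{k=0}^{s}2^k(x_{2^k}^*)^r$, is elementary: group the coordinates of the nonincreasing rearrangement into blocks $[2^k, 2^{k+1})$. A quick numerical check (a sketch, not from the paper; the sample distribution is an arbitrary choice):

```python
import numpy as np

# Sketch, not from the paper: check the dyadic estimate
#   ||x||_r^r = sum_i (x*_i)^r <= sum_{k=0}^{s} 2^k (x*_{2^k})^r,
# where x*_1 >= x*_2 >= ... is the nonincreasing rearrangement of |x_i|
# and s = floor(log2(n)); each block [2^k, 2^{k+1}) has 2^k indices, and
# on it (x*_i)^r is dominated by (x*_{2^k})^r.
rng = np.random.default_rng(3)
n, r = 1000, 3.0
x = rng.laplace(size=n)
xs = np.sort(np.abs(x))[::-1]  # nonincreasing rearrangement

lhs = np.sum(xs ** r)
s = int(np.log2(n))
rhs = sum(2 ** k * xs[2 ** k - 1] ** r for k in range(s + 1))
print(lhs <= rhs)
```

The comparison holds for every sample, since it is a deterministic inequality about rearrangements.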

4 Proof of Theorem 4

Our crucial tool will be the following result.

Proposition 10. Let $X$ be an isotropic log-concave $n$-dimensional random vector and $A = \{X \in K\}$, where $K$ is a convex set in $\mathbb{R}^n$ such that $0 < P(A) \le 1/e$. Then
$$\sum_{i=1}^n P(A \cap \{X_i \ge t\}) \le C_1P(A)\Big(\frac{1}{t^2}\log^2 P(A) + ne^{-t/C_1}\Big) \quad \text{for } t \ge C_1. \tag{9}$$
Moreover, for $1 \le u \le t/C_2$,
$$\#\big\{i \le n \colon P(A \cap \{X_i \ge t\}) \ge e^{-u}P(A)\big\} \le C_2\frac{u^2}{t^2}\log^2 P(A). \tag{10}$$

Proof. Let $Y$ be a random vector distributed as the vector $X$ conditioned on the set $A$, that is,
$$P(Y \in B) = \frac{P(A \cap \{X \in B\})}{P(A)} = \frac{P(X \in B \cap K)}{P(X \in K)}.$$
Notice that, in particular, for any set $B$, $P(X \in B) \ge P(A)P(Y \in B)$. The vector $Y$ is log-concave, but no longer isotropic. Since it is only a matter of a permutation of coordinates, we may assume that $EY_1^2 \ge EY_2^2 \ge \ldots \ge EY_n^2$. For $\alpha > 0$ let $m = m(\alpha) = \#\{i \colon EY_i^2 \ge \alpha\}$. We have $EY_1^2, \ldots, EY_m^2 \ge \alpha$, hence by the Paley-Zygmund inequality,
$$P\Big(\sum_{i=1}^m Y_i^2 \ge \frac12\alpha m\Big) \ge P\Big(\sum_{i=1}^m Y_i^2 \ge \frac12E\sum_{i=1}^m Y_i^2\Big) \ge \frac14\frac{\big(E\sum_{i=1}^m Y_i^2\big)^2}{E\big(\sum_{i=1}^m Y_i^2\big)^2} \ge \frac{1}{C}.$$
This implies that
$$P\Big(\sum_{i=1}^m X_i^2 \ge \frac12\alpha m\Big) \ge \frac{1}{C}P(A).$$
However, by the result of Paouris,
$$P\Big(\sum_{i=1}^m X_i^2 \ge \frac12\alpha m\Big) \le \exp\Big(-\frac{1}{C_3}\sqrt{m\alpha}\Big) \quad \text{for } \alpha \ge C_3.$$

So for $\alpha \ge C_3$, $\exp(-\frac{1}{C_3}\sqrt{m\alpha}) \ge P(A)/C$, and we get that
$$m(\alpha) = \#\{i \colon EY_i^2 \ge \alpha\} \le \frac{C_4}{\alpha}\log^2 P(A) \quad \text{for } \alpha \ge C_3. \tag{11}$$
We have
$$\frac{P(A \cap \{X_i \ge t\})}{P(A)} = P(Y_i \ge t) \le \exp\Big(1 - \frac{t}{C(EY_i^2)^{1/2}}\Big),$$
and (10) follows by (11).

Take $t \ge C_3$ and let $k_0$ be the nonnegative integer such that $2^{-k_0}t \ge C_3 \ge 2^{-k_0-1}t$. Define
$$I_0 = \{i \colon EY_i^2 \ge t^2\}, \quad I_{k_0+1} = \{i \colon EY_i^2 < 4^{-k_0}t^2\}$$
and
$$I_j = \{i \colon 4^{-j}t^2 \le EY_i^2 < 4^{1-j}t^2\}, \quad j = 1, 2, \ldots, k_0.$$
By (11) we get
$$\#I_j \le C_4\frac{4^j}{t^2}\log^2 P(A) \quad \text{for } j = 0, 1, \ldots, k_0,$$
and obviously $\#I_{k_0+1} \le n$. Moreover, for $i \in I_j$, $j \le k_0$,
$$P(Y_i \ge t) \le \exp\Big(1 - \frac{t}{C(EY_i^2)^{1/2}}\Big) \le \exp\Big(1 - \frac{1}{C}2^{j-1}\Big).$$
Thus, since $(EY_i^2)^{1/2} \le 2C_3$ for $i \in I_{k_0+1}$,
$$\sum_{i=1}^n P(Y_i \ge t) = \sum_{j=0}^{k_0+1}\sum_{i \in I_j}P(Y_i \ge t) \le e\,\#I_0 + e\sum_{j=1}^{k_0}\#I_j\exp\Big(-\frac{1}{C}2^{j-1}\Big) + ene^{-t/C}$$
$$\le \frac{C_4}{t^2}\log^2 P(A)\Big(e + e\sum_{j=1}^{\infty}4^j\exp\Big(-\frac{1}{C}2^{j-1}\Big)\Big) + ene^{-t/C} \le C_1\Big(\frac{1}{t^2}\log^2 P(A) + ne^{-t/C_1}\Big).$$
To finish the proof of (9) it is enough to observe that $P(A \cap \{X_i \ge t\}) = P(A)P(Y_i \ge t)$.

The following two examples show that estimate (9) is close to optimal.

Example 1. Take $X_1, X_2, \ldots, X_n$ to be independent symmetric exponential random variables with variance 1 and $A = \{X_1 \ge 2\}$. Then $P(A) = \frac12e^{-2\sqrt{2}}$ and
$$\sum_{i=2}^n P(A \cap \{X_i \ge t\}) = P(A)\sum_{i=2}^n P(X_i \ge t) = \frac{n-1}{2}P(A)\exp(-\sqrt{2}t),$$
therefore the factor $ne^{-t/C_1}$ in (9) is necessary.

Example 2. Take $A = \{X_1 \ge t, \ldots, X_k \ge t\}$; then $\sum_{i=1}^n P(A \cap \{X_i \ge t\}) \ge kP(A)$. So an improvement of the factor $\frac{1}{t^2}\log^2 P(A)$ in (9) would imply, in particular, a better estimate for $P(X_1 \ge t, \ldots, X_k \ge t)$ than $\exp(-\frac{1}{C}t\sqrt{k})$, and we do not know if such a bound is possible to obtain.

Proof of Theorem 4. We have $N_X(t) \le n$, so the statement is obvious if $t\sqrt{n} \le Cp$; in the sequel we will assume that $t\sqrt{n} \ge 10p$. Let $C_1$ and $C_2$ be as in Proposition 10; increasing $C_i$ if necessary, we may assume that $P(X_1 \ge t) \le e^{-t/C_i}$ for $t \ge C_i$, $i = 1, 2$. Let us fix $p \ge 1$ and $t \ge C\log\frac{nt^2}{p^2}$; then $t \ge \max\{C_1, 4C_2\}$ and $t^2ne^{-t/C_1} \le p^2$ if $C$ is large enough. Let $l$ be a positive integer such that $p \le l \le 2p$ and $l = 2^{k_1}$ for some integer $k_1$. Since $(EN_X(t)^p)^{1/p} \le (EN_X(t)^l)^{1/l}$, it is enough to show that $E(t^2N_X(t))^l \le (Cl)^{2l}$. Recall that, by our assumption on $p$, we have $t\sqrt{n} \ge 5l$.

To shorten the notation let
$$B_{i_1,\ldots,i_s} = \{X_{i_1} \ge t, \ldots, X_{i_s} \ge t\} \quad \text{and} \quad B_\emptyset = \Omega.$$
Define
$$m(l) := EN_X(t)^l = E\Big(\sum_{i=1}^n 1_{\{X_i \ge t\}}\Big)^l = \sum_{i_1,\ldots,i_l=1}^n P(B_{i_1,\ldots,i_l});$$
we need to show that
$$m(l) \le \Big(\frac{Cl}{t}\Big)^{2l}. \tag{12}$$

We divide the sum in $m(l)$ into several parts. Let $j_1 \ge 2$ be the integer such that
$$2^{j_1-2} \le \log\frac{nt^2}{l^2} < 2^{j_1-1}.$$
We set
$$I_0 = \big\{(i_1, \ldots, i_l) \in \{1, \ldots, n\}^l \colon P(B_{i_1,\ldots,i_l}) > e^{-l}\big\},$$
$$I_j = \big\{(i_1, \ldots, i_l) \in \{1, \ldots, n\}^l \colon P(B_{i_1,\ldots,i_l}) \in (e^{-2^jl}, e^{-2^{j-1}l}]\big\} \quad \text{for } 0 < j < j_1$$
and
$$I_{j_1} = \big\{(i_1, \ldots, i_l) \in \{1, \ldots, n\}^l \colon P(B_{i_1,\ldots,i_l}) \le e^{-2^{j_1-1}l}\big\}.$$
Since $\{1, \ldots, n\}^l = \bigcup_{j=0}^{j_1}I_j$, we get $m(l) = \sum_{j=0}^{j_1}m_j(l)$, where
$$m_j(l) := \sum_{(i_1,\ldots,i_l) \in I_j}P(B_{i_1,\ldots,i_l}) \quad \text{for } 0 \le j \le j_1.$$
It is easy to bound $m_{j_1}(l)$: since $\#I_{j_1} \le n^l$, we have
$$m_{j_1}(l) \le n^le^{-2^{j_1-1}l} \le n^l\Big(\frac{l^2}{nt^2}\Big)^l = \Big(\frac{l}{t}\Big)^{2l}.$$
To estimate $m_0(l)$ we first define, for $I \subset \{1, \ldots, n\}^l$ and $1 \le s \le l$,
$$P_s(I) = \{(i_1, \ldots, i_s) \colon (i_1, \ldots, i_l) \in I \text{ for some } i_{s+1}, \ldots, i_l\}.$$
By Proposition 10 we get, for $s = 1, \ldots, l-1$,
$$\sum_{(i_1,\ldots,i_{s+1}) \in P_{s+1}(I_0)}P(B_{i_1,\ldots,i_{s+1}}) \le \sum_{(i_1,\ldots,i_s) \in P_s(I_0)}\sum_{i_{s+1}=1}^n P\big(B_{i_1,\ldots,i_s} \cap \{X_{i_{s+1}} \ge t\}\big)$$
$$\le C_1\sum_{(i_1,\ldots,i_s) \in P_s(I_0)}P(B_{i_1,\ldots,i_s})\Big(\frac{1}{t^2}\log^2 P(B_{i_1,\ldots,i_s}) + ne^{-t/C_1}\Big).$$
Observe that $P(B_{i_1,\ldots,i_s}) > e^{-l}$ for $(i_1, \ldots, i_s) \in P_s(I_0)$, and recall that $t^2ne^{-t/C_1} \le p^2 \le l^2$, hence
$$\sum_{(i_1,\ldots,i_{s+1}) \in P_{s+1}(I_0)}P(B_{i_1,\ldots,i_{s+1}}) \le 5C_1\frac{l^2}{t^2}\sum_{(i_1,\ldots,i_s) \in P_s(I_0)}P(B_{i_1,\ldots,i_s}).$$
So, by an easy induction, we obtain
$$m_0(l) = \sum_{(i_1,\ldots,i_l) \in I_0}P(B_{i_1,\ldots,i_l}) \le \Big(5C_1\frac{l^2}{t^2}\Big)^{l-1}\sum_{i_1 \in P_1(I_0)}P(B_{i_1}) \le \Big(5C_1\frac{l^2}{t^2}\Big)^{l-1}ne^{-t/C_1} \le \Big(\frac{Cl}{t}\Big)^{2l}.$$

Now comes the most involved part of the proof: estimating $m_j(l)$ for $0 < j < j_1$. It is based on suitable bounds for $\#I_j$. We will need the following simple combinatorial lemma.

Lemma 11. Let $l_0 \ge l_1 \ge \ldots \ge l_s$ be a fixed sequence of positive integers and
$$\mathcal{F} = \Big\{f \colon \{1, 2, \ldots, l_0\} \to \{0, 1, 2, \ldots, s\} \colon \#\{r \colon f(r) \ge i\} \le l_i \text{ for } 1 \le i \le s\Big\}.$$
Then
$$\#\mathcal{F} \le \prod_{i=1}^{s}\Big(\frac{el_{i-1}}{l_i}\Big)^{l_i}.$$

Proof of Lemma 11. Notice that any function $f \colon \{1, 2, \ldots, l_0\} \to \{0, 1, 2, \ldots, s\}$ is determined by the sets $A_i = \{r \colon f(r) \ge i\}$ for $i = 0, 1, \ldots, s$. Take $f \in \mathcal{F}$; obviously $A_0 = \{1, \ldots, l_0\}$. If the set $A_{i-1}$, of cardinality $a_{i-1} \le l_{i-1}$, is already chosen, then the set $A_i \subset A_{i-1}$, of cardinality at most $l_i$, may be chosen in
$$\binom{a_{i-1}}{0} + \binom{a_{i-1}}{1} + \ldots + \binom{a_{i-1}}{l_i} \le \binom{l_{i-1}}{0} + \binom{l_{i-1}}{1} + \ldots + \binom{l_{i-1}}{l_i} \le \Big(\frac{el_{i-1}}{l_i}\Big)^{l_i}$$
ways.
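For small parameters, the bound of Lemma 11 can be verified by brute-force enumeration. A sketch (not from the paper; the sequence $l_0, l_1, l_2$ below is an arbitrary toy choice):

```python
from itertools import product
from math import e

# Sketch, not from the paper: count all f: {1,...,l0} -> {0,1,...,s} with
# #{r : f(r) >= i} <= l_i for every 1 <= i <= s, and compare the count with
# the bound prod_{i=1}^{s} (e * l_{i-1} / l_i)^{l_i} from Lemma 11.
l = [4, 2, 1]  # l_0 >= l_1 >= l_2, an arbitrary toy choice
l0, s = l[0], len(l) - 1

count = sum(
    1
    for f in product(range(s + 1), repeat=l0)
    if all(sum(1 for v in f if v >= i) <= l[i] for i in range(1, s + 1))
)

bound = 1.0
for i in range(1, s + 1):
    bound *= (e * l[i - 1] / l[i]) ** l[i]
print(count, bound)
```

For this choice the exact count is 27 while the lemma's bound is $(2e)^2 \cdot 2e = 8e^3 \approx 160.7$, so the estimate holds with room to spare, as expected from the crude binomial-sum step in its proof.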

so #{s: fs r} 2 j r l =: l r. 13 Observe that the above inequality holds also for r = j. We have l r 1 /l r = 2 and r 1 r=j+1 l r l so by Lemma 11 we get Put #F j r 1 r=j+1 elr 1 l r lr e 2l. Now fix f F j we will estimate the cardinality of the set We have I j f := {i 1,..., i l I j : f i1,...,i l = f}. n r := #{s {1,..., l}: fs = r} r = j, j + 1,..., r 1. n j + n j+1 +... + n r1 = l, moreover if i 1,..., i s 1 are fixed and fs = r < r 1 then s 2 and by the second part of Proposition 10 with u = 2 r+1 t/c 2 i s may tae at most 4C 2 2 2r t 2 log 2 PB i1,...,i s 1 4C 22 2r+j l 2 t 2 4C 2l 2 t 2 values. Thus #I j f n nr 1 r 1 1 r=j m nr r = n nr 1 4C2 l 2 t 2 l n r1 exp exp2r + j =: m r r 1 1 Observe that by previously derived estimate 13 we get n r l r = 2 j r l, r=j 2r + jn r. hence We also have r 1 1 r=j 2r + jn r 2 j+2 l r2 r C + 2 j 2 l. r=j n r1 2 j r 1 l 2C 2 2 j l t 1 lognt 2 /4l 2 2j 3 l, 15

where the last inequality holds since $t \ge C\log(nt^2/4l^2)$ and $C$ may be taken arbitrarily large. So we get that for any $f \in \mathcal{F}_j$,
$$\#I_j(f) \le \Big(\frac{Cl^2}{t^2}\Big)^{l}\Big(\frac{nt^2}{4l^2}\Big)^{n_{r_1}}\exp(2^{j-2}l) \le \Big(\frac{Cl^2}{t^2}\Big)^{l}\exp\Big(\frac38 2^jl\Big).$$
Hence
$$\#I_j \le \#\mathcal{F}_j\max_{f \in \mathcal{F}_j}\#I_j(f) \le \Big(\frac{Cl^2}{t^2}\Big)^{l}\exp\Big(\Big(2 + \frac38 2^j\Big)l\Big).$$
This shows that
$$m_j(l) = \sum_{(i_1,\ldots,i_l) \in I_j}P(B_{i_1,\ldots,i_l}) \le \#I_j\exp(-2^{j-1}l) \le \Big(\frac{Cl^2}{t^2}\Big)^{l}\exp(-2^{j-3}l).$$
Therefore
$$m(l) = m_0(l) + m_{j_1}(l) + \sum_{j=1}^{j_1-1}m_j(l) \le \Big(\frac{l}{t}\Big)^{2l}\Big(C^l + 1 + C^l\sum_{j=1}^{\infty}\exp(-2^{j-3}l)\Big) \le \Big(\frac{Cl}{t}\Big)^{2l},$$
and (12) holds.

Acknowledgments. This work was done while the author was taking part in the Thematic Program on Asymptotic Geometric Analysis at the Fields Institute in Toronto. The author would like to thank Radosław Adamczak and Nicole Tomczak-Jaegermann for their helpful comments.

References

[1] R. Adamczak, R. Latała, A. Litvak, A. Pajor and N. Tomczak-Jaegermann, in preparation.

[2] S.G. Bobkov and F.L. Nazarov, On convex bodies and log-concave probability measures with unconditional basis, in: Geometric aspects of functional analysis, 53-69, Lecture Notes in Math. 1807, Springer, Berlin, 2003.

[3] C. Borell, Convex measures on locally convex spaces, Ark. Mat. 12 (1974), 239-252.

[4] O. Guédon and E. Milman, Interpolating thin-shell and sharp large-deviation estimates for isotropic log-concave measures, preprint, http://arxiv.org/abs/1011.0943.

[5] N. Huet, Spectral gap for some invariant log-concave probability measures, preprint, http://arxiv.org/abs/1003.4839.

[6] R. Kannan, L. Lovász and M. Simonovits, Isoperimetric problems for convex bodies and a localization lemma, Discrete Comput. Geom. 13 (1995), 541-559.

[7] B. Klartag, A Berry-Esseen type inequality for convex bodies with an unconditional basis, Probab. Theory Related Fields 145 (2009), 1-33.

[8] R. Latała, Tail and moment estimates for sums of independent random vectors with logarithmically concave tails, Studia Math. 118 (1996), 301-304.

[9] E. Milman, On the role of convexity in isoperimetry, spectral gap and concentration, Invent. Math. 177 (2009), 1-43.

[10] G. Paouris, Concentration of mass on convex bodies, Geom. Funct. Anal. 16 (2006), 1021-1049.