Statistique en grande dimension (High-dimensional statistics)
Lecturer: Dalalyan A. — Scribe: Thomas F.-X.
First lecture

1 Introduction

1.1 Classical (parametric) statistics

Parametric statistics: $Z_1, \dots, Z_n$ iid with common distribution $P_\theta$.
We make the assumption $\theta \in \Theta \subset \mathbb{R}^d$.
Known: $Z_1, \dots, Z_n$ and $\Theta$. Unknown: $\theta$ (or $P_\theta$).
Important assumption: $d$ is fixed and $n \to +\infty$.

In this setting, the maximum-likelihood (MV) estimator is known to be asymptotically the most efficient (and consistent): $\hat\theta^{MV}$ satisfies, as $n \to +\infty$,
$$ E_P\big[\|\hat\theta^{MV} - \theta\|^2\big] = \frac{C + o(1)}{n}. \qquad (1) $$
Thus $\theta$ is estimated at the rate $1/\sqrt{n}$ (the parametric rate).

Observation. If $d = d_n$ with $\lim_{n \to +\infty} d_n = +\infty$, the whole parametric theory becomes unusable. Moreover, the MV estimator is no longer the best estimator!

1.2 Nonparametric statistics

We observe $Z_1, \dots, Z_n$ iid from an unknown distribution $P$ such that $P \in \{P_\theta, \theta \in \Theta\}$, where $\Theta$ is either infinite-dimensional, or of finite dimension $d = d_n$ that tends to $+\infty$ with the sample size.

Examples:
$$ \Theta = \{ f : [0,1] \to \mathbb{R},\ f \text{ Lipschitz with constant } L \} = \{ f : [0,1] \to \mathbb{R} : \forall x, y,\ |f(x) - f(y)| \le L|x - y| \} \qquad (2) $$
$$ \Theta = \Big\{ \theta = (\theta_1, \theta_2, \dots) : \sum_{j=1}^{\infty} \theta_j^2 < +\infty \Big\} = \ell^2 \qquad (3) $$

General approach: we approximate $\Theta$ by an increasing sequence $\Theta_k$ of subsets of $\Theta$ such that $\Theta_k$ is of
dimension $d_k$. Proceeding as if $\theta$ belonged to $\Theta_k$ (which is not necessarily the case), we use a parametric method to define an estimator $\hat\theta_k$ of $\theta$. This yields a family of estimators $\{\hat\theta_k\}_k$.

Main question. How should $k$ be chosen to minimize the risk of $\hat\theta_k$?
If $k$ is small, we face an underfitting phenomenon (sous-apprentissage).
Conversely, if $k$ is large, an overfitting phenomenon (sur-apprentissage).

1.3 Principal models in non-parametric statistics

Density model. We have $X_1, \dots, X_n$ iid with a density $f$ defined on $\mathbb{R}^p$:
$$ P(X_1 \in A) = \int_A f(x)\, dx. $$
The assumptions imposed on $f$ are very weak, as opposed to the parametric setting. For instance, a typical assumption in the parametric setting is that $f$ is the Gaussian density
$$ f(x) = \frac{1}{(2\pi)^{p/2} \det(\Sigma)^{1/2}} \exp\Big[ -\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu) \Big], $$
whereas a common assumption on $f$ in the nonparametric framework is: $f$ is smooth, say, twice continuously differentiable with a bounded second derivative.

Regression model. We observe $Z_i = (X_i, Y_i)$, with input $X_i$, output $Y_i$ and error $\varepsilon_i$:
$$ Y_i = f(X_i) + \varepsilon_i. $$
The function $f$ is called the regression function. Here, the goal is to estimate $f$ without assuming any parametric structure on it.

Practical examples. Marketing. Each $i$ represents a consumer, and $X_i$ are the features of that consumer. A typical question is how to estimate the different relevant groups of consumers, and a typical answer is to use clustering algorithms. We assume that $X_1, \dots, X_n$ are iid with density $f$. Then we estimate $f$ in a non-parametric manner by $\hat f$. The clusters are defined as the regions around the local maxima of the function $\hat f$.

1.4 Machine Learning

Essentially the same as non-parametric statistics. The main focus here is on the algorithms (rather than on the models), their statistical performance and their computational complexity.
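The density model and the clustering example above can be illustrated with a minimal sketch, assuming a one-dimensional Gaussian kernel density estimator and a synthetic two-group sample (the mixture means at ±2, the bandwidth h = 0.3 and the mode-height threshold are illustrative choices, not from the lecture):

```python
import numpy as np

def kde(x, sample, h):
    """Gaussian kernel density estimate: f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h)."""
    u = (x[None, :] - sample[:, None]) / h          # shape (n, len(x))
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)  # standard Gaussian kernel
    return k.mean(axis=0) / h

rng = np.random.default_rng(0)
# synthetic sample: two well-separated "consumer groups" (a Gaussian mixture)
sample = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(2.0, 0.5, 500)])

grid = np.linspace(-4.0, 4.0, 401)
f_hat = kde(grid, sample, h=0.3)

# clusters = regions around the local maxima of f_hat;
# keep only clearly visible modes, to ignore tiny bumps in the tails
interior = (f_hat[1:-1] > f_hat[:-2]) & (f_hat[1:-1] > f_hat[2:])
modes = grid[1:-1][interior & (f_hat[1:-1] > 0.1)]
# typically recovers one mode near -2 and one near +2
```

Each mode of $\hat f$ then serves as a cluster center, in the spirit of the marketing example.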
2 Main concepts and notations

Observations: $Z_1, \dots, Z_n$ iid $\sim P$.
Unsupervised learning: $Z_i = X_i$.
Supervised learning: $Z_i = (X_i, Y_i)$, where $X_i$ is an example (or a feature) and $Y_i$ a label.

Aim. To learn the distribution $P$, or some properties of it.

Prediction. We assume that a new feature $X$ (from the same probability distribution as $X_1, \dots, X_n$) is observed. The aim is to predict the label associated to $X$. To measure the quality of a prediction, we need a loss function $\ell(y, \tilde y)$ ($y$ is the true label, $\tilde y$ is the predicted label). In practice, both $y$ and $\tilde y$ are random variables; furthermore, $y$ and its distribution are unknown, so $\ell$ is hard to compute!

Risk function. This is the expectation of the loss.

Definition 1. Assume that $Z_i = (X_i, Y_i) \in \mathcal{X} \times \mathcal{Y}$ and $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ is a loss function. A predictor, or prediction algorithm, is any mapping
$$ \hat g : (\mathcal{X} \times \mathcal{Y})^n \to \mathcal{Y}^{\mathcal{X}}. $$
The risk of a prediction function $g \in \mathcal{Y}^{\mathcal{X}}$ is
$$ R_P[g] = E_P\big[\ell(Y, g(X))\big] = \int_{\mathcal{X} \times \mathcal{Y}} \ell(y, g(x))\, dP(x, y). $$
The risk of a predictor $\hat g$ is $R_P[\hat g]$, which is random since $\hat g$ depends on the data.

Examples:
1. Binary classification: $\mathcal{Y} = \{0, 1\}$, with any $\mathcal{X}$, and
$$ \ell(y, \tilde y) = \begin{cases} 0, & \text{if } y = \tilde y, \\ 1, & \text{otherwise} \end{cases} = \mathbb{1}(y \neq \tilde y) = (y - \tilde y)^2. $$
2. Least-squares regression: $\mathcal{Y} \subseteq \mathbb{R}$, with any $\mathcal{X}$, and $\ell(y, \tilde y) = (y - \tilde y)^2$.

3 Excess risk and Bayes predictor

We have $Z_i = (X_i, Y_i)$ and
$$ R_P[g] = \int_{\mathcal{X} \times \mathcal{Y}} \ell(y, g(x))\, P(dx, dy), \qquad P(dx, dy) = P_{Y|X}(dy \mid X = x)\, P_X(dx). $$

Definition 2. Given a loss function $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$, the Bayes predictor, or oracle, is the prediction function minimizing the risk:
$$ g^* \in \arg\min_{g \in \mathcal{Y}^{\mathcal{X}}} R_P[g]. $$
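As a small numerical illustration of these definitions (a sketch under an assumed toy model, not part of the lecture): with labels in $\{0,1\}$ the 0-1 loss and the squared loss coincide, and the empirical risk of a fixed prediction function approximates $R_P[g]$ by the law of large numbers. Taking $X \sim \mathcal{U}(0,1)$ and $P(Y = 1 \mid X = x) = x$, the risk of $g(x) = \mathbb{1}(x > 1/2)$ equals $\int_0^{1/2} x\,dx + \int_{1/2}^1 (1-x)\,dx = 1/4$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# assumed toy model: X ~ Uniform(0, 1) and P(Y = 1 | X = x) = x
X = rng.uniform(0.0, 1.0, n)
Y = (rng.uniform(0.0, 1.0, n) < X).astype(int)

# a fixed prediction function g(x) = 1(x > 1/2)
pred = (X > 0.5).astype(int)

risk_01 = (Y != pred).mean()          # empirical risk under the 0-1 loss
risk_sq = ((Y - pred) ** 2).mean()    # empirical risk under the squared loss
# on {0,1} labels the two losses coincide, and both approximate R_P[g] = 1/4
```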
Remark. In practice, $g^*$ is unavailable, since it depends on $P$, which is unknown. The ultimate goal is to do almost as well as the oracle. A predictor $\hat g_n$ will be considered a good one if
$$ \lim_{n \to +\infty} \underbrace{R_P[\hat g_n] - R_P[g^*]}_{\text{excess risk}} = 0. $$

Definition 3. We say that the predictor $\hat g_n$ is consistent (universally consistent) if (for all $P$) we have
$$ \lim_{n \to +\infty} E_P\big[R_P[\hat g_n]\big] - R_P[g^*] = 0. $$

Theorem 1.
1. Suppose that for every $x \in \mathcal{X}$, the infimum of $y \mapsto E_P[\ell(Y, y) \mid X = x]$ is attained. Then the function $g^*$ defined by
$$ g^*(x) \in \arg\min_{y \in \mathcal{Y}} E_P[\ell(Y, y) \mid X = x] $$
is a Bayes predictor.
2. In the case of binary classification, $\mathcal{Y} = \{0, 1\}$ and $\ell(y, \tilde y) = \mathbb{1}(y \neq \tilde y)$,
$$ g^*(x) = \mathbb{1}\big(\eta^*(x) > \tfrac{1}{2}\big), \quad \text{where } \eta^*(x) = P[Y = 1 \mid X = x]. $$
Furthermore, the excess risk can be computed by
$$ R_P[g] - R_P[g^*] = E_P\big[(g(X) - g^*(X))(1 - 2\eta^*(X))\big]. \qquad (4) $$
3. In the case of least-squares regression,
$$ g^*(x) = \eta^*(x), \quad \text{where } \eta^*(x) = E_P[Y \mid X = x]. $$
Furthermore, for any $\eta : \mathcal{X} \to \mathcal{Y}$, we have
$$ R_P[\eta] - R_P[\eta^*] = E_P\big[(\eta(X) - \eta^*(X))^2\big]. $$

Proof. 1. Let $g \in \mathcal{Y}^{\mathcal{X}}$ and let
$$ g^*(x) \in \arg\min_{y \in \mathcal{Y}} E_P[\ell(Y, y) \mid X = x]. $$
We have
$$ R_P[g] = E_P[\ell(Y, g(X))] = \int E_P[\ell(Y, g(x)) \mid X = x]\, P_X(dx) \ge \int E_P[\ell(Y, g^*(x)) \mid X = x]\, P_X(dx) = R_P[g^*]. $$
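The closed-form excess risk (4) can be checked by Monte Carlo in a toy model (assumed for illustration: $\eta^*(x) = x$ with $X \sim \mathcal{U}(0,1)$, and a deliberately suboptimal threshold at 0.7; the exact excess risk is then $\int_{1/2}^{7/10} (2x - 1)\,dx = 0.04$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# assumed toy model: X ~ Uniform(0, 1), eta*(x) = P(Y = 1 | X = x) = x
X = rng.uniform(0.0, 1.0, n)
eta_star = X
Y = (rng.uniform(0.0, 1.0, n) < eta_star).astype(int)

g_star = (eta_star > 0.5).astype(int)   # Bayes classifier: threshold at 1/2
g = (eta_star > 0.7).astype(int)        # suboptimal classifier: threshold at 0.7

# left-hand side of (4), estimated from the sample
excess = (Y != g).mean() - (Y != g_star).mean()
# right-hand side of (4): E[(g(X) - g*(X)) (1 - 2 eta*(X))]
rhs = ((g - g_star) * (1.0 - 2.0 * eta_star)).mean()
# both quantities should be close to the exact value 0.04
```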
2. Using the first assertion,
$$ g^*(x) \in \arg\min_{y \in \{0,1\}} P(Y \neq y \mid X = x) = \arg\min_{y \in \{0,1\}} \big(1 - P(Y = y \mid X = x)\big) = \arg\max_{y \in \{0,1\}} P(Y = y \mid X = x) = \arg\max_{y \in \{0,1\}} \big[\eta^*(x)\mathbb{1}(y = 1) + (1 - \eta^*(x))\mathbb{1}(y = 0)\big]. $$
Therefore,
$$ g^*(x) = \begin{cases} 0, & \text{if } P(Y = 1 \mid X = x) \le \frac{1}{2}, \\ 1, & \text{otherwise.} \end{cases} $$
To check (4), it suffices to remark that
$$ R_P[g] = E_P\big[(g(X) - Y)^2\big] = E_P[g(X)^2] + E_P[Y^2] - 2E_P[Y g(X)] = E_P[g(X)] + E_P[Y] - 2E_P\big[E_P[Y g(X) \mid X]\big] $$
(since $g(X), Y \in \{0,1\}$, we have $g(X)^2 = g(X)$ and $Y^2 = Y$)
$$ = E_P[g(X)] + E_P[Y] - 2E_P\big[g(X)\, E_P[Y \mid X]\big] = E_P[g(X)] + E_P[Y] - 2E_P\big[g(X)\eta^*(X)\big] = E_P\big[g(X)(1 - 2\eta^*(X))\big] + E_P[Y]. $$
Writing the same identity for $g^*$ and taking the difference of these two identities, we get the desired result.

3. In view of the first assertion of the theorem, we have
$$ g^*(x) \in \arg\min_{y \in \mathbb{R}} E_P\big[(Y - y)^2 \mid X = x\big] = \arg\min_{y \in \mathbb{R}} \varphi(y), \quad \text{where } \varphi(y) = E_P[Y^2 \mid X = x] - 2y\, E_P[Y \mid X = x] + y^2 $$
is a second-order polynomial. The minimization of such a polynomial is straightforward and leads to
$$ \arg\min_{y \in \mathbb{R}} \varphi(y) = E_P[Y \mid X = x]. $$
This shows that the Bayes predictor is equal to the regression function $\eta^*(x)$. The risk of an arbitrary prediction function $\eta$ is
$$ R_P[\eta] = E_P\big[(Y - \eta(X))^2\big] = E_P\Big[E_P\big[(Y - \eta(X))^2 \mid X\big]\Big] $$
$$ = E_P\Big[E_P\big[(Y - \eta^*(X))^2 \mid X\big] + 2E_P\big[(Y - \eta^*(X))(\eta^* - \eta)(X) \mid X\big] + (\eta^* - \eta)^2(X)\Big] = R_P[\eta^*] + 0 + E_P\big[(\eta^* - \eta)^2(X)\big], $$
where the cross-product term vanishes since
$$ E_P\big[(Y - \eta^*(X))(\eta^* - \eta)(X) \mid X\big] = (\eta^* - \eta)(X)\, E_P\big[Y - \eta^*(X) \mid X\big] = 0. $$
This completes the proof of the theorem.
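The risk decomposition in assertion 3 can likewise be checked numerically (an assumed toy model: $\eta^*(x) = x^2$, Gaussian noise of standard deviation 0.1, and the competitor $\eta(x) = x$, so that $E[(\eta - \eta^*)^2] = \int_0^1 (x - x^2)^2\,dx = 1/30$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

# assumed toy model: X ~ Uniform(0, 1), eta*(x) = E[Y | X = x] = x^2
X = rng.uniform(0.0, 1.0, n)
eta_star = X ** 2
Y = eta_star + rng.normal(0.0, 0.1, n)

eta = X                                   # an arbitrary competing prediction function

risk_eta = ((Y - eta) ** 2).mean()        # Monte Carlo estimate of R_P[eta]
risk_star = ((Y - eta_star) ** 2).mean()  # Monte Carlo estimate of R_P[eta*]
gap = ((eta - eta_star) ** 2).mean()      # E[(eta - eta*)^2], exactly 1/30 here

# the theorem: R_P[eta] - R_P[eta*] = E[(eta(X) - eta*(X))^2]
```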
3.1 Link between Binary Classification & Regression

Plug-in rule. We start by estimating $\eta^*(x)$ by $\hat\eta_n(x)$, and we define
$$ \hat g_n(x) = \mathbb{1}\big(\hat\eta_n(x) > \tfrac{1}{2}\big). $$
Question: how good is the plug-in rule $\hat g_n$?

Proposition 1. Let $\hat\eta$ be an estimator of the regression function $\eta^*$, and let $\hat g(x) = \mathbb{1}(\hat\eta(x) > \frac{1}{2})$. Then we have
$$ R_{class}[\hat g] - R_{class}[g^*] \le 2\big(R_{reg}[\hat\eta] - R_{reg}[\eta^*]\big)^{1/2}. $$

Proof. Let $\eta : \mathcal{X} \to \mathbb{R}$ be deterministic, let $g(x) = \mathbb{1}(\eta(x) > \frac{1}{2})$, and let us compute the excess risk of $g$. By (4), we have
$$ R_{class}[g] - R_{class}[g^*] = E_P\big[(g(X) - g^*(X))(1 - 2\eta^*(X))\big]. $$
Since $g$ and $g^*$ are both indicator functions and, therefore, take only the values 0 and 1, their difference is nonzero if and only if one of them is equal to 1 and the other one is equal to 0. This leads to
$$ R_{class}[g] - R_{class}[g^*] \le E_P\big[\mathbb{1}\big(\eta(X) \le 1/2 < \eta^*(X)\big)\,(2\eta^*(X) - 1)\big] + E_P\big[\mathbb{1}\big(\eta^*(X) \le 1/2 < \eta(X)\big)\,(1 - 2\eta^*(X))\big] = 2\, E_P\big[\mathbb{1}\big(1/2 \in [\eta^*(X), \eta(X)]\big)\,\big|\eta^*(X) - 1/2\big|\big], $$
where $[\eta^*(X), \eta(X)]$ denotes the interval with these two endpoints, in either order. If $\eta(X) \le 1/2$ and $\eta^*(X) > 1/2$ (or vice versa), then $|\eta^*(X) - 1/2| \le |\eta^*(X) - \eta(X)|$, and thus
$$ R_{class}[g] - R_{class}[g^*] \le 2\, E_P\big[\mathbb{1}\big(1/2 \in [\eta(X), \eta^*(X)]\big)\,\big|\eta^*(X) - \eta(X)\big|\big] \le 2\, E_P\big[\big|\eta(X) - \eta^*(X)\big|\big] \le 2\Big(E_P\big[(\eta(X) - \eta^*(X))^2\big]\Big)^{1/2} = 2\big(R_{reg}[\eta] - R_{reg}[\eta^*]\big)^{1/2}, $$
where the last inequality is the Cauchy-Schwarz inequality and the final equality uses the third assertion of Theorem 1. Since this inequality holds for every deterministic $\eta$, we get the desired property.
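Proposition 1 can be illustrated on the same kind of toy model (assumed for illustration: $\eta^*(x) = x$ with $X \sim \mathcal{U}(0,1)$, and a deliberately biased estimate $\hat\eta(x) = \min(x + 0.15,\, 1)$; both sides of the inequality are then computable by Monte Carlo):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# assumed toy model: X ~ Uniform(0, 1), eta*(x) = P(Y = 1 | X = x) = x
X = rng.uniform(0.0, 1.0, n)
eta_star = X
Y = (rng.uniform(0.0, 1.0, n) < eta_star).astype(int)

eta_hat = np.clip(X + 0.15, 0.0, 1.0)    # a deliberately biased estimate of eta*
g_hat = (eta_hat > 0.5).astype(int)      # plug-in classifier
g_star = (eta_star > 0.5).astype(int)    # Bayes classifier

excess_class = (Y != g_hat).mean() - (Y != g_star).mean()
# by Theorem 1 (assertion 3), the regression excess risk equals E[(eta_hat - eta*)^2]
excess_reg = ((eta_hat - eta_star) ** 2).mean()

# Proposition 1: excess_class <= 2 * sqrt(excess_reg)
```

Here the classification excess risk is about 0.0225, well below the bound $2\sqrt{0.02} \approx 0.28$: the plug-in bound is valid but can be loose.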