Chapter 7 for BST 695: Special Topics in Statistical Theory. Kui Zhang, 2011

Chapter 7 Methods of Finding Estimators

Section 7.1 Introduction

Definition 7.1.1 A point estimator is any function $W(\mathbf{X}) = W(X_1, X_2, \ldots, X_n)$ of a sample; that is, any statistic is a point estimator.

Notes:
estimator: a function of the sample $\mathbf{X} = (X_1, X_2, \ldots, X_n)$
estimate: the realized value (a number) of an estimator, evaluated at $\mathbf{x} = (x_1, x_2, \ldots, x_n)$

Section 7.2 Methods of Finding Estimators

7.2.1 Method of Moments (MME)

Notes:
The oldest method, dating back at least to Karl Pearson in the late 1800s.
The idea is simple; however, oftentimes the resulting estimators may still be improved.
Let $X_1, \ldots, X_n$ be iid from a pmf or pdf $f(x \mid \theta_1, \ldots, \theta_k)$. We have:

1st sample moment: $m_1 = \frac{1}{n}\sum_{i=1}^n X_i$
1st population moment: $\mu_1' = E X = \mu_1'(\theta_1, \ldots, \theta_k)$
$k$th sample moment: $m_k = \frac{1}{n}\sum_{i=1}^n X_i^k$
$k$th population moment: $\mu_k' = E X^k = \mu_k'(\theta_1, \ldots, \theta_k)$

To get the MME: equate the first $k$ sample moments to the corresponding $k$ population moments and solve these equations for $(\theta_1, \ldots, \theta_k)$ in terms of $(m_1, \ldots, m_k) = \left(\frac{1}{n}\sum_{i=1}^n X_i, \ldots, \frac{1}{n}\sum_{i=1}^n X_i^k\right)$.

Example 7.2.1 (Normal method of moments) Suppose $X_1, \ldots, X_n$ are iid from an $n(\theta, \sigma^2)$. In this case, $k = 2$, $\theta_1 = \theta$, and $\theta_2 = \sigma^2$.

Solution: Setting $\bar{X} = \theta$ and $\frac{1}{n}\sum_{i=1}^n X_i^2 = \theta^2 + \sigma^2$, we can get:
$$\hat{\theta} = \bar{X} \quad \text{and} \quad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2 = \frac{n-1}{n} S^2.$$
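To make the recipe concrete, here is a minimal numerical sketch of Example 7.2.1 (not part of the original notes); the simulated data, seed, and true parameter values are our own choices.

    import numpy as np

    # Simulate a sample whose true parameters are theta = 2, sigma^2 = 9.
    rng = np.random.default_rng(0)
    x = rng.normal(loc=2.0, scale=3.0, size=500)

    # Solve the two moment equations: xbar = theta, (1/n) sum x_i^2 = theta^2 + sigma^2.
    theta_hat = x.mean()
    sigma2_hat = np.mean(x**2) - x.mean()**2   # equals (1/n) sum (x_i - xbar)^2
    print(theta_hat, sigma2_hat)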
Example 7.2.2 (Binomial method of moments) Suppose $X_1, \ldots, X_n$ are iid from a binomial$(m, p)$ where both $m$ and $p$ are unknown. In this case, $k = 2$, $\theta_1 = m$, and $\theta_2 = p$.

Solution: From $\bar{X} = mp$ and $\frac{1}{n}\sum_{i=1}^n X_i^2 = mp(1-p) + m^2p^2$, we have:
$$\hat{m} = \frac{\bar{X}^2}{\bar{X} - \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2} \quad \text{and} \quad \hat{p} = \frac{\bar{X}}{\hat{m}}.$$

Note: The method of moments may give estimates that are outside the range of the parameters, as the sketch below illustrates.
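A sketch of the binomial case (Example 7.2.2); the data and seed are our own choices. It illustrates the note above: whenever the sample variance exceeds the sample mean, $\hat{m}$ comes out negative, i.e., outside the parameter range.

    import numpy as np

    def binomial_mme(x):
        # Moment equations: xbar = mp, (1/n) sum (x_i - xbar)^2 = mp(1 - p).
        xbar = x.mean()
        v = np.mean((x - xbar) ** 2)
        m_hat = xbar**2 / (xbar - v)    # negative (outside the range) if v > xbar
        p_hat = xbar / m_hat
        return m_hat, p_hat

    rng = np.random.default_rng(1)
    x = rng.binomial(10, 0.3, size=20).astype(float)
    print(binomial_mme(x))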
7.2.2 Maximum Likelihood (MLE)

Let $X_1, \ldots, X_n$ be iid from a pmf or pdf $f(x \mid \theta_1, \ldots, \theta_k)$. The likelihood function is defined by
$$L(\boldsymbol{\theta} \mid \mathbf{x}) = L(\theta_1, \ldots, \theta_k \mid x_1, \ldots, x_n) = \prod_{i=1}^n f(x_i \mid \theta_1, \ldots, \theta_k).$$

Definition 7.2.4 For each sample point $\mathbf{x}$, let $\hat{\theta}(\mathbf{x})$ be a parameter value at which $L(\boldsymbol{\theta} \mid \mathbf{x})$ attains its maximum as a function of $\boldsymbol{\theta}$, with $\mathbf{x}$ held fixed. A maximum likelihood estimator (MLE) of the parameter $\boldsymbol{\theta}$ based on a sample $\mathbf{X}$ is $\hat{\theta}(\mathbf{X})$.

Notes:
1. Finding the MLE can be difficult in some cases.
2. The MLE may be obtained through differentiation, but in some cases differentiation will not work.
3. When differentiation is used to find the MLE, it is easier to work with the natural log of the likelihood.
4. Maximization should be only over the range of the parameter.
5. If the MLE cannot be obtained analytically, it can be obtained numerically (see the sketch after the next example).

Example 7.2.5 (Normal likelihood) Let $X_1, \ldots, X_n$ be iid from an $n(\theta, 1)$. Show that $\bar{X}$ is the MLE of $\theta$ using derivatives.

Solution:
Step 1: Find the solutions of the equation $\frac{d}{d\theta} L(\theta \mid \mathbf{x}) = 0$, which gives the possible candidates.
Step 2: Verify that the solution achieves the global maximum ($\frac{d^2}{d\theta^2} \log L(\theta \mid \mathbf{x}) < 0$ for all $\theta$ in this case).
Step 3: Check the boundaries ($\theta \to \pm\infty$ in this case; it is not necessary in this case).
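As a sketch of note 5 above, the $n(\theta, 1)$ log-likelihood of Example 7.2.5 can also be maximized numerically; here SciPy's scalar minimizer is applied to the negative log-likelihood (the data and seed are our own choices).

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    x = rng.normal(loc=1.5, scale=1.0, size=100)

    # -log L(theta | x) up to an additive constant, for the n(theta, 1) model.
    def negloglik(theta):
        return 0.5 * np.sum((x - theta) ** 2)

    res = minimize_scalar(negloglik)
    print(res.x, x.mean())   # the numerical MLE matches the closed form xbar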
Example 7.2.6 Recall Theorem 5.2.4 (p. 212) part (a): If $x_1, \ldots, x_n$ are any numbers and $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$, then for any real number $a$ we have
$$\sum_{i=1}^n (x_i - a)^2 \ge \sum_{i=1}^n (x_i - \bar{x})^2,$$
with equality if and only if $a = \bar{x}$. This implies that for any $\theta$,
$$\exp\left(-\tfrac{1}{2}\sum_{i=1}^n (x_i - \theta)^2\right) \le \exp\left(-\tfrac{1}{2}\sum_{i=1}^n (x_i - \bar{x})^2\right),$$
with equality if and only if $\theta = \bar{x}$. So $\bar{X}$ is the MLE.

Example 7.2.7 (Bernoulli MLE) Let $X_1, \ldots, X_n$ be iid Bernoulli$(p)$. Find the MLE of $p$ where $0 \le p \le 1$. Note that we include the possibility that $p = 0$ or $p = 1$.

Solution: Use the natural log of the likelihood function, as sketched below.
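A sketch of the computation the solution calls for. With $y = \sum_{i=1}^n x_i$,
$$\log L(p \mid \mathbf{x}) = y \log p + (n - y)\log(1 - p), \qquad \frac{d}{dp} \log L(p \mid \mathbf{x}) = \frac{y}{p} - \frac{n - y}{1 - p} = 0 \implies \hat{p} = \frac{y}{n} = \bar{x}.$$
The boundary cases remain: if $y = 0$, then $\log L = n\log(1-p)$ is decreasing in $p$, so $\hat{p} = 0$; if $y = n$, then $\log L = n\log p$ is increasing, so $\hat{p} = 1$. Both agree with $\hat{p} = \bar{x}$.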
Example 7.2.8 (Restricted range MLE) Let $X_1, \ldots, X_n$ be iid from an $n(\theta, 1)$, where $\theta \ge 0$.

Solution: Without any restriction, $\bar{X}$ is the MLE. So when $\bar{x} \ge 0$, $\hat{\theta} = \bar{x}$. When $\bar{x} < 0$, $L(\theta \mid \mathbf{x})$ achieves its maximum over $\theta \ge 0$ at $\hat{\theta} = 0$, so $\hat{\theta} = 0$ in this situation. In summary:
$$\hat{\theta} = \begin{cases} \bar{X}, & \bar{X} \ge 0; \\ 0, & \bar{X} < 0, \end{cases} \qquad \text{i.e., } \hat{\theta} = \bar{X}\, I_{[0,\infty)}(\bar{X}).$$

Example 7.2.9 (Binomial MLE, unknown number of trials) Let $X_1, \ldots, X_n$ be iid binomial$(k, p)$. Find the MLE of $k$, where $p$ is known and $k$ is unknown. (This is an example where differentiation will not be used to obtain the MLE.)

Solution: The likelihood function is
$$L(k \mid p, \mathbf{x}) = \prod_{i=1}^n \binom{k}{x_i} p^{x_i} (1-p)^{k - x_i}.$$
Then consider the ratio $L(k \mid p, \mathbf{x}) / L(k - 1 \mid p, \mathbf{x})$: since $k$ is an integer, the MLE is the value of $k$ at which this ratio crosses 1. A numerical sketch follows.
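A numerical sketch of this search (the data, seed, and grid cap of 60 candidates are our own, practical choices): since $k$ must be an integer no smaller than $\max_i x_i$, the log-likelihood can simply be evaluated over a grid of candidate values.

    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(3)
    p_known = 0.4
    x = rng.binomial(12, p_known, size=30)   # true k = 12, hidden from the estimator

    def loglik(k):
        # log L(k | p, x) = sum_i log [ C(k, x_i) p^{x_i} (1-p)^{k - x_i} ]
        return binom.logpmf(x, k, p_known).sum()

    ks = np.arange(x.max(), x.max() + 60)    # candidate k values; k >= max(x_i)
    k_hat = ks[np.argmax([loglik(k) for k in ks])]
    print(k_hat)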
Invariance Property of Maximum Likelihood Estimators

Definition: Consider a function $\tau(\theta)$, which may not necessarily be a one-to-one function, so that for a given value $\eta$ there may be more than one value $\theta$ such that $\tau(\theta) = \eta$. The induced likelihood function, $L^*$, of $\eta = \tau(\theta)$ is given by
$$L^*(\eta \mid \mathbf{x}) = \sup_{\{\theta :\, \tau(\theta) = \eta\}} L(\theta \mid \mathbf{x}).$$
The value $\hat{\eta}$ that maximizes $L^*(\eta \mid \mathbf{x})$ will be called the MLE of $\eta = \tau(\theta)$.

Theorem 7.2.10 (Invariance Property of MLEs) If $\hat{\theta}$ is the MLE of $\theta$, then for any function $\tau(\theta)$, the MLE of $\tau(\theta)$ is $\tau(\hat{\theta})$.

Example Let $X_1, \ldots, X_n$ be iid $n(\theta, 1)$; then the MLE of $\theta^2$ is $\bar{X}^2$.

Example Let $X_1, \ldots, X_n$ be iid binomial$(k, p)$ where $k$ is known and $p$ is unknown. Find the MLE of the variance and standard deviation of $X_1$.

Solution: The MLE of $p$ is $\hat{p} = \bar{X}/k$. The variance is $\sigma^2 = kp(1-p)$, so $\hat{\sigma}^2 = k\hat{p}(1-\hat{p}) = \bar{X}(1 - \bar{X}/k)$; the standard deviation is $\sigma = \sqrt{kp(1-p)}$, so $\hat{\sigma} = \sqrt{k\hat{p}(1-\hat{p})}$.

Example Let $X_1, \ldots, X_n$ be iid Poisson$(\lambda)$. Find the MLE of $P(X = 0)$.

Solution: The MLE of $\lambda$ is $\hat{\lambda} = \bar{X}$. Because $P(X = 0) = \exp(-\lambda)$, the MLE of $P(X = 0)$ is $\exp(-\bar{X})$.

Note: Theorem 7.2.10 includes the multivariate case. If the MLE of $(\theta_1, \ldots, \theta_k)$ is $(\hat{\theta}_1, \ldots, \hat{\theta}_k)$, and if $\tau(\theta_1, \ldots, \theta_k)$ is any function of the parameter vector, then by the invariance property of the MLE, the MLE of $\tau(\theta_1, \ldots, \theta_k)$ is $\tau(\hat{\theta}_1, \ldots, \hat{\theta}_k)$.
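A quick simulated check of the Poisson example (the choices of $\lambda$, seed, and sample size are ours):

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.poisson(lam=1.3, size=1000)

    lam_hat = x.mean()           # MLE of lambda
    p0_hat = np.exp(-lam_hat)    # MLE of P(X = 0), by the invariance property
    print(p0_hat, np.mean(x == 0))   # close to the empirical frequency of zeros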
Example 7.2.11 (Normal MLEs, $\mu$ and $\sigma^2$ unknown) Let $X_1, \ldots, X_n$ be iid from an $n(\mu, \sigma^2)$ where both $\mu$ and $\sigma^2$ are unknown. Then the MLE of $\mu$ is $\hat{\mu} = \bar{X}$ and the MLE of $\sigma^2$ is
$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2 = \frac{n-1}{n} S^2.$$

Solution: Verify these estimators using (a) univariate calculus (this example, Example 7.2.11) and (b) multivariate calculus (Example 7.2.12).

Notes:
1. The MLE is susceptible to problems associated with numerical instability if the MLEs do not have explicit expressions.
2. How sensitive is the MLE to measurement error in the data? (See Example 7.2.13.)

7.2.3 Bayes Estimators

Bayesian Approach to Statistics
The parameter $\theta$ is a random quantity described by a probability distribution known as the prior distribution. A sample is then taken from the population indexed by $\theta$. The prior distribution is updated with this sample information to get what is known as the posterior distribution, using Bayes' rule (Theorem 1.3.5, p. 23).
Let $\pi(\theta)$ denote the prior distribution of $\theta$ and let $f(\mathbf{x} \mid \theta)$ be the sampling distribution of the sample. The posterior distribution of $\theta$ given the sample $\mathbf{x}$ is given by
$$\pi(\theta \mid \mathbf{x}) = \frac{f(\mathbf{x} \mid \theta)\,\pi(\theta)}{m(\mathbf{x})},$$
where $m(\mathbf{x})$ is the marginal distribution of $\mathbf{x}$, i.e., $m(\mathbf{x}) = \int f(\mathbf{x} \mid \theta)\,\pi(\theta)\,d\theta$.

The posterior distribution is then used to make statements about $\theta$. For instance, the mean of the posterior distribution may be used as a point estimate of $\theta$.

Example 7.2.14 (Binomial Bayes estimation) Let $X_1, \ldots, X_n$ be iid Bernoulli$(p)$, where $p$ is unknown. Then $Y = \sum_{i=1}^n X_i$ is binomial$(n, p)$. We assume the prior distribution on $p$ is beta$(\alpha, \beta)$. The posterior distribution of $p$ given $Y = y$, $f(p \mid y)$, is beta$(y + \alpha,\ n - y + \beta)$. Hence the Bayes estimate of $p$ is the mean of the posterior distribution, i.e.,
$$\hat{p}_B = \frac{y + \alpha}{\alpha + \beta + n}.$$
Note that the mean of the prior distribution is $\frac{\alpha}{\alpha + \beta}$, and $\hat{p}_B$ may be written as
$$\hat{p}_B = \left(\frac{n}{\alpha + \beta + n}\right)\frac{y}{n} + \left(\frac{\alpha + \beta}{\alpha + \beta + n}\right)\frac{\alpha}{\alpha + \beta}.$$
Hence, the Bayes estimator is a linear combination of the sample mean and the prior mean, with weights determined by $\alpha$, $\beta$, and $n$.
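A sketch of the beta-binomial update (the hyperparameters and data below are our own choices), confirming the linear-combination form of $\hat{p}_B$ numerically:

    a, b = 2.0, 2.0    # beta(alpha, beta) prior hyperparameters
    n, y = 20, 14      # n Bernoulli trials, y successes

    post_a, post_b = y + a, n - y + b       # posterior is beta(y + a, n - y + b)
    p_bayes = post_a / (post_a + post_b)    # posterior mean

    w = n / (a + b + n)                     # weight on the sample mean
    print(p_bayes, w * (y / n) + (1 - w) * a / (a + b))   # the two agree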
Note: The prior and the posterior distributions are both beta distributions.

Definition 7.2.15 Let $\mathcal{F}$ denote the class of pdfs or pmfs $f(x \mid \theta)$ (indexed by $\theta$). A class $\Pi$ of prior distributions is a conjugate family for $\mathcal{F}$ if the posterior distribution is in the class $\Pi$ for all $f \in \mathcal{F}$, all priors in $\Pi$, and all $\mathbf{x}$.

Example 7.2.16 (Normal Bayes estimation) Let $X \sim n(\theta, \sigma^2)$, where $\sigma^2$ is known. We assume the prior distribution on $\theta$ is $n(\mu, \tau^2)$. The posterior distribution of $\theta$ given $X = x$ is also normal (homework problem), with mean and variance
$$E(\theta \mid x) = \frac{\tau^2}{\tau^2 + \sigma^2}\, x + \frac{\sigma^2}{\tau^2 + \sigma^2}\, \mu, \qquad \operatorname{Var}(\theta \mid x) = \frac{\sigma^2 \tau^2}{\sigma^2 + \tau^2}.$$

Notes:
1. The normal family is its own conjugate family.
2. If the prior information is vague (i.e., $\tau^2$ is very large), then more weight is given to the sample data.
3. If the prior information is good (i.e., $\tau^2$ is small), then more weight is given to the prior mean.
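The shrinkage in Example 7.2.16 in a few lines (the values are our own choices); making $\tau^2$ large pushes the posterior mean toward $x$, as in the notes above.

    sigma2 = 4.0          # known sampling variance
    mu, tau2 = 0.0, 1.0   # prior mean and variance
    x = 2.5               # observed value

    w = tau2 / (tau2 + sigma2)                 # weight on the data
    post_mean = w * x + (1 - w) * mu
    post_var = sigma2 * tau2 / (sigma2 + tau2)
    print(post_mean, post_var)   # try tau2 = 100.0: post_mean moves close to x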
Section 7.3 Methods of Evaluating Estimators

7.3.1 Mean Squared Error

Definition 7.3.1 The mean squared error (MSE) of an estimator $W$ of a parameter $\theta$ is the function of $\theta$ defined by
$$\operatorname{MSE} = E_\theta (W - \theta)^2 = \operatorname{Var}_\theta W + (\operatorname{Bias}_\theta W)^2,$$
where $\operatorname{Bias}_\theta W = E_\theta W - \theta$.

Definition 7.3.2 The bias of a point estimator $W$ of a parameter $\theta$ is the difference between the expected value of $W$ and $\theta$. An estimator whose bias is identically (in $\theta$) equal to 0 is called unbiased and satisfies $E_\theta W = \theta$ for all $\theta$.

If $W$ is unbiased, then $\operatorname{MSE} = E_\theta (W - \theta)^2 = \operatorname{Var}_\theta W$.
Example 7.3.3 (Normal MSE) Let $X_1, \ldots, X_n$ be iid from an $n(\mu, \sigma^2)$. We know that $\bar{X}$ and $S^2$ are unbiased estimators of $\mu$ and $\sigma^2$, respectively, i.e., $E\bar{X} = \mu$ and $ES^2 = \sigma^2$ for all $\mu$ and $\sigma^2$ (which is true even without normality; see Theorem 5.2.6). Thus
$$\operatorname{MSE}(\bar{X}) = E(\bar{X} - \mu)^2 = \sigma^2/n,$$
and
$$\operatorname{MSE}(S^2) = E(S^2 - \sigma^2)^2 = \operatorname{Var} S^2 = \frac{2\sigma^4}{n-1}.$$
(Recall that $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$, a chi-square with $n-1$ degrees of freedom, which is gamma$((n-1)/2,\ 2)$, and $\operatorname{Var}(Y) = \alpha\beta^2$ if $Y \sim$ gamma$(\alpha, \beta)$.)
Example 7.3.4 Let $X_1, \ldots, X_n$ be iid from an $n(\mu, \sigma^2)$. Recall that the MLE (and MME) of $\sigma^2$ is
$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2 = \frac{n-1}{n} S^2.$$
Note that
$$E(\hat{\sigma}^2) = \frac{n-1}{n} E(S^2) = \frac{n-1}{n}\sigma^2$$
and
$$\operatorname{Var}(\hat{\sigma}^2) = \left(\frac{n-1}{n}\right)^2 \operatorname{Var}(S^2) = \frac{2(n-1)\sigma^4}{n^2},$$
so that
$$\operatorname{MSE}(\hat{\sigma}^2) = \operatorname{Var}(\hat{\sigma}^2) + [\operatorname{Bias}(\hat{\sigma}^2)]^2 = \frac{2(n-1)\sigma^4}{n^2} + \frac{\sigma^4}{n^2} = \frac{2n-1}{n^2}\sigma^4.$$
From these formulas, you can verify that $\hat{\sigma}^2$ has smaller MSE than $S^2$; a Monte Carlo check follows.
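A Monte Carlo check of the comparison in Example 7.3.4 (the constants are our own choices):

    import numpy as np

    rng = np.random.default_rng(5)
    n, sigma2, reps = 10, 2.0, 200_000
    x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

    s2 = x.var(axis=1, ddof=1)     # S^2 (unbiased)
    mle = x.var(axis=1, ddof=0)    # (n-1)S^2/n (the MLE)
    print(np.mean((s2 - sigma2) ** 2))    # approx 2*sigma^4/(n-1) = 0.889
    print(np.mean((mle - sigma2) ** 2))   # approx (2n-1)*sigma^4/n^2 = 0.76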
Example 7.3.5 (MSE of binomial Bayes estimators) Suppose $X_1, \ldots, X_n$ are iid from a Bernoulli$(p)$.

(1) MLE: $\hat{p} = \bar{X}$ is an unbiased estimator of $p$, and
$$\operatorname{MSE}(\hat{p}) = E_p(\hat{p} - p)^2 = \operatorname{Var}_p(\bar{X}) = \frac{p(1-p)}{n}.$$

(2) Bayes estimator: $\hat{p}_B = \frac{Y + \alpha}{\alpha + \beta + n}$ is a biased estimator, because $E_p(\hat{p}_B) = \frac{np + \alpha}{\alpha + \beta + n} \neq p$. The MSE of $\hat{p}_B$ is
$$\operatorname{MSE}(\hat{p}_B) = \operatorname{Var}_p(\hat{p}_B) + [E_p(\hat{p}_B) - p]^2 = \operatorname{Var}_p\!\left(\frac{Y + \alpha}{\alpha + \beta + n}\right) + \left(\frac{np + \alpha}{\alpha + \beta + n} - p\right)^2 = \frac{np(1-p)}{(\alpha + \beta + n)^2} + \left(\frac{\alpha - (\alpha + \beta)p}{\alpha + \beta + n}\right)^2.$$

If we choose $\alpha = \beta = \sqrt{n/4}$, we have
$$\operatorname{MSE}(\hat{p}_B) = \frac{n}{4(n + \sqrt{n})^2}$$
as a constant (free of $p$), and $\hat{p}_B = \frac{Y + \sqrt{n/4}}{n + \sqrt{n}}$. In this situation, we can determine which of these two estimators is better in terms of the MSE, as the sketch below illustrates.
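Evaluating both MSE formulas on a grid of $p$ (the choice $n = 16$ is ours) shows the trade-off: the Bayes estimator's MSE is flat in $p$, while the MLE's is smaller near $p = 0$ and $p = 1$ but larger near $p = 1/2$.

    import numpy as np

    n = 16
    a = b = np.sqrt(n) / 2           # alpha = beta = sqrt(n/4)
    p = np.linspace(0.05, 0.95, 7)

    mse_mle = p * (1 - p) / n
    mse_bayes = (n * p * (1 - p) + (a - (a + b) * p) ** 2) / (a + b + n) ** 2
    print(np.round(mse_mle, 4))
    print(np.round(mse_bayes, 4))    # constant: n / (4 * (n + sqrt(n))^2) = 0.01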
Skip the equivariance example (Example 7.3.6).

7.3.2 Best Unbiased Estimator

Consider the class of estimators $\mathcal{C}_\tau = \{W : E_\theta W = \tau(\theta)\}$. For any $W_1, W_2 \in \mathcal{C}_\tau$ we have $\operatorname{Bias}_\theta W_1 = \operatorname{Bias}_\theta W_2$, so
$$\operatorname{MSE}(W_1) - \operatorname{MSE}(W_2) = E_\theta(W_1 - \tau(\theta))^2 - E_\theta(W_2 - \tau(\theta))^2 = \operatorname{Var}_\theta(W_1) - \operatorname{Var}_\theta(W_2).$$
Comparisons within the class therefore reduce to comparisons of variance.

Definition 7.3.7 An estimator $W^*$ is a best unbiased estimator of $\tau(\theta)$ if it satisfies $E_\theta W^* = \tau(\theta)$ for all $\theta$ and, for any other estimator $W$ with $E_\theta W = \tau(\theta)$, we have $\operatorname{Var}_\theta(W^*) \le \operatorname{Var}_\theta(W)$ for all $\theta$. $W^*$ is also called a uniform minimum variance unbiased estimator (UMVUE) of $\tau(\theta)$.

Note: A UMVUE may not necessarily exist. If a UMVUE exists, it is unique (Theorem 7.3.19).

Example 7.3.8 (Poisson unbiased estimation) Let $X_1, \ldots, X_n$ be iid from a Poisson$(\lambda)$. Note that $E_\lambda(\bar{X}) = \lambda$ and $E_\lambda S^2 = \lambda$ for all $\lambda$. Thus, both $\bar{X}$ and $S^2$ are unbiased estimators of $\lambda$. Also, note that the class of estimators given by
$$W_a(\bar{X}, S^2) = a\bar{X} + (1 - a)S^2$$
is a class of unbiased estimators for $0 \le a \le 1$. To determine which estimator has the smallest MSE, we need to calculate $\operatorname{Var}_\lambda(\bar{X})$, $\operatorname{Var}_\lambda(S^2)$, and $\operatorname{Var}_\lambda(a\bar{X} + (1-a)S^2)$. The calculation can be very lengthy.

The question here is: how can we find the best, i.e., smallest variance, of these unbiased estimators?
Theorem 7.3.9 (Cramér-Rao inequality) Let $X_1, \ldots, X_n$ be a sample with pdf $f(\mathbf{x} \mid \theta)$, and let $W(\mathbf{X}) = W(X_1, \ldots, X_n)$ be any estimator satisfying
$$\frac{d}{d\theta} E_\theta W(\mathbf{X}) = \int_{\mathcal{X}} \frac{\partial}{\partial\theta}\,[W(\mathbf{x}) f(\mathbf{x} \mid \theta)]\, d\mathbf{x}$$
and $\operatorname{Var}_\theta(W(\mathbf{X})) < \infty$. Then
$$\operatorname{Var}_\theta(W(\mathbf{X})) \ge \frac{\left(\frac{d}{d\theta} E_\theta W(\mathbf{X})\right)^2}{E_\theta\!\left(\left(\frac{\partial}{\partial\theta} \log f(\mathbf{X} \mid \theta)\right)^2\right)},$$
where log is the natural logarithm.

Corollary 7.3.10 (Cramér-Rao inequality, iid case) If the assumptions of Theorem 7.3.9 are satisfied and, additionally, $X_1, \ldots, X_n$ are iid with pdf $f(x \mid \theta)$, then
$$\operatorname{Var}_\theta(W(\mathbf{X})) \ge \frac{\left(\frac{d}{d\theta} E_\theta W(\mathbf{X})\right)^2}{n\, E_\theta\!\left(\left(\frac{\partial}{\partial\theta} \log f(X \mid \theta)\right)^2\right)}.$$
Notes:
1. The quantity $E_\theta\!\left(\left(\frac{\partial}{\partial\theta} \log f(\mathbf{X} \mid \theta)\right)^2\right)$ is called the information number, or Fisher information, of the sample.
2. The information number gives a bound on the variance of the best unbiased estimator of $\theta$.
3. As the information number increases, we have more information about $\theta$, and we have a smaller bound.

The following lemma helps in the computation of the CRLB.

Lemma 7.3.11 If $f(x \mid \theta)$ satisfies
$$\frac{d}{d\theta} E_\theta\!\left(\frac{\partial}{\partial\theta} \log f(X \mid \theta)\right) = \int \frac{\partial}{\partial\theta}\!\left[\left(\frac{\partial}{\partial\theta} \log f(x \mid \theta)\right) f(x \mid \theta)\right] dx$$
(true for an exponential family), then
$$E_\theta\!\left(\left(\frac{\partial}{\partial\theta} \log f(X \mid \theta)\right)^2\right) = -E_\theta\!\left(\frac{\partial^2}{\partial\theta^2} \log f(X \mid \theta)\right).$$

Example 7.3.12 Recall the Poisson problem. We will show that $\bar{X}$ is the UMVUE of $\lambda$; a sketch of the computation follows.
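A sketch of the computation for Example 7.3.12. For the Poisson pmf, $\log f(x \mid \lambda) = -\lambda + x\log\lambda - \log x!$, so
$$\frac{\partial}{\partial\lambda}\log f(x \mid \lambda) = \frac{x}{\lambda} - 1, \qquad \frac{\partial^2}{\partial\lambda^2}\log f(x \mid \lambda) = -\frac{x}{\lambda^2},$$
and by Lemma 7.3.11 (the Poisson is an exponential family) the single-observation information is $-E_\lambda(-X/\lambda^2) = 1/\lambda$. The CRLB for unbiased estimators of $\lambda$ is therefore $\lambda/n$, and $\operatorname{Var}_\lambda(\bar{X}) = \lambda/n$ attains it, so $\bar{X}$ is the UMVUE of $\lambda$.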
Note:
A key assumption of the Cramér-Rao Theorem is that one can differentiate under the integral sign. Below is an example where this assumption is not satisfied.

Example 7.3.13 (Unbiased estimator for the scale uniform) Let $X_1, \ldots, X_n$ be iid with pdf $f(x \mid \theta) = 1/\theta$, $0 < x < \theta$.

Note: The Cramér-Rao Lower Bound (CRLB) is not guaranteed to be sharp, i.e., there is no guarantee that the CRLB can be attained.

Example 7.3.14 (Normal variance bound) Let $X_1, \ldots, X_n$ be iid $n(\mu, \sigma^2)$. We have
$$\text{CRLB} = \frac{2\sigma^4}{n} \quad \text{but} \quad \operatorname{Var}(S^2) = \frac{2\sigma^4}{n-1},$$
hence $S^2$ has variance larger than the CRLB.

Question: How do we know if there exists an unbiased estimator that achieves the CRLB?
Corollary 7.3.15 (Attainment) Let $X_1, \ldots, X_n$ be iid with pdf $f(x \mid \theta)$, where $f(x \mid \theta)$ satisfies the conditions of the Cramér-Rao Theorem. Let $L(\theta \mid \mathbf{x}) = \prod_{i=1}^n f(x_i \mid \theta)$ denote the likelihood function. If $W(\mathbf{X}) = W(X_1, \ldots, X_n)$ is any unbiased estimator of $\tau(\theta)$, then $W(\mathbf{X})$ attains the CRLB if and only if
$$a(\theta)\,[W(\mathbf{x}) - \tau(\theta)] = \frac{\partial}{\partial\theta} \log L(\theta \mid \mathbf{x})$$
for some function $a(\theta)$.

Example 7.3.16 Recall the normal problem:
$$L(\mu, \sigma^2 \mid \mathbf{x}) = (2\pi\sigma^2)^{-n/2} \exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2\right),$$
so that
$$\frac{\partial}{\partial\sigma^2} \log L(\mu, \sigma^2 \mid \mathbf{x}) = \frac{n}{2\sigma^4}\left(\frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2 - \sigma^2\right).$$
If $\mu$ is known, the CRLB is achieved and the UMVUE is $W(\mathbf{X}) = \frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2$. Otherwise, no unbiased estimator of $\sigma^2$ will achieve the CRLB.
Questions:
1. What can we do to find the best estimator if $f(x \mid \theta)$ does not satisfy the assumptions of the Cramér-Rao Theorem?
2. If the CRLB is not attainable, how do we know whether our estimator is the best?

7.3.3 Sufficiency and Unbiasedness

Recall two important results: $E(X) = E[E(X \mid Y)]$ and $\operatorname{Var}(X) = \operatorname{Var}[E(X \mid Y)] + E[\operatorname{Var}(X \mid Y)]$.

Theorem 7.3.17 (Rao-Blackwell) Let $W$ be any unbiased estimator of $\tau(\theta)$, and let $T$ be a sufficient statistic for $\theta$. Define $\phi(T) = E(W \mid T)$. Then $E_\theta(\phi(T)) = \tau(\theta)$ and $\operatorname{Var}_\theta(\phi(T)) \le \operatorname{Var}_\theta(W)$ for all $\theta$; i.e., $\phi(T)$ is a uniformly better unbiased estimator of $\tau(\theta)$.

Notes:
1. Conditioning any unbiased estimator on a sufficient statistic will result in an improved (at least no worse) estimator.
2. To find the UMVUE, we only need to consider functions of the sufficient statistic.
3. Sufficiency is needed so that the resulting quantity (estimator) after conditioning on the sufficient statistic will not depend on $\theta$.
Example 7.3.18 (Conditioning on an insufficient statistic) Let $X_1$ and $X_2$ be iid from $n(\theta, 1)$. Then $\bar{X}$ is an unbiased estimator (and a sufficient statistic) of $\theta$. Suppose we condition $\bar{X}$ on $X_1$, which is not a sufficient statistic. Let $\phi(X_1) = E(\bar{X} \mid X_1) = (X_1 + \theta)/2$. Then $\phi(X_1)$ is unbiased for $\theta$ and has a smaller variance than $\bar{X}$, but it is not a valid estimator, since it depends on the unknown $\theta$.

Theorem 7.3.19 If $W$ is a best unbiased estimator of $\tau(\theta)$, then $W$ is unique.

Let $W$ be such that $E_\theta(W) = \tau(\theta)$ and let $U$ be such that $E_\theta(U) = 0$ for all $\theta$. Then $\phi_a = W + aU$, where $a$ is a constant, forms a class of unbiased estimators of $\tau(\theta)$ with
$$\operatorname{Var}_\theta(\phi_a) = \operatorname{Var}_\theta W + 2a \operatorname{Cov}_\theta(W, U) + a^2 \operatorname{Var}_\theta U.$$

Question: Which is a better estimator, $W$ or $\phi_a$?

Theorem 7.3.20 If $E_\theta(W) = \tau(\theta)$, then $W$ is the best unbiased estimator of $\tau(\theta)$ if and only if $W$ is uncorrelated with all unbiased estimators of 0.
Example 7.3.21 (Unbiased estimators of 0) Let $X$ be an observation from a uniform$(\theta, \theta + 1)$ distribution. Then
$$E X = \int_\theta^{\theta+1} x\, dx = \theta + \frac{1}{2} \quad \text{and} \quad \operatorname{Var} X = \frac{1}{12}.$$
Therefore, $X - \frac{1}{2}$ is an unbiased estimator of $\theta$. We will show that $X - \frac{1}{2}$ is correlated with an unbiased estimator of 0, and hence cannot be a best unbiased estimator of $\theta$.

Note: If a family of pdfs $f(x \mid \theta)$ has the property that there are no unbiased estimators of 0 other than 0 itself, then our search would be ended, since $\operatorname{Cov}(W, 0) = 0$. What is this property called?

Example 7.3.22 (continuation of Example 7.3.13) Let $X_1, \ldots, X_n$ be iid uniform$(0, \theta)$. Then $\frac{n+1}{n} Y$ is an unbiased estimator of $\theta$, where $Y = \max(X_1, \ldots, X_n)$.

Solution:
1. The conditions of the Cramér-Rao Theorem were not satisfied.
2. By the Rao-Blackwell Theorem, we only need to consider unbiased estimators of $\theta$ based on $Y$.
3. $Y$ is a complete sufficient statistic; therefore $Y$ is uncorrelated with all unbiased estimators of 0, since the only such estimator based on $Y$ is 0 itself.
4. $\frac{n+1}{n} Y$ is the best unbiased estimator of $\theta$ (a Monte Carlo check follows).
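A Monte Carlo look at Example 7.3.22 (the parameter values and seed are ours), comparing $\frac{n+1}{n} Y$ with the moment-based unbiased estimator $2\bar{X}$:

    import numpy as np

    rng = np.random.default_rng(6)
    theta, n, reps = 3.0, 10, 200_000
    x = rng.uniform(0.0, theta, size=(reps, n))

    est_best = (n + 1) / n * x.max(axis=1)   # (n+1)/n * Y
    est_mme = 2.0 * x.mean(axis=1)           # 2 * xbar, also unbiased
    print(est_best.mean(), est_best.var())   # mean ~ theta; variance ~ theta^2/(n(n+2)) = 0.075
    print(est_mme.mean(), est_mme.var())     # mean ~ theta; variance theta^2/(3n) = 0.3, much larger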
Important Note: What is critical is the completeness of the family of distributions of the sufficient statistic, not the completeness of the original family.

Theorem 7.3.23 Let $T$ be a complete sufficient statistic for a parameter $\theta$, and let $\phi(T)$ be any estimator based only on $T$. Then $\phi(T)$ is the unique best unbiased estimator of its expected value.

Result: If $T$ is a complete sufficient statistic for a parameter $\theta$ and $h(X_1, \ldots, X_n)$ is any unbiased estimator of $\tau(\theta)$, then $\phi(T) = E[h(X_1, \ldots, X_n) \mid T]$ is the unique best unbiased estimator of $\tau(\theta)$.

Example 7.3.24 (Binomial best unbiased estimation) Let $X_1, \ldots, X_n$ be iid binomial$(k, \theta)$. We want to estimate
$$\tau(\theta) = P_\theta(X = 1) = k\theta(1 - \theta)^{k-1}.$$

Solution: Recall that $\sum_{i=1}^n X_i \sim$ binomial$(kn, \theta)$ is a complete sufficient statistic for $\theta$.
Question: How about an unbiased estimator for $\tau(\theta)$? Once we find an unbiased estimator, how do we get the best unbiased estimator?

7.3.4 Loss Function Optimality

Decision Theory:
Setting: Observed data $\mathbf{X} = \mathbf{x}$, where $\mathbf{X} \sim f(\mathbf{x} \mid \theta)$. Let $\mathcal{A}$ = action space, i.e., the set of allowable decisions regarding $\theta$.

Definition: A loss function $L(\theta, a)$ is a nonnegative function that generally increases as the distance between an action $a$ and $\theta$ increases.

Note: $L(\theta, \theta) = 0$. (What does this mean? The loss is minimal if the action is correct.)

If $\theta$ is real-valued, two commonly used loss functions are:
absolute error loss, $L(\theta, a) = |a - \theta|$: relatively more penalty on small discrepancies;
squared error loss, $L(\theta, a) = (a - \theta)^2$: relatively more penalty on large discrepancies.

Other examples:
$$L(\theta, a) = \begin{cases} (a - \theta)^2, & a < \theta; \\ 10(a - \theta)^2, & a \ge \theta, \end{cases}$$
which penalizes overestimation more than underestimation;
$$L(\theta, a) = \frac{(a - \theta)^2}{|\theta| + 1},$$
which penalizes errors in estimation more if $\theta$ is near 0 than if $|\theta|$ is large.

Definition: In decision theoretic analysis, the quality of an estimator $\delta(\mathbf{X})$ is quantified by its risk function, defined by
$$R(\theta, \delta) = E_\theta L(\theta, \delta(\mathbf{X})),$$
i.e., at a given $\theta$, the risk function is the average loss that will be incurred if the estimator $\delta(\mathbf{X})$ is used.

Notes:
The MSE is an example of a risk function, namely the risk with respect to the squared error loss:
$$R(\theta, \delta) = E_\theta(\theta - \delta(\mathbf{X}))^2 = \operatorname{Var}_\theta \delta(\mathbf{X}) + (\operatorname{Bias}_\theta \delta(\mathbf{X}))^2.$$
We want to find an estimator that has a small risk function for all $\theta$ relative to another estimator. However, most of the time the risk functions of two estimators cross.
Example 7.3.25 (Binomial risk functions) Recall Example 7.3.5, comparing the Bayes estimator and the MLE of the Bernoulli parameter $p$:
$$\hat{p}_B = \frac{\sum_{i=1}^n X_i + \sqrt{n/4}}{n + \sqrt{n}} \quad \text{and} \quad \hat{p} = \frac{1}{n}\sum_{i=1}^n X_i = \bar{X}.$$

Example 7.3.26 (Risk of normal variance) Let $X_1, \ldots, X_n$ be iid from $n(\mu, \sigma^2)$. We want to estimate $\sigma^2$, considering estimators of the form $\delta_b(\mathbf{X}) = bS^2$.

Solution: Recall that $ES^2 = \sigma^2$ and, for normal samples, $\operatorname{Var}(S^2) = \frac{2\sigma^4}{n-1}$. The risk function with respect to the squared error loss is
$$R((\mu, \sigma^2), \delta_b) = \operatorname{Var}(bS^2) + (E(bS^2) - \sigma^2)^2 = b^2 \operatorname{Var}(S^2) + (b\sigma^2 - \sigma^2)^2 = \sigma^4\left[\frac{2b^2}{n-1} + (b-1)^2\right].$$

Notes:
1. The resulting risk function does not depend on $\mu$.
2. The value of $b$ that minimizes this risk function is $b = \frac{n-1}{n+1}$. Thus, for every value of $(\mu, \sigma^2)$, the estimator with the smallest risk among all estimators of the form $\delta_b(\mathbf{X}) = bS^2$ is
$$\delta(\mathbf{X}) = \frac{n-1}{n+1} S^2 = \frac{1}{n+1}\sum_{i=1}^n (X_i - \bar{X})^2$$
(see Figure 7.3.2, p. 351, for $n = 5$).
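Evaluating the risk of Example 7.3.26 as a function of $b$ (here in units of $\sigma^4$, with $n = 5$ as in Figure 7.3.2) confirms the minimizer:

    import numpy as np

    n = 5
    b = np.linspace(0.4, 1.2, 81)
    risk = 2 * b**2 / (n - 1) + (b - 1) ** 2   # R((mu, sigma^2), delta_b) / sigma^4
    print((n - 1) / (n + 1))                   # analytic minimizer, 2/3
    print(b[np.argmin(risk)])                  # grid minimizer, ~0.67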
Example 7.3.27 (Variance estimation using Stein's loss) Let $X_1, \ldots, X_n$ be iid from a population with positive finite variance $\sigma^2$. We want to estimate $\sigma^2$.

Solution: Consider estimators of the form $\delta_b(\mathbf{X}) = bS^2$ and the loss function (attributed to Stein)
$$L(\sigma^2, a) = \frac{a}{\sigma^2} - 1 - \log\frac{a}{\sigma^2}.$$
In this case, the risk function is given by
$$R(\sigma^2, \delta_b) = E\left(\frac{bS^2}{\sigma^2} - 1 - \log\frac{bS^2}{\sigma^2}\right) = b - 1 - \log b - E\!\left(\log\frac{S^2}{\sigma^2}\right).$$
Note that $E(\log(S^2/\sigma^2))$ does not depend on $b$. To minimize this risk function, we find the $b$ that minimizes $b - \log b$, which is $b = 1$. Hence, the estimator with the smallest risk for all values of $\sigma^2$ is
$$\delta_1(\mathbf{X}) = S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2.$$
Bayesian Approach to Loss Functions

Definition: Given a prior distribution $\pi(\theta)$, the Bayes risk of an estimator $\delta$ is
$$\int_\Theta R(\theta, \delta)\,\pi(\theta)\, d\theta = \int_\Theta \left(\int_{\mathcal{X}} L(\theta, \delta(\mathbf{x}))\, f(\mathbf{x} \mid \theta)\, d\mathbf{x}\right) \pi(\theta)\, d\theta,$$
and the estimator that results in the smallest value of the Bayes risk is known as the Bayes rule with respect to the prior $\pi(\theta)$.

Note that
$$\int_\Theta R(\theta, \delta)\,\pi(\theta)\, d\theta = \int_{\mathcal{X}} \left[\int_\Theta L(\theta, \delta(\mathbf{x}))\,\pi(\theta \mid \mathbf{x})\, d\theta\right] m(\mathbf{x})\, d\mathbf{x},$$
where the quantity in the square brackets is known as the posterior expected loss. The action $\delta(\mathbf{x})$ that minimizes the posterior expected loss will also minimize the Bayes risk.

Example 7.3.28 (Two Bayes rules) Suppose we want to estimate $\theta$.

1. For the squared error loss, the posterior expected loss is
$$\int_\Theta (\theta - a)^2\, \pi(\theta \mid \mathbf{x})\, d\theta = E\big((\theta - a)^2 \mid \mathbf{X} = \mathbf{x}\big),$$
where $\theta \sim \pi(\theta \mid \mathbf{x})$. This is minimized by $\delta^\pi(\mathbf{x}) = E(\theta \mid \mathbf{x})$, so that the Bayes rule is the mean of the posterior distribution (Example 2.2.6).

2. For the absolute error loss, the posterior expected loss is
$$\int_\Theta |\theta - a|\, \pi(\theta \mid \mathbf{x})\, d\theta = E\big(|\theta - a| \mid \mathbf{X} = \mathbf{x}\big),$$
minimized by taking $\delta^\pi(\mathbf{x})$ = the median of the posterior distribution (Exercise 2.18).

Example 7.3.29 (Normal Bayes estimates) Let $X_1, \ldots, X_n$ be iid from $n(\theta, \sigma^2)$ and let $\pi(\theta)$ be $n(\mu, \tau^2)$, where $\sigma^2$, $\mu$, and $\tau^2$ are known. From Example 7.2.16 and your homework problem (Exercise 7.22), the posterior distribution of $\theta$ given $\bar{x}$ is normal, with mean and variance
$$E(\theta \mid \bar{x}) = \frac{\tau^2}{\tau^2 + \sigma^2/n}\,\bar{x} + \frac{\sigma^2/n}{\tau^2 + \sigma^2/n}\,\mu, \qquad \operatorname{Var}(\theta \mid \bar{x}) = \frac{(\sigma^2/n)\,\tau^2}{\tau^2 + \sigma^2/n}.$$

1. For the squared error loss, $\delta^\pi(\mathbf{x}) = E(\theta \mid \bar{x})$.
2. For the absolute error loss,
$\delta^\pi(\mathbf{x})$ = the median of the posterior distribution $= E(\theta \mid \bar{x})$, since the median of a normal distribution equals its mean.