A POOLING METHODOLOGY FOR COEFFICIENT OF VARIATION


Sankhyā: The Indian Journal of Statistics, 1995, Volume 57, Series B, Pt. 1

By S.E. AHMED
University of Regina

SUMMARY. The problem of estimating the coefficient of variation is considered when it is a priori suspected that two coefficients of variation are the same. In that case it is advantageous to pool the data for making inferences about the population coefficient of variation. Estimators based on the pre-test and shrinkage principles are proposed. Expressions for the asymptotic bias and asymptotic mean-square error (AMSE) of the proposed estimators are derived and compared with the parallel expressions for the unrestricted and pooled estimators. Interestingly, the proposed estimator dominates the unrestricted estimator over a wider range, and the size of the preliminary test remains reasonable. Furthermore, an optimal rule for the choice of the level of significance (α) of the preliminary test is presented, and tables for the optimum selection of α are provided for use with the shrinkage preliminary test estimator.

1. Introduction

The ratio of the standard deviation to the population mean is known as the coefficient of variation (C.V.); for notational convenience it will be denoted by the Greek letter λ. The population coefficient of variation is a pure number, free of the units of measurement, which makes it possible to compare the variability of two different populations directly. If X is a random variable with mean µ and variance σ², then the coefficient of variation is defined as λ = σ/µ, µ ≠ 0. It is sometimes a more informative quantity than σ alone: a given value of σ has little meaning unless it can be compared with µ. If σ is known to be 20 and µ is known to be 8000, then the amount of variation is small relative

Paper received October 1992; revised October.
AMS (1990) subject classification. 62F10.
Key words and phrases. Shrinkage restricted estimator; shrinkage preliminary test estimator; asymptotic biases and risks; asymptotic efficiency.

to the mean. For example, in the stock market the volatility, or level of activity, of a stock is often measured by the coefficient of variation of the stock price; this makes it possible to compare the volatility of one stock directly with another, or against that of a known index such as the Dow Jones Industrial Average (DJIA). Also, in studying the precision of a measuring instrument, engineers are typically more interested in estimating λ. Let X₁₁, X₁₂, ..., X₁n₁ be independent and identically distributed random variables with mean µ₁ and variance σ₁², and write λ₁ = σ₁/µ₁. Also let X₂₁, X₂₂, ..., X₂n₂ be independent and identically distributed random variables with mean µ₂ and variance σ₂², with λ₂ = σ₂/µ₂. Suppose this pair of independent random samples is obtained either from similar types of characteristics or at different times. It is natural to suspect that λ₁ may be equal to λ₂, since the population coefficient of variation is fairly stable over time and across similar types of characteristics. We are interested in estimating λ₁ when it is a priori suspected that both coefficients of variation may be equal, and we wish to pool the information from the two sources. It is advantageous to utilize the information provided by the second sample if both population coefficients of variation are equal. However, in many experimental situations it is not certain whether the coefficients of variation of the two populations are equal or not. This type of problem may be classified as a statistical estimation problem with uncertain prior information (not in the form of a prior distribution). To tackle this uncertainty, it is desirable to perform a preliminary test of the validity of the prior information and then choose between the restricted and classical statistical inference procedures; this may be regarded as a compromise between two extremes. For some account of the parametric theory on the subject we refer to Judge and Bock (1978) and Ahmed and Saleh (1990), among others.
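The unit-free comparisons described above (one stock against another, or against an index) are easy to illustrate. The sketch below uses made-up numbers, and `cv_mle` is our own helper name, not the paper's notation; it computes the maximum-likelihood coefficient of variation, with the variance divided by n rather than n − 1:

```python
import math

def cv_mle(x):
    """Sample coefficient of variation sigma_hat / mu_hat, using the
    maximum-likelihood variance estimator (divide by n, not n - 1)."""
    n = len(x)
    mu = sum(x) / n
    var = sum((xi - mu) ** 2 for xi in x) / n
    return math.sqrt(var) / mu

# Hypothetical closing prices of two stocks trading at different price levels.
stock_a = [98.0, 102.0, 101.0, 99.0, 100.0, 103.0, 97.0]
stock_b = [48.0, 55.0, 51.0, 44.0, 52.0, 50.0, 50.0]

# The CVs are directly comparable even though the price scales are not.
print(cv_mle(stock_a), cv_mle(stock_b))
```

Here stock B would be judged the more volatile, since its standard deviation is larger relative to its own mean, even though stock A's prices are higher in absolute terms.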
Two bibliographies in this area of research are provided by Bancroft and Han (1977) and Han et al. (1988). In addition, the asymptotic theory of preliminary test estimation for some discrete models is discussed by Ahmed (1991a, 1991b), among others. In this paper the parameter of interest is λ₁ and the problem is to estimate it on the basis of the two samples. Five estimation strategies are developed for the estimation of λ₁. We use the mean-square error (MSE) criterion to appraise the performance of the estimators under the squared loss function

L(λ̂₁, λ₁) = (λ̂₁ − λ₁)², ...(1.1)

where λ̂₁ is a suitable estimator of λ₁. Then the MSE of λ̂₁ is given by

MSE(λ̂₁, λ₁) = E(λ̂₁ − λ₁)². ...(1.2)

Further, λ̂₁ will be termed an inadmissible estimator of λ₁ if there exists an alternative estimator λ̂₁* such that

MSE(λ̂₁*) ≤ MSE(λ̂₁) for all λ₁, ...(1.3)

with strict inequality for some λ₁. If, instead of requiring (1.3) for every n, we use the asymptotic MSE (AMSE), then we require

lim_{n→∞} MSE(λ̂₁*) ≤ lim_{n→∞} MSE(λ̂₁) for all λ₁, ...(1.4)

with strict inequality for some λ₁, and λ̂₁ is then termed an asymptotically inadmissible estimator of λ₁.

The proposed estimators are presented in section 2. In section 3, expressions for the asymptotic bias and AMSE of the estimators are derived under local alternatives. The AMSE analysis is provided in section 4. Section 5 discusses how to use the estimators, and an example is provided. Section 6 summarizes the findings.

2. Estimation strategies and preliminaries

If µᵢ and σᵢ², i = 1, 2, are unknown, then the unrestricted maximum likelihood estimator of (µᵢ, σᵢ²) is (µ̃ᵢ, σ̃ᵢ²), where

µ̃ᵢ = (1/nᵢ) Σⱼ xᵢⱼ and σ̃ᵢ² = (1/nᵢ) Σⱼ (xᵢⱼ − µ̃ᵢ)², j = 1, 2, ..., nᵢ. ...(2.1)

The unrestricted estimator (UE) of the i-th coefficient of variation is defined as λ̃ᵢ = σ̃ᵢ/µ̃ᵢ. Alternatively, if λ₁ = λ₂, then

λ̂₁ = (n₁λ̃₁ + n₂λ̃₂)/(n₁ + n₂)

is called the pooled or restricted estimator (RE) of λ₁. The pooled estimator λ̂₁ performs better than the unrestricted estimator λ̃₁ when λ₁ = λ₂, but as λ₂ moves away from λ₁, λ̂₁ may be considerably biased and inefficient, whereas the performance of λ̃₁ remains steady over such departures. In an effort to increase the precision of estimation, it is often desirable to develop an estimator which combines λ̃₁ and λ̂₁ by incorporating a preliminary test of the null hypothesis

H₀ : λ₁ = λ₂. ...(2.2)

As a result, when H₀ is in doubt, it is desirable to have a compromise estimator that uses a preliminary test of H₀ in (2.2) and then chooses between λ̂₁ and λ̃₁ depending on the outcome of the test. This estimator, denoted by λ̂₁^P, is called the preliminary test estimator (PE) of λ₁; it is a convex combination of λ̃₁ and λ̂₁ via a test statistic for testing H₀. Thus,

λ̂₁^P = λ̂₁ I(D < d_α) + λ̃₁ I(D ≥ d_α), ...(2.3)

where I(A) is the indicator function of the set A and D is a test statistic for H₀ in (2.2). For a given level of significance α (0 < α < 1), d_α is the upper 100α% critical value of the distribution of D under H₀; we develop D below. The PE, however, may not perform well over the whole parameter space, and its use may be limited by the large level of significance required. On the other hand, the use of such a large significance level helps to maximize the minimum efficiency of λ̂₁^P relative to λ̃₁ (Ahmed, 1991b). To overcome this shortcoming of the PE, we propose the shrinkage preliminary test estimator (SPE), which may be viewed as an improved version of λ̂₁^P. First, we propose a shrinkage restricted estimator (SRE) of λ₁,

λ̂₁^S = (1 − π)λ̃₁ + πλ̂₁, π ∈ [0, 1], ...(2.4)

as a modification of the restricted estimator of λ₁, where π may be interpreted as a coefficient reflecting the degree of confidence in the prior information. The SRE, like the RE, yields a smaller AMSE at or near the null hypothesis at the expense of poor performance in the rest of the parameter space. However, the SRE dominates the unrestricted estimator over a wider range than the restricted estimator does. This motivates replacing the restricted estimator by the SRE in the PE. The resulting SPE dominates the UE over a wider interval of the parameter space and, more importantly, permits a meaningful size for the preliminary test. Finally, we define the SPE by replacing the restricted estimator by the SRE in the PE:

λ̂₁^SP = {πλ̂₁ + (1 − π)λ̃₁} I(D < d_α) + λ̃₁ I(D ≥ d_α). ...(2.5)

The primary objective here is to study the asymptotic properties of the SPE and to compare them with those of the PE and UE. An optimal rule for the choice of the level of significance α for the preliminary test is provided, along with tables for the optimum selection of α.
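The estimation strategies of this section reduce to a few lines of code. This is a minimal sketch under our own naming (`cv_hat`, `pooled_cv`, `shrinkage_pretest` are not the paper's notation); π = 1 recovers the PE of (2.3), while π < 1 gives the SPE of (2.5):

```python
import math

def cv_hat(x):
    """Unrestricted estimator (UE) of the CV, using the MLE of (2.1)
    (variance divides by n, not n - 1)."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return math.sqrt(var) / mu

def pooled_cv(x1, x2):
    """Restricted estimator (RE): sample-size-weighted average of the two UEs."""
    n1, n2 = len(x1), len(x2)
    return (n1 * cv_hat(x1) + n2 * cv_hat(x2)) / (n1 + n2)

def shrinkage_pretest(lam_ue, lam_re, D, d_alpha, pi=1.0):
    """Equations (2.3)-(2.5): if the preliminary test accepts H0 (D < d_alpha),
    shrink toward the pooled estimate via the SRE of (2.4); otherwise fall
    back on the unrestricted estimate.  pi = 1 gives the ordinary PE."""
    if D < d_alpha:
        return pi * lam_re + (1.0 - pi) * lam_ue
    return lam_ue
```

For instance, with λ̃₁ = 0.058, λ̂₁ = 0.060, D = 1.2, d_α = 3.84 and π = 0.4, the SPE is 0.4 · 0.060 + 0.6 · 0.058 = 0.0588, while D = 5.0 would return the unrestricted 0.058.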
First, we note that since µ̃ᵢ →p µᵢ and σ̃ᵢ →p σᵢ, we have σ̃ᵢ/µ̃ᵢ →p σᵢ/µᵢ, where →p denotes convergence in probability. For the limiting distribution, note that

√nᵢ (σ̃ᵢ/µ̃ᵢ − σᵢ/µᵢ) = (√nᵢ/µ̃ᵢ)(σ̃ᵢ − σᵢ) + {σᵢ/(µ̃ᵢµᵢ)}{−√nᵢ (µ̃ᵢ − µᵢ)},

and it can be shown that

√nᵢ (µ̃ᵢ − µᵢ) →d N(0, σᵢ²), √nᵢ (σ̃ᵢ − σᵢ) →d N(0, (κ₄ − 1)σᵢ²/4),

σᵢ/(µ̃ᵢµᵢ) →p σᵢ/µᵢ², 1/µ̃ᵢ →p 1/µᵢ,

where →d denotes convergence in distribution and κ₄ = µ₄ᵢ/σᵢ⁴ is the kurtosis of the distribution. Thus we have the following lemma.

Lemma 2.1. If Xᵢ₁, Xᵢ₂, ..., Xᵢnᵢ is a random sample of size nᵢ from a normal distribution with mean µᵢ (µᵢ ≠ 0) and variance σᵢ², then

√nᵢ (σ̃ᵢ/µ̃ᵢ − σᵢ/µᵢ) →d N(0, σᵢ²/(2µᵢ²) + σᵢ⁴/µᵢ⁴).

We develop a large-sample test statistic for testing H₀,

D_n = (λ̃₂ − λ̃₁)² / {τ̂²(1/n₁ + 1/n₂)},

where τ̂² = ½λ̂₁² + λ̂₁⁴. For large n₁, n₂, the distribution of D_n approaches the central chi-square distribution with 1 degree of freedom, so the critical value d_α of D_n may be approximated by χ²₁,α, the upper 100α% critical value of the chi-square distribution with 1 degree of freedom.

To avoid asymptotic degeneracy, we specify a sequence of local alternatives; this setting was also used in Kulperger and Ahmed (1992), among others. The local alternative setting is reasonable here since estimators based on the preliminary test principle are useful only when λ₁ and λ₂ are close. Accordingly, a sequence {K_n} of local alternatives is considered: given a sample of size n = n₁ + n₂, λ₂ is replaced by λ₁ + ξ/√n, where ξ is a fixed real number,

K_n : λ₂ = λ₁ + ξ/√n. ...(2.6)

3. Bias and mean squared error of the estimators

In this section, expressions for the bias and mean squared error of the estimators are obtained. The asymptotic biases of the proposed estimators are given in Theorem 3.1.

Theorem 3.1. Suppose that under the local alternatives (2.6), n → ∞ in such a way that the relative sample sizes converge, n₁/n → γ ∈ (0, 1). By direct computation and using results from Judge and Bock (1978, Appendix B), the asymptotic biases of the estimators are:

B₁ = asymptotic bias of √(γn)(λ̃₁ − λ₁) = 0
B₂ = asymptotic bias of √(γn)(λ̂₁ − λ₁) = √(1 − γ) τδ

B₃ = asymptotic bias of √(γn)(λ̂₁^S − λ₁) = π√(1 − γ) τδ
B₄ = asymptotic bias of √(γn)(λ̂₁^P − λ₁) = √(1 − γ) τδ G₃(χ²₁,α; δ²)
B₅ = asymptotic bias of √(γn)(λ̂₁^SP − λ₁) = π√(1 − γ) τδ G₃(χ²₁,α; δ²),

where δ² = γ(1 − γ)ξ²/τ², τ² = ½λ₁² + λ₁⁴, and G_m(·; δ²) is the cumulative distribution function of a noncentral chi-square distribution with m degrees of freedom and noncentrality parameter δ².
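The statistic D_n of section 2, whose noncentral chi-square behavior under the local alternatives (2.6) produces the noncentrality parameter δ² above, can be sketched as follows (our own naming; τ̂² is estimated with the pooled CV, as in the text):

```python
import math

def cv_hat(x):
    """Unrestricted CV estimate (MLE variance, divided by n)."""
    n = len(x)
    mu = sum(x) / n
    return math.sqrt(sum((v - mu) ** 2 for v in x) / n) / mu

def d_statistic(x1, x2):
    """Large-sample statistic for H0: lambda_1 = lambda_2; approximately
    central chi-square with 1 df under H0, noncentral under (2.6)."""
    n1, n2 = len(x1), len(x2)
    l1, l2 = cv_hat(x1), cv_hat(x2)
    lam_pooled = (n1 * l1 + n2 * l2) / (n1 + n2)   # pooled CV under H0
    tau2_hat = 0.5 * lam_pooled ** 2 + lam_pooled ** 4
    return (l2 - l1) ** 2 / (tau2_hat * (1.0 / n1 + 1.0 / n2))
```

Comparing `d_statistic(x1, x2)` with χ²₁,α (3.84 for α = 0.05) implements the indicator in (2.3) and (2.5). Note that because the CV is scale-invariant, rescaling one sample leaves the statistic unchanged.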

It is easily seen that B₃ = πB₂ and B₅ = πB₄. Thus, for π ∈ (0, 1), |B₃| < |B₂| and |B₅| < |B₄|; hence the SPE and SRE are superior to the PE and RE respectively, and the shrinkage technique may also be viewed as a bias reduction technique from the point of view of asymptotic bias. Figure 1 displays the behavior of the bias functions of the SPE and SRE in terms of δ² for various values of π at selected values of α, with γ = 0.5. Large values of α are used simply to exhibit their effect on the magnitude of the bias functions (Han and Bancroft, 1968). Since the bias is a component of the AMSE, and control of the AMSE controls both bias and variance, we compare only the AMSE from this point onward. Direct computation provides the expressions for the asymptotic mean squared errors (AMSE) of the estimators, presented in Theorem 3.2.

Theorem 3.2. Under the local alternatives (2.6), and using results from Judge and Bock (1978, Appendix B), the AMSEs of the estimators are:

AMSE₁(δ²) = asymptotic MSE of √(γn)(λ̃₁ − λ₁) = τ²
AMSE₂(δ²) = asymptotic MSE of √(γn)(λ̂₁ − λ₁) = τ² + (1 − γ)τ²δ² − (1 − γ)τ²
AMSE₃(δ²) = asymptotic MSE of √(γn)(λ̂₁^S − λ₁) = τ² + π²(1 − γ)τ²δ² − π(2 − π)(1 − γ)τ²
AMSE₄(δ²) = asymptotic MSE of √(γn)(λ̂₁^P − λ₁) = τ² + (1 − γ)τ²δ²{2G₃(χ²₁,α; δ²) − G₅(χ²₁,α; δ²)} − (1 − γ)τ²G₃(χ²₁,α; δ²)
AMSE₅(δ²) = asymptotic MSE of √(γn)(λ̂₁^SP − λ₁) = τ² + (1 − γ)τ²δ²{2πG₃(χ²₁,α; δ²) − π(2 − π)G₅(χ²₁,α; δ²)} − π(2 − π)(1 − γ)τ²G₃(χ²₁,α; δ²).

In the following section the asymptotic properties of the estimators are studied.

4. Asymptotic analysis

The AMSE of λ̃₁ is a constant, while the AMSE of λ̂₁ is a straight line in δ² which intersects the AMSE of λ̃₁ at δ² = 1. Using the AMSE criterion, if the restriction λ₁ = λ₂ is correct then the AMSE of λ̂₁ is less than that of λ̃₁; more precisely, AMSE₂ ≤ AMSE₁ if 0 ≤ δ² ≤ 1. Hence, for δ² ∈ [0, 1], λ̂₁ performs better than λ̃₁.
However, beyond this interval the AMSE of λ̂₁ increases without bound. The characteristics of the AMSE function of λ̂₁^S are similar to those of λ̂₁. It is

worth noting that λ̂₁^S dominates λ̃₁ when 0 ≤ δ² ≤ (2 − π)/π. Thus the range over which AMSE₃ ≤ AMSE₁ is wider than the range over which AMSE₂ ≤ AMSE₁. To identify some important characteristics of the shrinkage preliminary test estimator, first note that

G₃(χ²₁,α; δ²) ≤ G₁(χ²₁,α; δ²) ≤ G₁(χ²₁,α; 0) = 1 − α, ...(4.1)

for α ∈ (0, 1) and δ² ≥ 0; the left-hand side of (4.1) converges to 0 as δ² approaches infinity. Now we compare the AMSE of the SPE with that of the UE:

AMSE₅ ⋛ AMSE₁ according as δ² ⋛ (2 − π)G₃(χ²₁,α; δ²){2G₃(χ²₁,α; δ²) − (2 − π)G₅(χ²₁,α; δ²)}⁻¹. ...(4.2)

Thus λ̂₁^SP dominates λ̃₁ whenever

δ² ≤ (2 − π)G₃(χ²₁,α; δ²){2G₃(χ²₁,α; δ²) − (2 − π)G₅(χ²₁,α; δ²)}⁻¹. ...(4.3)

It is evident from (4.3) that the AMSE of the SPE is less than that of the UE when δ² is at or near 0. Moreover, as α, the level of statistical significance, approaches one, AMSE₅ tends to AMSE₁. Also, as δ² increases and tends to infinity, the AMSE of the SPE approaches that of the UE: for larger values of δ², the AMSE of the SPE increases, reaches its maximum after crossing the AMSE of the UE, and then monotonically decreases toward the AMSE of the UE. Therefore there are points in the parameter space where the SPE has larger AMSE than the UE, and a sufficient condition for this to occur is that δ² exceeds the right-hand side of (4.2). Letting α → 0 in (4.3), the condition becomes

0 ≤ δ² ≤ (2 − π)/π. ...(4.4)

Thus the AMSE of the PE is less than that of the UE for 0 ≤ δ² ≤ 1 as α → 0, whereas the AMSE of the SPE is less than that of the UE whenever (4.4) holds. Hence the range over which AMSE₅ ≤ AMSE₁ is wider than the range over which AMSE₄ ≤ AMSE₁: the shrinkage preliminary test estimator provides a wider range than λ̂₁^P over which its AMSE is smaller than the AMSE of λ̃₁. This indicates the superiority of the SPE over the PE. Next, we investigate the dominance range for λ̂₁^SP and λ̂₁.
Note that under the null hypothesis,

AMSE₅ − AMSE₂ = τ²(1 − γ)[1 − {1 − (1 − π)²}G₃(χ²₁,α; 0)] > 0, for α ∈ (0, 1).

Alternatively, as δ² moves away from the null hypothesis, the AMSE of λ̂₁ grows without bound whereas the AMSE of λ̂₁^SP remains bounded: departure from H₀ is fatal to λ̂₁, whereas λ̂₁^SP retains good AMSE control.
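The AMSE expressions and comparisons above can be checked numerically. The sketch below is our own illustrative implementation using only the standard library: G_m(·; δ²) is evaluated as a Poisson mixture of central chi-square CDFs, each computed from the regularized incomplete gamma series, and the critical value χ²₁,0.05 = 3.841 is taken from standard tables:

```python
import math

def reg_lower_gamma(a, x, terms=200):
    """Regularized lower incomplete gamma P(a, x) via its power series."""
    if x <= 0.0:
        return 0.0
    total, term = 0.0, 1.0 / a
    for n in range(1, terms):
        total += term
        term *= x / (a + n)
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def chi2_cdf(x, k):
    """Central chi-square CDF with k degrees of freedom."""
    return reg_lower_gamma(k / 2.0, x / 2.0)

def ncx2_cdf(x, k, nc, jmax=200):
    """Noncentral chi-square CDF G_k(x; nc): Poisson(nc/2) mixture of
    central chi-square CDFs with k, k+2, k+4, ... degrees of freedom."""
    if nc == 0.0:
        return chi2_cdf(x, k)
    half = nc / 2.0
    return sum(math.exp(-half + j * math.log(half) - math.lgamma(j + 1))
               * chi2_cdf(x, k + 2 * j) for j in range(jmax))

def amse(delta2, crit, pi, gamma, tau2=1.0):
    """The five AMSE expressions of Theorem 3.2; crit = chi^2_{1,alpha}."""
    g3 = ncx2_cdf(crit, 3, delta2)
    g5 = ncx2_cdf(crit, 5, delta2)
    q = 1.0 - gamma
    return {
        "UE":  tau2,
        "RE":  tau2 * (1.0 + q * delta2 - q),
        "SRE": tau2 * (1.0 + pi**2 * q * delta2 - pi * (2.0 - pi) * q),
        "PE":  tau2 * (1.0 + q * delta2 * (2.0 * g3 - g5) - q * g3),
        "SPE": tau2 * (1.0 + q * delta2 * (2.0 * pi * g3 - pi * (2.0 - pi) * g5)
                       - pi * (2.0 - pi) * q * g3),
    }

# At delta^2 = 0 the null-hypothesis ordering appears: RE < SRE < PE < SPE < UE.
print(amse(0.0, 3.841, pi=0.5, gamma=0.5))
```

Evaluating `amse` on a grid of δ² values reproduces the crossing behavior described in the text: the RE line overtakes the constant UE level at δ² = 1, while the PE and SPE curves rise above the UE level and then fall back toward it.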

Now we compare the AMSE behavior of λ̂₁^SP and λ̂₁^S and determine the dominance range of these estimators in terms of the noncentrality parameter δ². Under the null hypothesis, i.e. δ² = 0, the AMSE of λ̂₁^S is τ²{1 − π(2 − π)(1 − γ)} and AMSE₅ − AMSE₃ > 0 for α ∈ (0, 1); hence under the null hypothesis λ̂₁^S performs better than λ̂₁^SP. Furthermore, the AMSE of λ̂₁^S is smaller than the AMSE of λ̂₁^SP when

δ²{π − 2G₃(χ²₁,α; δ²) + (2 − π)G₅(χ²₁,α; δ²)} < (2 − π){1 − G₃(χ²₁,α; δ²)},

and the opposite conclusion holds when the inequality is reversed. Hence neither λ̂₁^SP nor λ̂₁^S asymptotically dominates the other under local alternatives.

We now compare the AMSE of the SPE and PE and determine the conditions under which the SPE dominates the PE. Note that

AMSE₄ − AMSE₅ = τ²(1 − γ)δ²{2(1 − π)G₃(χ²₁,α; δ²) − (1 − π)²G₅(χ²₁,α; δ²)} − τ²(1 − γ)(1 − π)²G₃(χ²₁,α; δ²). ...(4.5)

It is clear from (4.5) that the AMSE of λ̂₁^P is smaller than that of λ̂₁^SP in the neighborhood of δ² = 0, although the difference may be negligible for larger values of π. As δ² increases, however, the AMSE difference in (4.5) becomes positive and λ̂₁^SP dominates λ̂₁^P uniformly in the rest of the parameter space. Let δ²_π be the point in the parameter space at which the AMSEs of the SPE and PE intersect for a given π. Then for δ² ∈ (0, δ²_π] the PE performs better than the SPE, while for δ² ∈ (δ²_π, ∞) the SPE dominates the PE uniformly. For values of π close to 1 the interval (0, δ²_π] may be negligible. Nevertheless, the PE and SPE share a common asymptotic property: as δ² → ∞, their AMSEs converge to a common limit, namely the AMSE of λ̃₁.
In addition, from (4.5),

AMSE₄ ⋛ AMSE₅ according as δ² ⋛ (1 − π)G₃(χ²₁,α; δ²){2G₃(χ²₁,α; δ²) − (1 − π)G₅(χ²₁,α; δ²)}⁻¹. ...(4.6)

Thus the SPE dominates the PE whenever

δ² ≥ (1 − π)G₃(χ²₁,α; δ²){2G₃(χ²₁,α; δ²) − (1 − π)G₅(χ²₁,α; δ²)}⁻¹. ...(4.7)

Letting α → 0 in (4.6), the range of δ² becomes

0 ≤ δ² ≤ (1 − π)/(1 + π).

Thus the PE dominates the SPE only for δ² ∈ [0, (1 − π)/(1 + π)], which suggests that π should be chosen large. For example, if π = 0.9, then the PE dominates the SPE only on the tiny interval [0, 1/19]. We have plotted the AMSE of λ̃₁, λ̂₁^S and λ̂₁^SP against δ² for π = 0.1, 0.3, 0.5, γ = 0.5, and selected values of α; figure 2 exhibits the properties of these estimators. It appears from figure 2 that for smaller levels of significance, with π fixed, the variation in the AMSE functions is greater. Large α values are used in figure 2 to observe the spread between the maximum and minimum AMSE of the selected estimators. Moreover, the larger the value of π, the greater the variation in the AMSE of λ̂₁^SP. It may also be seen that for smaller values of γ,

when π and α are fixed, the variation in the AMSE functions is greater. We conclude that none of the estimators is inadmissible relative to any of the others under the local alternatives. Under the null hypothesis, however, the dominance picture of the estimators is given by the following theorem.

Theorem 4.1. None of the five estimators is inadmissible with respect to the other four. However, at δ² = 0 the estimators may be ordered according to the magnitude of their AMSEs as

λ̂₁ ⪰ λ̂₁^S ⪰ λ̂₁^P ⪰ λ̂₁^SP ⪰ λ̃₁, for a range of π, ...(4.8)

where ⪰ denotes domination.

To compare the performance of the estimators it is conventional to consider the asymptotic relative efficiency (ARE) of an estimator λ̂₁* relative to λ̃₁, defined by ARE = AMSE(λ̃₁, λ₁)/AMSE(λ̂₁*, λ₁), keeping in mind that a value of ARE greater than 1 signifies improvement over λ̃₁. The ARE of λ̂₁^SP relative to λ̃₁ is given by

ARE₁(α, δ², π) = AMSE(λ̃₁, λ₁)/AMSE(λ̂₁^SP, λ₁) = 1/{1 + Ψ(δ²)}, ...(4.9)

where

Ψ(δ²) = δ²(1 − γ){2πG₃(χ²₁,α; δ²) − π(2 − π)G₅(χ²₁,α; δ²)} − π(2 − π)(1 − γ)G₃(χ²₁,α; δ²). ...(4.10)

Similarly, the asymptotic relative efficiency of λ̂₁^SP relative to λ̂₁ is given by

ARE₂(α, δ², π) = AMSE(λ̂₁, λ₁)/AMSE(λ̂₁^SP, λ₁) = {γ + (1 − γ)δ²}/{1 + Ψ(δ²)}. ...(4.11)

We now analyze the asymptotic relative efficiencies, beginning with λ̂₁^SP relative to λ̃₁. ARE₁ is a function of α, δ² and π. For α > 0 it attains its maximum at δ² = 0, with value

E* = {1 − π(2 − π)(1 − γ)G₃(χ²₁,α; 0)}⁻¹ (≥ 1). ...(4.12)

Moreover, for fixed α and π, ARE₁ decreases as δ² increases from 0, crosses the line ARE₁ = 1, attains its minimum at a point δ²₀, and then increases asymptotically to 1. For fixed π, E* is a decreasing function of α, while the minimum efficiency is an increasing function of α. We have plotted ARE₁(α, δ², π) against δ² for γ = 0.5 and α = 0.05, 0.10, 0.20, at selected values of π, in figure 3.
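For 3 degrees of freedom the central chi-square CDF has a closed form, so the maximum efficiency E* at δ² = 0 is easy to evaluate. The sketch below uses our own names; the critical values 3.841 and 1.642 are the standard χ²₁ table entries for α = 0.05 and α = 0.20:

```python
import math

def chi2_cdf_3df(x):
    """Central chi-square CDF with 3 df (closed form):
    F(x) = erf(sqrt(x/2)) - sqrt(2x/pi) * exp(-x/2)."""
    return math.erf(math.sqrt(x / 2.0)) - math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

def max_efficiency(crit, pi, gamma):
    """E* of (4.12): ARE_1 at delta^2 = 0, always >= 1."""
    return 1.0 / (1.0 - pi * (2.0 - pi) * (1.0 - gamma) * chi2_cdf_3df(crit))

# chi^2_{1,alpha} = 3.841 for alpha = 0.05 and 1.642 for alpha = 0.20.
e_05 = max_efficiency(3.841, pi=0.4, gamma=0.4)
e_20 = max_efficiency(1.642, pi=0.4, gamma=0.4)
print(e_05, e_20)
```

With π = 0.4 and γ = 0.4 this gives E* ≈ 1.38 at α = 0.05, agreeing with the maximum efficiency quoted in example 5.1, and a smaller E* at α = 0.20, illustrating that E* is a decreasing function of α.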

It appears from figure 3 that the smaller the value of α, the greater the variation in the ARE. On the other hand, for any fixed α, the maximum value of the ARE is an increasing function of π while the minimum efficiency is a decreasing function of π; the shrinkage factor π may thus be viewed as a factor controlling the spread between the maximum and minimum efficiencies. Next, we compare λ̂₁^SP with λ̂₁. First note that under the null hypothesis,

ARE₂(α, 0, π) = γ/{1 − π(2 − π)(1 − γ)G₃(χ²₁,α; 0)} (≥ γ).

Thus, γ ≤ ARE₂(α, 0, π) ≤ 1 ≤ ARE₁(α, 0, π). On the other hand, as δ² moves away from the null hypothesis,

ARE₂(α, δ², π) ≥ 1 whenever δ² ≥ 1 + (1 − γ)⁻¹Ψ(δ²).

Thus, except when δ² is small, λ̂₁^SP is relatively more efficient than λ̂₁. Finally, note that ARE₁ of λ̂₁^SP depends on α, the size of the preliminary test, which must be determined by the user. One method of determining α is to compute the minimum guaranteed efficiency; we outline this procedure in the following section.

5. Maximum efficiency criterion for selecting the estimators

The ARE of λ̂₁^SP is a function of δ², α and π, and one method of determining α and π is to use a maxmin rule given by Ahmed (1992), among others. To use this rule we preassign a value E₀ of the minimum efficiency that we are willing to accept. Consider the set

A = {(α, π) : ARE(α, π, δ²) ≥ E₀ for all δ²}. ...(5.1)

The estimator chosen maximizes ARE(α, π, δ²) over all (α, π) ∈ A and δ². Thus we solve for α* and π* such that

sup_{(α,π)∈A} {inf_{δ²} ARE(α, π, δ²)} = E(α*, π*) = E₀. ...(5.2)

For given π = π₀, we determine the value α* such that

sup_{α∈A} {inf_{δ²} ARE(α, π₀, δ²)} = E(α*, π₀) = E₀. ...(5.3)
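The maxmin rule (5.3) can be sketched numerically. Everything below is our own illustrative implementation: G_m is evaluated as a Poisson mixture of central chi-square CDFs, the (α, χ²₁,α) pairs come from standard tables, and the infimum over δ² is approximated on a finite grid:

```python
import math

def _reg_gamma_p(a, x, terms=120):
    """Regularized lower incomplete gamma P(a, x) (series expansion)."""
    if x <= 0.0:
        return 0.0
    total, term = 0.0, 1.0 / a
    for n in range(1, terms):
        total += term
        term *= x / (a + n)
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def _ncx2_cdf(x, k, nc, jmax=120):
    """Noncentral chi-square CDF with k df and noncentrality nc."""
    if nc == 0.0:
        return _reg_gamma_p(k / 2.0, x / 2.0)
    half = nc / 2.0
    return sum(math.exp(-half + j * math.log(half) - math.lgamma(j + 1))
               * _reg_gamma_p((k + 2 * j) / 2.0, x / 2.0) for j in range(jmax))

def are1(delta2, crit, pi, gamma):
    """ARE_1 of (4.9)-(4.10); crit = chi^2_{1,alpha}."""
    g3 = _ncx2_cdf(crit, 3, delta2)
    g5 = _ncx2_cdf(crit, 5, delta2)
    psi = ((1.0 - gamma) * delta2 * (2.0 * pi * g3 - pi * (2.0 - pi) * g5)
           - pi * (2.0 - pi) * (1.0 - gamma) * g3)
    return 1.0 / (1.0 + psi)

def min_are(crit, pi, gamma):
    """Approximate inf over delta^2 of ARE_1, on a grid of delta^2 values."""
    return min(are1(0.5 * i, crit, pi, gamma) for i in range(61))

# (alpha, chi^2_{1,alpha}) pairs from standard tables.  Since the minimum
# efficiency increases with alpha, the smallest alpha on the grid whose
# guaranteed efficiency reaches the target E_o realizes the maxmin rule.
grid = [(0.05, 3.841), (0.10, 2.706), (0.20, 1.642), (0.30, 1.074), (0.50, 0.455)]
target_eff = 0.80
alpha_star = next((a for a, c in grid if min_are(c, pi=0.4, gamma=0.4) >= target_eff), None)
print(alpha_star)
```

This is the discrete analogue of (5.3): picking a larger α than necessary would sacrifice maximum efficiency E* without improving the guaranteed minimum beyond the target.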

TABLE 1. MAXIMUM AND MINIMUM ASYMPTOTIC GUARANTEED EFFICIENCIES FOR γ = 0.4.

TABLE 2. MAXIMUM AND MINIMUM ASYMPTOTIC GUARANTEED EFFICIENCIES FOR γ = 0.5.

TABLE 3. MAXIMUM AND MINIMUM ASYMPTOTIC GUARANTEED EFFICIENCIES FOR γ = 0.6.

Tables 1-3 provide the maximum relative efficiencies E*, the minimum relative efficiencies E₀, and the value δ²₀ at which the minimum occurs, for π = 0.1(0.1)1.0 and γ = 0.4, 0.5, 0.6 respectively. Tables for other values of γ were also prepared but are not reproduced here to save space. These tables use the same values of α as Han and Bancroft (1968), among others; the maximum and minimum efficiencies may, however, be readily computed for smaller values of α. We note that for smaller values of γ, with π and α held constant, the spread between the maximum ARE and the minimum ARE is greater. In fact, the maximum ARE is a decreasing function of γ while the minimum ARE is an increasing function of γ. Furthermore, it is evident from the tables that the maximum relative efficiency E* increases with π while the minimum relative efficiency E₀ decreases; hence there does not exist a π satisfying (5.2). The value of π can be determined by the researcher according to his prior belief in the uncertain prior information. However, we recommend the following step for selecting the size of the preliminary test. Suppose the experimenter does not know the size of the test but asserts π = π₀ and wishes to accept an estimator whose relative efficiency is not less than E₀. Then the maxmin principle determines α = α* such that ARE(α*, π₀, δ²) = E₀. Therefore, a user who wishes to find a good alternative to the SRE, RE and UE need only specify the minimum relative efficiency E₀.

Example 5.1. Suppose times were recorded for the 100 meter and 1000 meter runners of the Canadian pre-Olympic track team. After viewing the sample data of running times, one of the coaches commented that the relative variation in the running times is the same for both races.
Since the standard deviation is not a good measure of the variability in the runners' ability in this case, the coaches wish to estimate the coefficient of variation from the two data sets. The sample means and sample variances of the 100 meter and 1000 meter running times (in minutes), each based on 30 observations, were calculated, and the coefficients of variation of both data sets computed; in particular, λ̃₁ = 0.058. Since both samples are reasonably large, the critical value for the distance statistic D_n may be determined using table 1. If the coaches take π = 0.4 and seek an estimator with a minimum ARE of at least 0.80, then from table 1 α* = 0.05; such a choice of α yields an estimator with maximum efficiency 1.38 at δ² = 0 and a minimum guaranteed efficiency of 0.80 at δ²₀ = 5.2. The critical value based on 1 degree of freedom is 3.84, and the calculated value of the test statistic D_n is not significant. Hence the estimate of λ₁ is λ̂₁^SP = λ̂₁^S. On the other hand, if the user wishes to rely on the sample data completely and uses π = 1 when H₀ is accepted, then from table 1 the size of the preliminary test will be approximately 0.20; in addition, the maximum efficiency drops from 1.38 to 1.20. The use of the PE may be limited due to the

large size of α, the level of significance, required, as compared with the SPE: the SPE has a remarkable edge over the PE with respect to the size of the preliminary test.

6. Concluding remarks

We have presented the shrinkage preliminary test estimator of the coefficient of variation of a normal distribution when an additional sample is available.


More information

Confidence Intervals for Cp

Confidence Intervals for Cp Chapter 296 Confidence Intervals for Cp Introduction This routine calculates the sample size needed to obtain a specified width of a Cp confidence interval at a stated confidence level. Cp is a process

More information

Summary of Formulas and Concepts. Descriptive Statistics (Ch. 1-4)

Summary of Formulas and Concepts. Descriptive Statistics (Ch. 1-4) Summary of Formulas and Concepts Descriptive Statistics (Ch. 1-4) Definitions Population: The complete set of numerical information on a particular quantity in which an investigator is interested. We assume

More information

Hypothesis Testing for Beginners

Hypothesis Testing for Beginners Hypothesis Testing for Beginners Michele Piffer LSE August, 2011 Michele Piffer (LSE) Hypothesis Testing for Beginners August, 2011 1 / 53 One year ago a friend asked me to put down some easy-to-read notes

More information

Covariance and Correlation

Covariance and Correlation Covariance and Correlation ( c Robert J. Serfling Not for reproduction or distribution) We have seen how to summarize a data-based relative frequency distribution by measures of location and spread, such

More information

Nonparametric adaptive age replacement with a one-cycle criterion

Nonparametric adaptive age replacement with a one-cycle criterion Nonparametric adaptive age replacement with a one-cycle criterion P. Coolen-Schrijner, F.P.A. Coolen Department of Mathematical Sciences University of Durham, Durham, DH1 3LE, UK e-mail: Pauline.Schrijner@durham.ac.uk

More information

ECON 142 SKETCH OF SOLUTIONS FOR APPLIED EXERCISE #2

ECON 142 SKETCH OF SOLUTIONS FOR APPLIED EXERCISE #2 University of California, Berkeley Prof. Ken Chay Department of Economics Fall Semester, 005 ECON 14 SKETCH OF SOLUTIONS FOR APPLIED EXERCISE # Question 1: a. Below are the scatter plots of hourly wages

More information

A logistic approximation to the cumulative normal distribution

A logistic approximation to the cumulative normal distribution A logistic approximation to the cumulative normal distribution Shannon R. Bowling 1 ; Mohammad T. Khasawneh 2 ; Sittichai Kaewkuekool 3 ; Byung Rae Cho 4 1 Old Dominion University (USA); 2 State University

More information

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE YUAN TIAN This synopsis is designed merely for keep a record of the materials covered in lectures. Please refer to your own lecture notes for all proofs.

More information

Institute of Actuaries of India Subject CT3 Probability and Mathematical Statistics

Institute of Actuaries of India Subject CT3 Probability and Mathematical Statistics Institute of Actuaries of India Subject CT3 Probability and Mathematical Statistics For 2015 Examinations Aim The aim of the Probability and Mathematical Statistics subject is to provide a grounding in

More information

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

More information

Exact Confidence Intervals

Exact Confidence Intervals Math 541: Statistical Theory II Instructor: Songfeng Zheng Exact Confidence Intervals Confidence intervals provide an alternative to using an estimator ˆθ when we wish to estimate an unknown parameter

More information

Simple Linear Regression Inference

Simple Linear Regression Inference Simple Linear Regression Inference 1 Inference requirements The Normality assumption of the stochastic term e is needed for inference even if it is not a OLS requirement. Therefore we have: Interpretation

More information

Two-Sample T-Tests Assuming Equal Variance (Enter Means)

Two-Sample T-Tests Assuming Equal Variance (Enter Means) Chapter 4 Two-Sample T-Tests Assuming Equal Variance (Enter Means) Introduction This procedure provides sample size and power calculations for one- or two-sided two-sample t-tests when the variances of

More information

Auxiliary Variables in Mixture Modeling: 3-Step Approaches Using Mplus

Auxiliary Variables in Mixture Modeling: 3-Step Approaches Using Mplus Auxiliary Variables in Mixture Modeling: 3-Step Approaches Using Mplus Tihomir Asparouhov and Bengt Muthén Mplus Web Notes: No. 15 Version 8, August 5, 2014 1 Abstract This paper discusses alternatives

More information

Dongfeng Li. Autumn 2010

Dongfeng Li. Autumn 2010 Autumn 2010 Chapter Contents Some statistics background; ; Comparing means and proportions; variance. Students should master the basic concepts, descriptive statistics measures and graphs, basic hypothesis

More information

Testing for Granger causality between stock prices and economic growth

Testing for Granger causality between stock prices and economic growth MPRA Munich Personal RePEc Archive Testing for Granger causality between stock prices and economic growth Pasquale Foresti 2006 Online at http://mpra.ub.uni-muenchen.de/2962/ MPRA Paper No. 2962, posted

More information

research/scientific includes the following: statistical hypotheses: you have a null and alternative you accept one and reject the other

research/scientific includes the following: statistical hypotheses: you have a null and alternative you accept one and reject the other 1 Hypothesis Testing Richard S. Balkin, Ph.D., LPC-S, NCC 2 Overview When we have questions about the effect of a treatment or intervention or wish to compare groups, we use hypothesis testing Parametric

More information

Two-Sample T-Tests Allowing Unequal Variance (Enter Difference)

Two-Sample T-Tests Allowing Unequal Variance (Enter Difference) Chapter 45 Two-Sample T-Tests Allowing Unequal Variance (Enter Difference) Introduction This procedure provides sample size and power calculations for one- or two-sided two-sample t-tests when no assumption

More information

1 Sufficient statistics

1 Sufficient statistics 1 Sufficient statistics A statistic is a function T = rx 1, X 2,, X n of the random sample X 1, X 2,, X n. Examples are X n = 1 n s 2 = = X i, 1 n 1 the sample mean X i X n 2, the sample variance T 1 =

More information

Some stability results of parameter identification in a jump diffusion model

Some stability results of parameter identification in a jump diffusion model Some stability results of parameter identification in a jump diffusion model D. Düvelmeyer Technische Universität Chemnitz, Fakultät für Mathematik, 09107 Chemnitz, Germany Abstract In this paper we discuss

More information

1 Portfolio mean and variance

1 Portfolio mean and variance Copyright c 2005 by Karl Sigman Portfolio mean and variance Here we study the performance of a one-period investment X 0 > 0 (dollars) shared among several different assets. Our criterion for measuring

More information

Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page

Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page Errata for ASM Exam C/4 Study Manual (Sixteenth Edition) Sorted by Page 1 Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page Practice exam 1:9, 1:22, 1:29, 9:5, and 10:8

More information

Confidence Intervals for Exponential Reliability

Confidence Intervals for Exponential Reliability Chapter 408 Confidence Intervals for Exponential Reliability Introduction This routine calculates the number of events needed to obtain a specified width of a confidence interval for the reliability (proportion

More information

Least-Squares Intersection of Lines

Least-Squares Intersection of Lines Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a

More information

Understanding the Impact of Weights Constraints in Portfolio Theory

Understanding the Impact of Weights Constraints in Portfolio Theory Understanding the Impact of Weights Constraints in Portfolio Theory Thierry Roncalli Research & Development Lyxor Asset Management, Paris thierry.roncalli@lyxor.com January 2010 Abstract In this article,

More information

Measuring Line Edge Roughness: Fluctuations in Uncertainty

Measuring Line Edge Roughness: Fluctuations in Uncertainty Tutor6.doc: Version 5/6/08 T h e L i t h o g r a p h y E x p e r t (August 008) Measuring Line Edge Roughness: Fluctuations in Uncertainty Line edge roughness () is the deviation of a feature edge (as

More information

Chapter 3 RANDOM VARIATE GENERATION

Chapter 3 RANDOM VARIATE GENERATION Chapter 3 RANDOM VARIATE GENERATION In order to do a Monte Carlo simulation either by hand or by computer, techniques must be developed for generating values of random variables having known distributions.

More information

Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

More information

Chapter 4: Statistical Hypothesis Testing

Chapter 4: Statistical Hypothesis Testing Chapter 4: Statistical Hypothesis Testing Christophe Hurlin November 20, 2015 Christophe Hurlin () Advanced Econometrics - Master ESA November 20, 2015 1 / 225 Section 1 Introduction Christophe Hurlin

More information

5.1 Identifying the Target Parameter

5.1 Identifying the Target Parameter University of California, Davis Department of Statistics Summer Session II Statistics 13 August 20, 2012 Date of latest update: August 20 Lecture 5: Estimation with Confidence intervals 5.1 Identifying

More information

Confidence Intervals for One Standard Deviation Using Standard Deviation

Confidence Intervals for One Standard Deviation Using Standard Deviation Chapter 640 Confidence Intervals for One Standard Deviation Using Standard Deviation Introduction This routine calculates the sample size necessary to achieve a specified interval width or distance from

More information

Least Squares Estimation

Least Squares Estimation Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

More information

Probability Calculator

Probability Calculator Chapter 95 Introduction Most statisticians have a set of probability tables that they refer to in doing their statistical wor. This procedure provides you with a set of electronic statistical tables that

More information

Fuzzy Differential Systems and the New Concept of Stability

Fuzzy Differential Systems and the New Concept of Stability Nonlinear Dynamics and Systems Theory, 1(2) (2001) 111 119 Fuzzy Differential Systems and the New Concept of Stability V. Lakshmikantham 1 and S. Leela 2 1 Department of Mathematical Sciences, Florida

More information

SIMULATION STUDIES IN STATISTICS WHAT IS A SIMULATION STUDY, AND WHY DO ONE? What is a (Monte Carlo) simulation study, and why do one?

SIMULATION STUDIES IN STATISTICS WHAT IS A SIMULATION STUDY, AND WHY DO ONE? What is a (Monte Carlo) simulation study, and why do one? SIMULATION STUDIES IN STATISTICS WHAT IS A SIMULATION STUDY, AND WHY DO ONE? What is a (Monte Carlo) simulation study, and why do one? Simulations for properties of estimators Simulations for properties

More information

LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING

LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING In this lab you will explore the concept of a confidence interval and hypothesis testing through a simulation problem in engineering setting.

More information

Online Appendix to Stochastic Imitative Game Dynamics with Committed Agents

Online Appendix to Stochastic Imitative Game Dynamics with Committed Agents Online Appendix to Stochastic Imitative Game Dynamics with Committed Agents William H. Sandholm January 6, 22 O.. Imitative protocols, mean dynamics, and equilibrium selection In this section, we consider

More information

Chapter 4 Statistical Inference in Quality Control and Improvement. Statistical Quality Control (D. C. Montgomery)

Chapter 4 Statistical Inference in Quality Control and Improvement. Statistical Quality Control (D. C. Montgomery) Chapter 4 Statistical Inference in Quality Control and Improvement 許 湘 伶 Statistical Quality Control (D. C. Montgomery) Sampling distribution I a random sample of size n: if it is selected so that the

More information

Notes on Continuous Random Variables

Notes on Continuous Random Variables Notes on Continuous Random Variables Continuous random variables are random quantities that are measured on a continuous scale. They can usually take on any value over some interval, which distinguishes

More information

Chi Square Tests. Chapter 10. 10.1 Introduction

Chi Square Tests. Chapter 10. 10.1 Introduction Contents 10 Chi Square Tests 703 10.1 Introduction............................ 703 10.2 The Chi Square Distribution.................. 704 10.3 Goodness of Fit Test....................... 709 10.4 Chi Square

More information

A Log-Robust Optimization Approach to Portfolio Management

A Log-Robust Optimization Approach to Portfolio Management A Log-Robust Optimization Approach to Portfolio Management Dr. Aurélie Thiele Lehigh University Joint work with Ban Kawas Research partially supported by the National Science Foundation Grant CMMI-0757983

More information

Practice problems for Homework 11 - Point Estimation

Practice problems for Homework 11 - Point Estimation Practice problems for Homework 11 - Point Estimation 1. (10 marks) Suppose we want to select a random sample of size 5 from the current CS 3341 students. Which of the following strategies is the best:

More information

MATHEMATICAL METHODS OF STATISTICS

MATHEMATICAL METHODS OF STATISTICS MATHEMATICAL METHODS OF STATISTICS By HARALD CRAMER TROFESSOK IN THE UNIVERSITY OF STOCKHOLM Princeton PRINCETON UNIVERSITY PRESS 1946 TABLE OF CONTENTS. First Part. MATHEMATICAL INTRODUCTION. CHAPTERS

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

PLANE TRUSSES. Definitions

PLANE TRUSSES. Definitions Definitions PLANE TRUSSES A truss is one of the major types of engineering structures which provides a practical and economical solution for many engineering constructions, especially in the design of

More information

Confidence Intervals for the Difference Between Two Means

Confidence Intervals for the Difference Between Two Means Chapter 47 Confidence Intervals for the Difference Between Two Means Introduction This procedure calculates the sample size necessary to achieve a specified distance from the difference in sample means

More information

Econometrics Simple Linear Regression

Econometrics Simple Linear Regression Econometrics Simple Linear Regression Burcu Eke UC3M Linear equations with one variable Recall what a linear equation is: y = b 0 + b 1 x is a linear equation with one variable, or equivalently, a straight

More information

Numerical Methods for Option Pricing

Numerical Methods for Option Pricing Chapter 9 Numerical Methods for Option Pricing Equation (8.26) provides a way to evaluate option prices. For some simple options, such as the European call and put options, one can integrate (8.26) directly

More information

Analysing Questionnaires using Minitab (for SPSS queries contact -) Graham.Currell@uwe.ac.uk

Analysing Questionnaires using Minitab (for SPSS queries contact -) Graham.Currell@uwe.ac.uk Analysing Questionnaires using Minitab (for SPSS queries contact -) Graham.Currell@uwe.ac.uk Structure As a starting point it is useful to consider a basic questionnaire as containing three main sections:

More information

A Comparison of Correlation Coefficients via a Three-Step Bootstrap Approach

A Comparison of Correlation Coefficients via a Three-Step Bootstrap Approach ISSN: 1916-9795 Journal of Mathematics Research A Comparison of Correlation Coefficients via a Three-Step Bootstrap Approach Tahani A. Maturi (Corresponding author) Department of Mathematical Sciences,

More information

HYPOTHESIS TESTING: POWER OF THE TEST

HYPOTHESIS TESTING: POWER OF THE TEST HYPOTHESIS TESTING: POWER OF THE TEST The first 6 steps of the 9-step test of hypothesis are called "the test". These steps are not dependent on the observed data values. When planning a research project,

More information

CHAPTER 2 Estimating Probabilities

CHAPTER 2 Estimating Probabilities CHAPTER 2 Estimating Probabilities Machine Learning Copyright c 2016. Tom M. Mitchell. All rights reserved. *DRAFT OF January 24, 2016* *PLEASE DO NOT DISTRIBUTE WITHOUT AUTHOR S PERMISSION* This is a

More information

degrees of freedom and are able to adapt to the task they are supposed to do [Gupta].

degrees of freedom and are able to adapt to the task they are supposed to do [Gupta]. 1.3 Neural Networks 19 Neural Networks are large structured systems of equations. These systems have many degrees of freedom and are able to adapt to the task they are supposed to do [Gupta]. Two very

More information

Association Between Variables

Association Between Variables Contents 11 Association Between Variables 767 11.1 Introduction............................ 767 11.1.1 Measure of Association................. 768 11.1.2 Chapter Summary.................... 769 11.2 Chi

More information

MATH2740: Environmental Statistics

MATH2740: Environmental Statistics MATH2740: Environmental Statistics Lecture 6: Distance Methods I February 10, 2016 Table of contents 1 Introduction Problem with quadrat data Distance methods 2 Point-object distances Poisson process case

More information

Chapter 7 Notes - Inference for Single Samples. You know already for a large sample, you can invoke the CLT so:

Chapter 7 Notes - Inference for Single Samples. You know already for a large sample, you can invoke the CLT so: Chapter 7 Notes - Inference for Single Samples You know already for a large sample, you can invoke the CLT so: X N(µ, ). Also for a large sample, you can replace an unknown σ by s. You know how to do a

More information

99.37, 99.38, 99.38, 99.39, 99.39, 99.39, 99.39, 99.40, 99.41, 99.42 cm

99.37, 99.38, 99.38, 99.39, 99.39, 99.39, 99.39, 99.40, 99.41, 99.42 cm Error Analysis and the Gaussian Distribution In experimental science theory lives or dies based on the results of experimental evidence and thus the analysis of this evidence is a critical part of the

More information

Stochastic Inventory Control

Stochastic Inventory Control Chapter 3 Stochastic Inventory Control 1 In this chapter, we consider in much greater details certain dynamic inventory control problems of the type already encountered in section 1.3. In addition to the

More information

Understanding Poles and Zeros

Understanding Poles and Zeros MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING 2.14 Analysis and Design of Feedback Control Systems Understanding Poles and Zeros 1 System Poles and Zeros The transfer function

More information

Standard Deviation Estimator

Standard Deviation Estimator CSS.com Chapter 905 Standard Deviation Estimator Introduction Even though it is not of primary interest, an estimate of the standard deviation (SD) is needed when calculating the power or sample size of

More information

Solutions to Problems in Goldstein, Classical Mechanics, Second Edition. Chapter 7

Solutions to Problems in Goldstein, Classical Mechanics, Second Edition. Chapter 7 Solutions to Problems in Goldstein, Classical Mechanics, Second Edition Homer Reid April 21, 2002 Chapter 7 Problem 7.2 Obtain the Lorentz transformation in which the velocity is at an infinitesimal angle

More information

NCSS Statistical Software

NCSS Statistical Software Chapter 06 Introduction This procedure provides several reports for the comparison of two distributions, including confidence intervals for the difference in means, two-sample t-tests, the z-test, the

More information

Means, standard deviations and. and standard errors

Means, standard deviations and. and standard errors CHAPTER 4 Means, standard deviations and standard errors 4.1 Introduction Change of units 4.2 Mean, median and mode Coefficient of variation 4.3 Measures of variation 4.4 Calculating the mean and standard

More information

Lecture 8. Confidence intervals and the central limit theorem

Lecture 8. Confidence intervals and the central limit theorem Lecture 8. Confidence intervals and the central limit theorem Mathematical Statistics and Discrete Mathematics November 25th, 2015 1 / 15 Central limit theorem Let X 1, X 2,... X n be a random sample of

More information

Department of Mathematics, Indian Institute of Technology, Kharagpur Assignment 2-3, Probability and Statistics, March 2015. Due:-March 25, 2015.

Department of Mathematics, Indian Institute of Technology, Kharagpur Assignment 2-3, Probability and Statistics, March 2015. Due:-March 25, 2015. Department of Mathematics, Indian Institute of Technology, Kharagpur Assignment -3, Probability and Statistics, March 05. Due:-March 5, 05.. Show that the function 0 for x < x+ F (x) = 4 for x < for x

More information

Two Correlated Proportions (McNemar Test)

Two Correlated Proportions (McNemar Test) Chapter 50 Two Correlated Proportions (Mcemar Test) Introduction This procedure computes confidence intervals and hypothesis tests for the comparison of the marginal frequencies of two factors (each with

More information

ALMOST COMMON PRIORS 1. INTRODUCTION

ALMOST COMMON PRIORS 1. INTRODUCTION ALMOST COMMON PRIORS ZIV HELLMAN ABSTRACT. What happens when priors are not common? We introduce a measure for how far a type space is from having a common prior, which we term prior distance. If a type

More information

II. DISTRIBUTIONS distribution normal distribution. standard scores

II. DISTRIBUTIONS distribution normal distribution. standard scores Appendix D Basic Measurement And Statistics The following information was developed by Steven Rothke, PhD, Department of Psychology, Rehabilitation Institute of Chicago (RIC) and expanded by Mary F. Schmidt,

More information

A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA

A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA REVSTAT Statistical Journal Volume 4, Number 2, June 2006, 131 142 A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA Authors: Daiane Aparecida Zuanetti Departamento de Estatística, Universidade Federal de São

More information

Notes on metric spaces

Notes on metric spaces Notes on metric spaces 1 Introduction The purpose of these notes is to quickly review some of the basic concepts from Real Analysis, Metric Spaces and some related results that will be used in this course.

More information

arxiv:1112.0829v1 [math.pr] 5 Dec 2011

arxiv:1112.0829v1 [math.pr] 5 Dec 2011 How Not to Win a Million Dollars: A Counterexample to a Conjecture of L. Breiman Thomas P. Hayes arxiv:1112.0829v1 [math.pr] 5 Dec 2011 Abstract Consider a gambling game in which we are allowed to repeatedly

More information

Statistical Machine Learning

Statistical Machine Learning Statistical Machine Learning UoC Stats 37700, Winter quarter Lecture 4: classical linear and quadratic discriminants. 1 / 25 Linear separation For two classes in R d : simple idea: separate the classes

More information

Non-Inferiority Tests for Two Means using Differences

Non-Inferiority Tests for Two Means using Differences Chapter 450 on-inferiority Tests for Two Means using Differences Introduction This procedure computes power and sample size for non-inferiority tests in two-sample designs in which the outcome is a continuous

More information

Sample Size and Power in Clinical Trials

Sample Size and Power in Clinical Trials Sample Size and Power in Clinical Trials Version 1.0 May 011 1. Power of a Test. Factors affecting Power 3. Required Sample Size RELATED ISSUES 1. Effect Size. Test Statistics 3. Variation 4. Significance

More information

Introduction to Support Vector Machines. Colin Campbell, Bristol University

Introduction to Support Vector Machines. Colin Campbell, Bristol University Introduction to Support Vector Machines Colin Campbell, Bristol University 1 Outline of talk. Part 1. An Introduction to SVMs 1.1. SVMs for binary classification. 1.2. Soft margins and multi-class classification.

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

The sample space for a pair of die rolls is the set. The sample space for a random number between 0 and 1 is the interval [0, 1].

The sample space for a pair of die rolls is the set. The sample space for a random number between 0 and 1 is the interval [0, 1]. Probability Theory Probability Spaces and Events Consider a random experiment with several possible outcomes. For example, we might roll a pair of dice, flip a coin three times, or choose a random real

More information

STAT 830 Convergence in Distribution

STAT 830 Convergence in Distribution STAT 830 Convergence in Distribution Richard Lockhart Simon Fraser University STAT 830 Fall 2011 Richard Lockhart (Simon Fraser University) STAT 830 Convergence in Distribution STAT 830 Fall 2011 1 / 31

More information

Fitting Subject-specific Curves to Grouped Longitudinal Data

Fitting Subject-specific Curves to Grouped Longitudinal Data Fitting Subject-specific Curves to Grouped Longitudinal Data Djeundje, Viani Heriot-Watt University, Department of Actuarial Mathematics & Statistics Edinburgh, EH14 4AS, UK E-mail: vad5@hw.ac.uk Currie,

More information

Non Parametric Inference

Non Parametric Inference Maura Department of Economics and Finance Università Tor Vergata Outline 1 2 3 Inverse distribution function Theorem: Let U be a uniform random variable on (0, 1). Let X be a continuous random variable

More information

BA 275 Review Problems - Week 6 (10/30/06-11/3/06) CD Lessons: 53, 54, 55, 56 Textbook: pp. 394-398, 404-408, 410-420

BA 275 Review Problems - Week 6 (10/30/06-11/3/06) CD Lessons: 53, 54, 55, 56 Textbook: pp. 394-398, 404-408, 410-420 BA 275 Review Problems - Week 6 (10/30/06-11/3/06) CD Lessons: 53, 54, 55, 56 Textbook: pp. 394-398, 404-408, 410-420 1. Which of the following will increase the value of the power in a statistical test

More information

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e.

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. This chapter contains the beginnings of the most important, and probably the most subtle, notion in mathematical analysis, i.e.,

More information

Additional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm

Additional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm Mgt 540 Research Methods Data Analysis 1 Additional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm http://web.utk.edu/~dap/random/order/start.htm

More information

Regression III: Advanced Methods

Regression III: Advanced Methods Lecture 16: Generalized Additive Models Regression III: Advanced Methods Bill Jacoby Michigan State University http://polisci.msu.edu/jacoby/icpsr/regress3 Goals of the Lecture Introduce Additive Models

More information

Introduction to Fixed Effects Methods

Introduction to Fixed Effects Methods Introduction to Fixed Effects Methods 1 1.1 The Promise of Fixed Effects for Nonexperimental Research... 1 1.2 The Paired-Comparisons t-test as a Fixed Effects Method... 2 1.3 Costs and Benefits of Fixed

More information

6.4 Normal Distribution

6.4 Normal Distribution Contents 6.4 Normal Distribution....................... 381 6.4.1 Characteristics of the Normal Distribution....... 381 6.4.2 The Standardized Normal Distribution......... 385 6.4.3 Meaning of Areas under

More information

Several Views of Support Vector Machines

Several Views of Support Vector Machines Several Views of Support Vector Machines Ryan M. Rifkin Honda Research Institute USA, Inc. Human Intention Understanding Group 2007 Tikhonov Regularization We are considering algorithms of the form min

More information