MPRA Munich Personal RePEc Archive A modified Kolmogorov-Smirnov test for normality Zvi Drezner and Ofir Turel and Dawit Zerom California State University-Fullerton 22. October 2008 Online at http://mpra.ub.uni-muenchen.de/14385/ MPRA Paper No. 14385, posted 1. April 2009 04:38 UTC
A Modified Kolmogorov-Smirnov Test for Normality
Zvi Drezner, Ofir Turel and Dawit Zerom
Steven G. Mihaylo College of Business and Economics, California State University-Fullerton, Fullerton, CA 92834.

Abstract

In this paper we propose an improvement of the Kolmogorov-Smirnov test for normality. In the current implementation of the Kolmogorov-Smirnov test, a sample is compared with a normal distribution where the sample mean and the sample variance are used as parameters of the distribution. We propose to select the mean and variance of the normal distribution that provide the closest fit to the data. This is like shifting and stretching the reference normal distribution so that it fits the data in the best possible way. If this shifting and stretching does not lead to an acceptable fit, the data is probably not normal. We also introduce a fast, easily implementable algorithm for the proposed test. A study of the power of the proposed test indicates that the test is able to discriminate between the normal distribution and distributions such as uniform, bi-modal, beta, exponential and log-normal that are different in shape, but has a relatively lower power against the Student t-distribution that is similar in shape to the normal distribution. In model settings, the former distinction is typically more important to make than the latter distinction. We demonstrate the practical significance of the proposed test with several simulated examples.

Keywords: Closest fit; Kolmogorov-Smirnov; Normal distribution.

1 Introduction

Many data analysis methods depend on the assumption that data were sampled from a normal distribution or at least from a distribution which is sufficiently close to a normal distribution. For example, one often tests normality of residuals after fitting a linear model to the data in order to ensure the normality assumption of the model is satisfied. 
(Corresponding author: Mihaylo College of Business and Economics, California State University, Fullerton, CA, 92834-6848, (714) 278-3635, dzerom@fullerton.edu.)
Such an assumption is of great importance because, in many cases, it determines the method that ought to be used to estimate the unknown parameters in the model and also dictates the test procedures which the analyst may
apply. There are several tests available to determine if a sample comes from a normally distributed population. Those theory-driven tests include the Kolmogorov-Smirnov test, Anderson-Darling test, Cramer-von Mises test, Shapiro-Wilk test and Shapiro-Francia test. The first three tests are based on the empirical cumulative distribution. The Shapiro-Francia test (Shapiro and Francia, 1972 and Royston, 1983) is specifically designed for testing normality and is a modification of the more general Shapiro-Wilk test (Shapiro and Wilk, 1965). There are also tests that exploit the shape of the distribution of the data. For example, the widely available Jarque-Bera test (Jarque and Bera, 1980) is based on skewness and kurtosis of the data. To complement the results of formal tests, graphical methods (such as box-plots and Q-Q plots) have also been used, and increasingly so in recent years. In this paper we focus on the Kolmogorov-Smirnov (KS) test. The KS test is arguably the most well-known test for normality. It is also available in most widely used statistical software packages. In its original form, the KS test is used to decide if a sample comes from a population with a completely specified continuous distribution. In practice, however, we often need to estimate one or more of the parameters of the hypothesized distribution (say, the normal distribution) from the sample, in which case the critical values of the KS test may no longer be valid. For the case of normality testing, Massey (1951) suggests using the sample mean and sample variance, and this is the norm in the current use of the KS test. Lilliefors (1967) and Dallal and Wilkinson (1986) provide a table of approximate critical values for use with the KS statistic when using the sample mean and sample variance. While the use of the sample mean and sample variance seems a natural choice, using these fixed values is not necessarily the best available option. When one concludes (after using the KS test) that a sample is not normal, this only means that the data is not normal at the specified sample mean
and sample variance. But it could well be that the data is normal, or sufficiently close to normal, at other values of the mean and variance of the normal distribution. Although the scope of this paper is limited to the KS test, this drawback is also shared by other tests such as the Anderson-Darling and Cramer-von Mises tests. Interestingly, Stephens (1974) writes after comparing several tests (such as KS, Anderson-Darling and so on): "It appears that since one is trying, in effect, to fit a density of a certain shape to the data, the precise location and scale is relatively unimportant, and being tied down to fixed values, even correct ones, is more of a hindrance than a help." In this paper, we suggest an approach that circumvents the need to use pre-determined values of mean and variance. Instead, we look for mean and variance values such that the resulting normal distribution fits the given sample data. When such values do not exist, we conclude that the sample data is probably not normally distributed. Avoiding the use of fixed parameters, we propose a modified KS test in which we choose data-driven mean and variance values of the normal distribution by minimizing the KS statistic. In the traditional KS test, the data is compared against a normal distribution with fixed parameter values. Our approach, on the other hand, looks for a normal distribution that fits the data in the best possible way, and hence favors the sample data when passing judgment about its closeness to a normal distribution. Suppose that the sample consists of n independent observations. These observations are sorted x_1 ≤ x_2 ≤ ... ≤ x_n. The cumulative distribution of the data is a step function (see Figures 1 and 2). At each x_k the step is between (k − 1)/n and k/n. For a given mean µ and variance σ², the cumulative normal distribution at x_k is Φ((x_k − µ)/σ). The KS statistic is given by

KS(µ, σ) = max_{1 ≤ k ≤ n} { k/n − Φ((x_k − µ)/σ), Φ((x_k − µ)/σ) − (k − 1)/n }.   (1)

The traditional KS statistic is simply KS(x̄, s) where µ = x̄ and σ = s. We propose a modified KS
statistic denoted by KS(µ̂, σ̂) where the vector (µ̂, σ̂) is a solution to the following minimization problem

min_{µ,σ} { KS(µ, σ) }   (2)

where KS(µ, σ) is as defined in (1). In section 2, we analyze this optimization problem and provide a tractable algorithm for its solution. In section 3, we provide critical values for the modified KS test using 100 million replications. The proposed algorithm is quite efficient and we are able to complete the critical values table (Table 1) in less than 4 days (6000 calculations per second). To facilitate implementation of our test, we also provide approximation formulas (that work for any n ≥ 20) for finding critical values at typical significance levels. To the best of our knowledge, there has not been any study that extends the KS test by allowing the use of optimized distribution parameters. Closely related to our work is that of Weber et al. (2006), who consider the problem of parameter estimation of continuous distributions (not just the normal distribution) via minimizing the KS statistic. They use the heuristic optimization algorithm of Sobieszczanski-Sobieski et al. (1998) to estimate the parameters of a number of widely used distributions and also provide a user-friendly software tool. The practical advantage of this software is that it suggests a best fitted distribution to given data by looking at the minimized KS statistic values among a set of continuous distributions. In this sense, our algorithm of minimizing the KS statistic may also serve the same purpose as that of Weber et al. (2006) although our paper is wider in scope. To motivate our modified KS test, we give two Monte Carlo based examples that can highlight the weaknesses of the existing KS and offer interesting practical implications for proper use of the KS
test.

Example 1: We generate 999 standard normal random samples of size n = 30. The choice of 999 samples (instead of, say, 1000) is only to facilitate the calculation of the median sample as we will see below. For each sample, we calculate the two KS statistic values, KS(x̄, s) and KS(µ̂, σ̂), where the algorithm in section 2 is used to compute µ̂ and σ̂. We also compute ∆ = KS(x̄, s) − KS(µ̂, σ̂), which is simply the difference between the two KS statistic values. It should be noted that KS(µ̂, σ̂) ≤ KS(x̄, s) and hence ∆ ≥ 0. We do the above steps for all 999 samples. Let ∆_j denote the ∆ value obtained for sample j where j = 1, ..., 999. We select a typical sample, say the k-th sample, to be the one where ∆_k = Median{∆_j}_{j=1}^{999}. Similarly, an extreme sample, say the l-th sample, to be the one where ∆_l = Max{∆_j}_{j=1}^{999}. Based on the typical sample (sample k), Figure 1 gives the empirical cumulative distribution (the step-function), the cumulative normal distribution (the dotted line) based on the sample mean (x̄_k = 0.1078) and sample variance (s_k = 1.022), and the cumulative normal distribution (the solid line) based on µ̂_k = 0.1712 and σ̂_k = 1.089. The subscript k is attached to estimates to indicate that they correspond to the typical sample k. For this typical sample, KS(x̄_k, s_k) = 0.0954 and KS(µ̂_k, σ̂_k) = 0.0704, which indicates a 26% improvement by the latter. Note from the empirical cumulative distribution plots that the solid line is closer overall to the sample cumulative distribution. Using the critical values of Table 1 (for n = 30), both KS statistic values lead to the non-rejection of the null of normality with p-value p > 0.2. This conclusion is correct as we know the sample is generated from a normal distribution. Based on the extreme sample (sample l), Figure 2 gives the empirical cumulative distribution (the step-function), the cumulative normal distribution (the dotted line) based on the sample mean (x̄_l = 0.1628) and sample variance (s_l = 0.9303), and the cumulative normal distribution
(the solid line) based on µ̂_l = 0.3238 and σ̂_l = 0.7436. For this sample, KS(x̄_l, s_l) = 0.1896 and KS(µ̂_l, σ̂_l) = 0.0951, which indicates a 50% improvement by the latter. From the empirical cumulative distribution plots, the solid line is much closer to the sample cumulative distribution for data values roughly below -0.5, and these values constitute approximately 80% of the data observations. Using the critical values table for n = 30 (Table 1), the traditional KS test implies that the sample data deviates from normality (at p-value p < 0.01). On the other hand, the modified KS test concludes that we cannot reject the null of normality at a convincing p-value p > 0.2. The conclusion from our test proposal is correct as the sample is generated from a normal distribution. This example illustrates that the sample mean and sample variance do not necessarily provide the closest fit to the empirical distribution of the sample. Our approach shifts and stretches the normal distribution (by looking for data-driven mean and variance values) so that it fits the sample data in the best possible way.

Example 2: We consider n = 20, 40, ..., 400 (in intervals of 20). For each n, we generate 10,000 standard normal random samples of size n − 1 and one outlier. We define an outlier as outlier = C where the constant C takes the values 4, 5, ..., 10. We will only report results for C = 4, 6, 8, 10 as the implications from the other outliers are qualitatively similar. The purpose of this example is to evaluate the two tests, the traditional KS test (which is based on KS(x̄, s)) and the modified KS test (which is based on KS(µ̂, σ̂)), in terms of their size using the level of significance α = 0.05. When implementing both tests, we use the approximation formula in Table 2 for locating the critical values. Using 10,000 replications, we plot the size of the two tests for each n in Figure 3. Size is defined as the percentage of times (out of the total 10,000 samples) a test rejects the null hypothesis of normality. If a test is correctly sized, this percentage should be
close to 0.05. The dotted line in the figure corresponds to the size of the modified KS test while the solid lines correspond to the traditional KS test. Interestingly, the size of the modified KS test is always close to 0.05 regardless of the magnitude of the outlier for all n (the average size from all n is 0.0508 with a standard deviation of 0.0005). However, the traditional KS test is very sensitive to outliers, leading to clearly wrong conclusions about the distribution of the data. While increasing the sample size seems to help minimize the effect of an outlier on the test, we still need unrealistically large sample sizes to get rid of the effect. This example is only meant to illustrate the danger of using fixed parameter values that do not respond to the structure of sample data. The modified KS test adapts to the data by attempting (via the choice of µ̂ and σ̂) to fit the normal distribution to the majority of the data by weighting down the outlier. In practice, researchers often deal with small data sets with potentially a few outliers. Even if much of the data may be well approximated by a normal distribution, a blind use of the traditional KS test will lead to rejection of normality - suggesting use of transformations or complex models. In contrast, the modified KS test is robust to these few outliers and can lead to more nuanced judgments regarding the normality of the data.

2 Algorithm

In this section, we analyze the optimization problem given in equation (2) and provide a tractable algorithm for its solution. By (1),

KS(µ, σ) ≥ k/n − Φ((x_k − µ)/σ)
KS(µ, σ) ≥ Φ((x_k − µ)/σ) − (k − 1)/n
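As a concrete reference for the quantity the algorithm below minimizes, here is a minimal sketch of the statistic in equation (1), using only the Python standard library (the function names are ours, not the paper's):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ks_stat(x, mu, sigma):
    """KS statistic of equation (1): the largest gap between the empirical
    step function and the N(mu, sigma^2) CDF, checked at both step edges."""
    xs = sorted(x)
    n = len(xs)
    d = 0.0
    for k, xk in enumerate(xs, start=1):
        f = phi((xk - mu) / sigma)
        d = max(d, k / n - f, f - (k - 1) / n)
    return d
```

Calling ks_stat(sample, mean, std) with the sample mean and sample standard deviation reproduces the traditional statistic KS(x̄, s).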
Let L be the minimum possible value of KS(µ, σ). The solution to the following optimization problem is the minimum possible KS(µ, σ) and thus is equivalent to (2).

min { L }   (3)

subject to:

k/n − Φ((x_k − µ)/σ) ≤ L   for k > nL   (4)
Φ((x_k − µ)/σ) − (k − 1)/n ≤ L   for k < n(1 − L) + 1.   (5)

Note that if k/n − L ≤ 0, constraint (4) is always true, and if L + (k − 1)/n ≥ 1, constraint (5) is always true. We can solve (3)-(5) by designing an algorithm that finds whether there is a feasible solution to (4)-(5) for a given L. For a given L, the constraints are equivalent to:

µ ≤ x_k − Φ^{-1}(k/n − L) σ   for k > nL   (6)
µ ≥ x_k − Φ^{-1}(L + (k − 1)/n) σ   for k < n(1 − L) + 1.   (7)

Constraints (6) and (7) can each be combined into one constraint.

µ ≤ min_{k > nL} { x_k − Φ^{-1}(k/n − L) σ }   (8)
µ ≥ max_{k < n(1−L)+1} { x_k − Φ^{-1}(L + (k − 1)/n) σ }   (9)

For a given σ there is a solution for µ satisfying the system (8)-(9) if and only if

min_{k > nL} { x_k − Φ^{-1}(k/n − L) σ } ≥ max_{k < n(1−L)+1} { x_k − Φ^{-1}(L + (k − 1)/n) σ }   (10)
or

F(σ, L) = min_{k > nL} { x_k − Φ^{-1}(k/n − L) σ } − max_{k < n(1−L)+1} { x_k − Φ^{-1}(L + (k − 1)/n) σ } ≥ 0.   (11)

For a given L, the function F(σ, L) is a piece-wise linear concave function in σ (see Figure 4). We prove that F(σ, L) is a concave function in σ for a given L.

Theorem 1: The function F(σ, L) for a given L is concave in σ.

Proof: All the functions in the braces of (11) are linear in σ and all the other values are constants for a given L. Furthermore, the minimum of linear functions is concave and the maximum of linear functions is convex. Therefore, the difference F(σ, L) is a concave function in σ.

By Theorem 1, for a given L, F(σ, L) has only one local maximum which is the global one. The maximum value of F(σ, L) for a given L can easily be found by a search on σ. For any value of σ, F(σ, L) can be calculated, and if the slope is positive we know that the optimal σ is to the right, and if it is negative we know that it is to the left. The solution is always at the intersection point between two lines, one with a positive slope and one with a negative slope (see Figure 4). Megiddo (1983) suggested a very efficient method for solving such a problem. Note that if F(σ, L) ≥ 0, any µ in the range

[ max_{k < n(1−L)+1} { x_k − Φ^{-1}(L + (k − 1)/n) σ }, min_{k > nL} { x_k − Φ^{-1}(k/n − L) σ } ]

(or specifically the midpoint of the range), with the σ used in calculating F(σ, L), yields a KS statistic which does not exceed L. Let G(L) = max_σ { F(σ, L) }, found by either the method in Megiddo (1983) or any other search
method. If G(L) ≥ 0, there is a solution (µ, σ) for this value of L, and if G(L) < 0 no such solution exists. To find the minimum value of L we propose a binary search. The details of the binary search are now described. The optimal L must satisfy L ≤ KS(x̄, s). Also, any KS statistic must be at least 1/(2n). Therefore, 1/(2n) ≤ L ≤ KS(x̄, s). A binary search on any segment [a, b] is performed as follows. G(L) for L = (a + b)/2 is evaluated. If G(L) ≥ 0, there is a solution (µ, σ) for this value of L and the search segment is reduced to [a, (a + b)/2]. If G(L) < 0 no such solution exists and the search segment is reduced to [(a + b)/2, b]. In either case the search segment is cut in half. Following a relatively small number of iterations, the search segment is reduced to a small enough range (such as 10^{-5}) and the upper limit of the range yields a solution (µ, σ) whose value of L is within a given tolerance (the size of the final segment) of the optimal value of L.

3 Monte Carlo estimation of the test statistic distribution

In this section we provide critical values for the modified KS statistic using Monte Carlo simulation. To derive the distribution of this statistic, we draw a random sample of size n from a standard normal distribution. We estimate µ̂ and σ̂ and compute KS(µ̂, σ̂), and for every sample size n, we repeat this procedure 100 million times. The critical values are given in Table 1. We also recalculate the critical values for the traditional KS test in the same way; these too are available in Table 1. Because we use 100 million samples, the critical values we report for the traditional KS test are more accurate than those of Lilliefors (1967) and Dallal and Wilkinson (1986). The critical values for both KS tests can be approximated for n ≥ 20 by the formula a + b/(√n + c/√n), where a, b and c are functions of α. These three parameters are given in Table 2. The approximation is very accurate with an error (when compared to Table 1) of not more than 0.0002. So, the approximation formula can replace the tables for n ≥ 20. We obtain the approximation formula
via multiple regression, where for each α, the critical values in Table 1 are used as the dependent variable, and 1/√n and 1/n are the independent variables. We select these two independent variables through experimentation. We begin with a single-variable regression involving only 1/√n. We then add variables, one at a time, which are functions of n. A regression involving 1/√n and 1/n provides an excellent fit.

4 Power comparisons

In this section we compare the approximate powers of the modified KS test with the traditional KS test for a set of selected distributions. These distributions convey a wide array of shapes where some resemble the normal distribution while others are substantially different. Some of these distributions are also used in Lilliefors (1967) and Stephens (1974), among others. We consider a uniform (0,1) distribution; a bi-modal distribution which is a composite of two normal distributions, one centered at +2 and one at -2, each with variance of 1; a beta(1,2) distribution whose density function is a straight line connecting (0, 0) and (1, 1); an exponential distribution with mean and variance of 1; a log-normal distribution with mean e^{1/2} and variance e(e − 1); and three t-distributions with degrees of freedom 1, 2 and 6. We also include the normal distribution where we expect power to be close to α. To save space, we only report results for α = 0.05 (the behavior is very similar for other values of α). For a given alternative hypothesis (say, a uniform distribution), computation of the power of the modified KS test is done as follows. We draw a random sample of size n from the distribution specified in the alternative hypothesis. Based on this sample, we estimate the parameters µ̂ and σ̂ using the algorithm outlined in section 2 and compute KS(µ̂, σ̂). Then, we apply the critical values in Table 2 to test if such a sample comes from a normal distribution. Repeating this procedure
10,000 times, and counting the number of correct decisions, gives the approximate power. The same approach is followed to compute the power of the traditional KS test. The complete power results are given in Table 3. From Table 3 we can see that the power of the modified KS test is consistently better than that of the traditional KS test for the uniform, beta and bi-modal distributions. The improvement is quite large especially for the uniform and beta distributions. These power results indicate that the proposed KS test is better able to discriminate between the normal distribution and those distributions that are very different in shape from normal, i.e. those that substantially deviate from normality. For the exponential and log-normal distributions, the powers of the two KS tests are quite similar, where both achieve reasonably good powers for n ≥ 40. For the t-distributions, the modified KS test has a much lower power than the traditional KS test. What is common to the t-distributions is that they resemble the normal distribution except for their heavier tails. In theory, with increasing degrees of freedom, the tails of the t-distribution get lighter, eventually behaving like the normal distribution. The modified KS test has difficulty detecting non-normality when the observed distribution is similar to normal, and increasingly so with larger degrees of freedom, i.e. as it gets closer to normal. On the surface, the low power for the t-distribution may seem like a weakness of the modified KS test. However, would one expect, with a small n, that data generated by a t_6 distribution be distinguishable from a normal distribution - and thus be identified as non-normal? We argue that the reason the traditional KS test has a higher power is that it rejects data which can be fitted quite well to a normal distribution by a proper selection of µ and σ. It is indeed strange that the power of the traditional KS test is higher for a t_2 distribution than it is for the uniform and beta distributions while the latter are substantially different from normality. 
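The power computation just described can be sketched as follows for the traditional KS test against a uniform alternative. The critical value 0.1385 is taken from Table 1 (traditional test, n = 40, α = 0.05); the replication count is reduced from 10,000 for speed, and the function names are ours:

```python
import random
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ks_stat(x, mu, sigma):
    # KS statistic of equation (1) for the sample x against N(mu, sigma^2)
    xs = sorted(x)
    n = len(xs)
    d = 0.0
    for k, xk in enumerate(xs, start=1):
        f = phi((xk - mu) / sigma)
        d = max(d, k / n - f, f - (k - 1) / n)
    return d

def power_uniform_traditional(n=40, crit=0.1385, reps=2000, seed=7):
    """Approximate power: fraction of uniform(0,1) samples whose
    KS(x-bar, s) exceeds the alpha = 0.05 critical value."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        x = [rng.random() for _ in range(n)]
        m = sum(x) / n
        s = sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
        if ks_stat(x, m, s) > crit:
            rejections += 1
    return rejections / reps
```

Table 3 reports a power of about 19% for this configuration; with 2,000 replications a run of this sketch should land in that neighborhood.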
By construction, the modified KS test tries to look for those mean and variance values that lead to the closest fit to the
data. In a way, we are trying to approximate the reference distribution (the t-distribution) with a normal distribution. If such a normal approximation exists, the data may be considered sufficiently normal. For example, for t_6, the powers at n ≤ 100 are close to α = 0.05, implying the sample data is hardly distinguishable from the normal distribution (see how close the powers of t_6 are to those of the normal distribution). When the degrees of freedom is made smaller, the power of the modified KS test improves because the deviation from normality gets larger. When a normal approximation cannot be achieved, the sample data is flagged as non-normal. For t_2, the modified KS test is able to detect the difference from normality at n = 200, while t_6 requires a very large n to be detected by the modified KS. For t_1, the power of the proposed KS test gets a lot better, reaching decent power at n = 100. The reason is that t_1 has a much heavier tail than the normal distribution, making normal approximation via data-driven mean and variance values very difficult. To see why the modified KS test treats several small data sets from the t-distribution as normally distributed, we use the t_2-distribution as an example. To do so, we repeat the experiments described in Example 1 (see section 1) but draw 999 samples (of n = 30) from a t_2 distribution. The odd number of simulation replications has the same purpose as in Example 1. We select a typical sample in terms of the difference between the traditional KS statistic and our proposed KS statistic. Similar to Figures 1 and 2, three cumulative distributions are depicted in Figure 5 (the range of x was truncated for better exposition). For this typical sample, x̄ = 0.0117, s = 2.506, µ̂ = 0.0321 and σ̂ = 1.571. The traditional KS is KS(x̄, s) = 0.1805 while the modified KS is KS(µ̂, σ̂) = 0.1003. Using the critical value tables in section 3, the traditional KS test rejects normality with p-value p < 0.05. On the contrary, the modified KS test does not reject normality with p-value p > 0.10.
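To make the procedure concrete, the following sketch implements section 2's feasibility check and binary search end to end. It follows constraints (6)-(11), but substitutes a plain ternary search over σ for Megiddo's linear-time method (valid because Theorem 1 gives concavity); the σ search bound and iteration counts are our own choices, not the paper's:

```python
from statistics import NormalDist, mean, stdev

ND = NormalDist()  # standard normal: ND.cdf is Phi, ND.inv_cdf is Phi^{-1}

def ks_stat(xs, mu, sigma):
    # KS statistic of equation (1); xs must be sorted
    n = len(xs)
    return max(max(k / n - ND.cdf((xk - mu) / sigma),
                   ND.cdf((xk - mu) / sigma) - (k - 1) / n)
               for k, xk in enumerate(xs, start=1))

def mu_bounds(xs, L, sigma):
    """Lower and upper bounds on mu from constraints (8)-(9)."""
    n = len(xs)
    hi = min(xk - ND.inv_cdf(k / n - L) * sigma
             for k, xk in enumerate(xs, start=1) if k / n - L > 0)
    lo = max(xk - ND.inv_cdf(L + (k - 1) / n) * sigma
             for k, xk in enumerate(xs, start=1) if L + (k - 1) / n < 1)
    return lo, hi

def G(xs, L, sigma_max):
    """Maximize the concave F(sigma, L) = hi - lo by ternary search on sigma."""
    def F(s):
        lo, hi = mu_bounds(xs, L, s)
        return hi - lo
    a, b = 1e-9, sigma_max
    for _ in range(100):
        m1, m2 = a + (b - a) / 3, b - (b - a) / 3
        if F(m1) < F(m2):
            a = m1
        else:
            b = m2
    s = (a + b) / 2
    return F(s), s

def modified_ks(x):
    """Return (KS(mu-hat, sigma-hat), mu-hat, sigma-hat) via binary search on L."""
    xs = sorted(x)
    n = len(xs)
    sigma_max = 10 * stdev(xs)      # assumed wide enough to contain the optimum
    lo, hi = 1 / (2 * n), ks_stat(xs, mean(xs), stdev(xs))
    for _ in range(40):
        L = (lo + hi) / 2
        if G(xs, L, sigma_max)[0] >= 0:
            hi = L                  # feasible: some (mu, sigma) attains KS <= L
        else:
            lo = L
    _, s = G(xs, hi, sigma_max)
    m_lo, m_hi = mu_bounds(xs, hi, s)
    mu = (m_lo + m_hi) / 2          # midpoint of the feasible mu range
    return ks_stat(xs, mu, s), mu, s
```

On a sample containing a stray point, modified_ks typically returns a noticeably smaller statistic than KS(x̄, s), mirroring the behavior seen in Example 2.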
5 Conclusion

Many data analysis methods (t-test, ANOVA, regression) depend on the assumption that data were sampled from a normal distribution. One of the most frequently used tests to evaluate how far data are from normality is the Kolmogorov-Smirnov (KS) test. In implementing the KS test, most statistical software packages use the sample mean and sample variance as the parameters of the normal distribution. However, the sample mean and sample variance do not necessarily provide the closest fit to the empirical distribution of the data. Therefore, we propose a modified KS test in which we optimally choose the mean and variance of the normal distribution by minimizing the KS statistic. To facilitate easy implementation we also provide an algorithm to solve for the optimal parameters.

References

1. Dallal, G. E. and L. Wilkinson (1986) An analytic approximation to the distribution of Lilliefors's test statistic for normality, The American Statistician, 40, 294-296.
2. Jarque, C. M. and A. K. Bera (1980) Efficient Tests for Normality, Homoscedasticity and Serial Independence of Regression Residuals, Economics Letters, 6(3), 255-259.
3. Lilliefors, H. W. (1967) On the Kolmogorov-Smirnov test for normality with mean and variance unknown, Journal of the American Statistical Association, 62, 399-402.
4. Massey, F. J. (1951) The Kolmogorov-Smirnov test for goodness of fit, Journal of the American Statistical Association, 46, 68-78.
5. Megiddo, N. (1983) Linear-time algorithms for linear programming in R^3 and related problems, SIAM Journal on Computing, 12, 759-776.
6. Royston, J. P. (1983) A Simple Method for Evaluating the Shapiro-Francia W' Test of Non-Normality, The Statistician, 32(3) (September), 297-300.
7. Shapiro, S. S. and R. S. Francia (1972) An Approximate Analysis of Variance Test for Normality, Journal of the American Statistical Association, 67, 215-216.
8. Shapiro, S. S. and M. B. Wilk (1965) An Analysis of Variance Test for Normality (Complete Samples), Biometrika, 52(3/4) (December), 591-611.
9. Sobieszczanski-Sobieski, J., Laba, K. and R. Kincaid (1998) Bell-curve evolutionary optimization algorithm, Proceedings of the 7th AIAA Symposium on Multidisciplinary Analysis and Optimization, St. Louis, MO, 2-4 September, AIAA paper 98-4971.
10. Stephens, M. A. (1974) EDF statistics for goodness of fit and some comparisons, Journal of the American Statistical Association, 69, 730-737.
11. Weber, M., Leemis, L. and R. Kincaid (2006) Minimum Kolmogorov-Smirnov test statistic parameter estimates, Journal of Statistical Computation and Simulation, 76(3), 195-206.
Table 1: Critical Values for the Traditional and Modified KS Tests

        Traditional KS statistic                      Modified KS statistic
        Upper Tail Probabilities                      Upper Tail Probabilities
n      0.20   0.15   0.10   0.05   0.01   0.001     0.20   0.15   0.10   0.05   0.01   0.001
4    0.3029 0.3215 0.3453 0.3753 0.4131 0.4327    0.2396 0.2436 0.2474 0.2499 0.2987 0.3518
5    0.2894 0.3027 0.3189 0.3430 0.3967 0.4388    0.2000 0.2108 0.2255 0.2458 0.2763 0.3063
6    0.2687 0.2809 0.2971 0.3234 0.3705 0.4232    0.1962 0.2046 0.2147 0.2286 0.2570 0.2945
7    0.2523 0.2643 0.2802 0.3042 0.3508 0.4011    0.1855 0.1922 0.2006 0.2139 0.2435 0.2708
8    0.2388 0.2503 0.2651 0.2880 0.3328 0.3827    0.1748 0.1810 0.1899 0.2038 0.2281 0.2502
9    0.2272 0.2381 0.2522 0.2741 0.3172 0.3657    0.1661 0.1727 0.1811 0.1932 0.2151 0.2418
10   0.2171 0.2274 0.2410 0.2621 0.3035 0.3509    0.1591 0.1650 0.1725 0.1836 0.2045 0.2324
11   0.2081 0.2181 0.2312 0.2514 0.2914 0.3375    0.1524 0.1578 0.1648 0.1753 0.1972 0.2240
12   0.2003 0.2099 0.2224 0.2420 0.2807 0.3255    0.1462 0.1514 0.1580 0.1681 0.1902 0.2158
13   0.1932 0.2025 0.2146 0.2335 0.2710 0.3146    0.1407 0.1457 0.1521 0.1627 0.1839 0.2081
14   0.1869 0.1958 0.2076 0.2259 0.2623 0.3048    0.1358 0.1406 0.1472 0.1576 0.1780 0.2012
15   0.1811 0.1898 0.2012 0.2189 0.2543 0.2958    0.1314 0.1363 0.1428 0.1528 0.1725 0.1949
16   0.1759 0.1843 0.1954 0.2126 0.2471 0.2875    0.1276 0.1325 0.1388 0.1485 0.1674 0.1893
17   0.1710 0.1793 0.1900 0.2068 0.2404 0.2800    0.1243 0.1290 0.1351 0.1445 0.1628 0.1845
18   0.1666 0.1746 0.1851 0.2015 0.2342 0.2729    0.1211 0.1257 0.1316 0.1407 0.1585 0.1799
19   0.1625 0.1703 0.1806 0.1965 0.2285 0.2663    0.1182 0.1226 0.1284 0.1372 0.1545 0.1756
20   0.1587 0.1663 0.1763 0.1919 0.2232 0.2603    0.1154 0.1198 0.1254 0.1339 0.1510 0.1716
25   0.1430 0.1499 0.1589 0.1730 0.2014 0.2351    0.1040 0.1079 0.1129 0.1207 0.1363 0.1547
30   0.1312 0.1376 0.1458 0.1588 0.1849 0.2161    0.0955 0.0990 0.1036 0.1108 0.1251 0.1422
40   0.1145 0.1200 0.1272 0.1385 0.1614 0.1889    0.0833 0.0864 0.0905 0.0967 0.1092 0.1242
50   0.1029 0.1078 0.1143 0.1245 0.1450 0.1699    0.0749 0.0777 0.0813 0.0869 0.0982 0.1116
60   0.0943 0.0988 0.1047 0.1140 0.1328 0.1556    0.0687 0.0712 0.0745 0.0797 0.0900 0.1023
70   0.0875 0.0917 0.0972 0.1058 0.1233 0.1445    0.0638 0.0661 0.0692 0.0740 0.0835 0.0950
80   0.0820 0.0859 0.0911 0.0992 0.1156 0.1355    0.0598 0.0620 0.0649 0.0694 0.0783 0.0891
90   0.0775 0.0812 0.0860 0.0937 0.1092 0.1279    0.0565 0.0586 0.0613 0.0655 0.0740 0.0841
100  0.0736 0.0771 0.0817 0.0890 0.1037 0.1216    0.0537 0.0557 0.0583 0.0623 0.0703 0.0799
400  0.0373 0.0390 0.0414 0.0450 0.0524 0.0615    0.0273 0.0283 0.0296 0.0316 0.0356 0.0405
900  0.0249 0.0261 0.0277 0.0301 0.0351 0.0411    0.0183 0.0190 0.0198 0.0212 0.0239 0.0271

Table 2: Coefficients for the approximate formulas

        Traditional KS test              Modified KS test
α        a        b        c            a        b        c
0.20   0.00053  0.73574  0.78520     0.00060  0.53446  0.80443
0.15   0.00049  0.77149  0.78515     0.00068  0.55329  0.76285
0.10   0.00059  0.81689  0.77062     0.00062  0.57999  0.78034
0.05   0.00052  0.89105  0.79780     0.00061  0.62082  0.81183
0.01   0.00054  1.03964  0.84912     0.00055  0.70276  0.85751
0.001  0.00052  1.22182  0.99171     0.00056  0.79997  0.89234
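Reading the approximation formula of section 3 as a + b/(√n + c/√n) - our reconstruction, which reproduces Table 1 to within the stated error of 0.0002 - the α = 0.05 rows of Table 2 can be evaluated as follows (the function and constant names are ours):

```python
from math import sqrt

# alpha = 0.05 coefficient rows (a, b, c) from Table 2
TRADITIONAL_05 = (0.00052, 0.89105, 0.79780)
MODIFIED_05 = (0.00061, 0.62082, 0.81183)

def approx_critical_value(n, coeffs):
    """Approximate critical value a + b / (sqrt(n) + c / sqrt(n)), for n >= 20."""
    a, b, c = coeffs
    return a + b / (sqrt(n) + c / sqrt(n))
```

For n = 100 this gives about 0.0889 for the traditional test and 0.0622 for the modified test, matching the 0.0890 and 0.0623 entries of Table 1.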
Table 3: Powers (%) of the Traditional and Modified KS tests (α = 0.05). In each pair of columns, the first is the traditional KS test and the second is the modified KS test.

         Uniform           Bi-modal          Beta
n      Trad.   Mod.      Trad.   Mod.      Trad.   Mod.
20      9.59  16.28      35.11  44.48      17.73  25.09
40     19.30  29.14      70.57  76.65      36.20  51.99
60     32.15  44.66      90.40  92.48      54.74  75.78
80     46.30  60.53      97.34  97.89      69.53  89.83
100    58.58  74.36      99.43  99.65      81.45  96.57
200    94.53  99.20     100.00 100.00      99.66  99.99
300    99.81 100.00     100.00 100.00     100.00 100.00
400    99.99 100.00     100.00 100.00     100.00 100.00

         Exponential       Log-normal        t_1
n      Trad.   Mod.      Trad.   Mod.      Trad.   Mod.
20     58.38  57.06      79.88  67.88      84.86  30.46
40     90.49  91.94      98.27  95.97      98.16  56.05
60     98.66  99.16      99.94  99.68      99.82  76.60
80     99.87  99.96      99.99  99.98      99.99  88.70
100   100.00 100.00     100.00 100.00     100.00  95.16
200   100.00 100.00     100.00 100.00     100.00  99.98
300   100.00 100.00     100.00 100.00     100.00 100.00
400   100.00 100.00     100.00 100.00     100.00 100.00

         t_2               t_6               Normal
n      Trad.   Mod.      Trad.   Mod.      Trad.   Mod.
20     45.74   9.87      11.40   5.33       5.01   5.03
40     68.93  14.85      15.32   5.40       5.04   4.93
60     84.02  21.15      17.83   5.80       5.13   5.02
80     91.89  29.31      21.78   6.25       5.13   4.92
100    95.86  36.80      25.01   7.16       5.15   4.99
200    99.92  73.15      40.01   9.27       5.45   5.22
300   100.00  92.16      53.43  11.46       5.05   5.36
400   100.00  98.35      64.87  14.30       5.16   4.87
Figure 1: The Typical Sample Cumulative Distributions (legend: Data, Sample parameters, Optimal)
Figure 2: The Extreme Sample Cumulative Distributions (legend: Data, Sample parameters, Optimal)
Figure 3: Sizes of the Traditional (solid lines) and Modified (dotted lines) KS tests (α = 0.05); panels: Outlier = 10, 8, 6, 4; vertical axis: Size
Figure 4: The Function F(σ, L) (horizontal axis: σ; vertical axis: F(σ, L))
Figure 5: Typical t_2 Sample Cumulative Distributions (legend: Data, Sample parameters, Optimal)