# Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur


Module No. #01 Lecture No. #15 Special Distributions-VI

Today, I am going to introduce one of the most important distributions in the theory of probability and statistics: the normal distribution. The normal distribution has become prominent because of one basic theorem in distribution theory called the central limit theorem. It tells us that if we have a sequence of independent and identically distributed random variables, then the distribution of the sample mean or the sample sum is, under certain conditions, approximately normal; that is, as n becomes large, the distribution of the sample mean or the sample sum approaches a normal distribution with a certain mean and variance.

(Refer Slide Time: 01:14)

We will talk about the central limit theorem a little later; first, let me introduce the normal distribution. A continuous random variable X is said to have a normal distribution with mean mu and variance sigma square. So, we will denote it by N mu

sigma square if it has the probability density function given by 1 by sigma root 2 pi, e to the power minus 1 by 2 (x minus mu by sigma) whole square. The range of the variable is the entire real line, the parameter mu is a real number, and sigma is a positive real number. Now, we will first show that this is a proper probability density function, and then we will consider its characteristics. To prove that it is a proper probability density function, we should see that it is a non-negative function, which it obviously is, because it is an exponential function and sigma is a positive number. Then, we look at the integral of f(x) dx over the full range, that is, the integral from minus infinity to infinity of 1 by sigma root 2 pi, e to the power minus 1 by 2 (x minus mu by sigma) whole square dx. Let us make the transformation z equal to x minus mu by sigma, so that dz equals 1 by sigma dx; this is a one-to-one transformation over the range of the variable X.

(Refer Slide Time: 03:36)

Therefore, this integral reduces to the integral from minus infinity to infinity of 1 by root 2 pi, e to the power minus z square by 2 dz; the function 1 by root 2 pi e to the power minus z square by 2 is closely related to the so-called error function. First of all, we observe that this is a convergent integral: for modulus z greater than 2, z square by 2 exceeds modulus z, so e to the power minus z square by 2 is dominated by e to the power minus modulus z, which is integrable, while on the bounded region modulus z less than or equal to 2 the integrand is bounded; so,

basically, the integrand is well behaved and the integral converges. Since e to the power minus z square by 2 is an even function, the integral is equal to 2 times the integral from 0 to infinity of 1 by root 2 pi, e to the power minus z square by 2 dz. Over the range 0 to infinity we can substitute z square by 2 equal to t, that is, z equal to (2t) to the power half and dz equal to 1 by root 2t dt; so this becomes the integral from 0 to infinity of 1 by root 2 pi, e to the power minus t, 1 by root 2t dt, that is, 1 by root pi times the integral of t to the power half minus 1, e to the power minus t dt, which is nothing but gamma half by root pi. Now, gamma half is root pi, so this is equal to 1; hence it is a proper probability density function.

We now look at the moments of this distribution. The transformation we made, z equal to X minus mu by sigma, suggests that it will be easier to calculate moments of X minus mu, or of X minus mu by sigma. So, we will do that.

(Refer Slide Time: 06:09)

Let us consider the expectation of (X minus mu by sigma) to the power k. This is equal to the integral from minus infinity to infinity of (x minus mu by sigma) to the power k, 1 by sigma root 2 pi, e to the power minus 1 by 2 (x minus mu by sigma) whole square dx. Under the transformation x minus mu by sigma equal to z, this integral reduces to the integral from minus infinity to infinity of z to the power k, 1 by root 2 pi, e to the power minus z square by 2 dz. If k is odd, the integrand is an odd function and therefore the integral vanishes; so, this vanishes if k is odd, and it is equal to, if k is

of the form say 2m, then this integral reduces to 2 times the integral from 0 to infinity of z to the power 2m, 1 by root 2 pi, e to the power minus z square by 2 dz. At this stage let us apply the second transformation that we made, z square by 2 equal to t. If we make this transformation, then z square is 2t, so the quantity reduces to 2 times the integral from 0 to infinity of (2t) to the power m, 1 by root 2 pi, e to the power minus t, 1 by root 2t dt, using dz equal to 1 by root 2t dt. We can simplify this; the factors of root 2 in the denominator cancel appropriately.

(Refer Slide Time: 08:34)

So, we get 2 to the power m by root pi times the integral from 0 to infinity of t to the power m minus half, e to the power minus t dt, which is nothing but a gamma function. So, this is equal to 2 to the power m by root pi, gamma m plus half. So, if m is any positive integer, m equal to 1, 2 and so on, that is, k is an even integer, then the expectation of (X minus mu by sigma) to the power 2m is given by 2 to the power m by root pi, gamma m plus half. Of course, we can further simplify this into a slightly more convenient form: expanding gamma m plus half as m minus half, m minus 3 by 2, and so on down to 3 by 2, 1 by 2, gamma half, the gamma half cancels with root pi and the powers of 2 combine, so it is equal to 2m minus 1 into 2m minus 3 and so on down to 5, 3, 1. So, we are able to obtain a general moment of X minus mu by sigma. If we put k equal to 1, then this is zero: k equal to 1 gives the expectation of X minus mu by sigma equal to 0, which means that the expectation of X is equal to mu.
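The density and the central-moment formula just derived can be checked numerically. Here is a minimal Python sketch (the function names and the midpoint-rule integration over mu plus or minus 10 sigma are my own choices, not from the lecture):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # f(x) = (1 / (sigma * sqrt(2*pi))) * exp(-((x - mu) / sigma)**2 / 2)
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def central_moment(k, mu=0.0, sigma=1.0, n=200_000):
    # Midpoint-rule integral of (x - mu)^k f(x) over mu +/- 10 sigma.
    a, b = mu - 10.0 * sigma, mu + 10.0 * sigma
    h = (b - a) / n
    return sum((a + (i + 0.5) * h - mu) ** k * normal_pdf(a + (i + 0.5) * h, mu, sigma)
               for i in range(n)) * h

total = central_moment(0)             # integral of the density: ~1
odd = central_moment(3, sigma=2.0)    # odd central moments vanish: ~0
m2 = central_moment(2, sigma=2.0)     # sigma^2 = 4
m4 = central_moment(4, sigma=2.0)     # sigma^{2m} (2m-1)(2m-3)...1 with m = 2: 3 * 16 = 48
```

The last two values illustrate the formula sigma to the power 2m into (2m minus 1)(2m minus 3)...1 from the derivation above.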

That means, the parameter mu of the normal distribution is actually its mean, the first moment. The expectations of (X minus mu) to the power k give us the central moments of the normal distribution. Now, we have already shown that if k is odd this is zero; that means, all odd-order central moments of the normal distribution are 0: the expectation of (X minus mu) to the power k is 0 for k odd.

(Refer Slide Time: 11:06)

That is, we can write that all odd-order central moments of a normal distribution vanish. This is quite important, because we are considering arbitrary parameters mu and sigma square, and for any mu and sigma square all the central moments of odd order vanish. Now, let us consider even order. For even order 2m we get the formula: the expectation of (X minus mu) to the power 2m is sigma to the power 2m into 2m minus 1, 2m minus 3, down to 5, 3, 1. In particular, if I put m equal to 1, then 2m minus 1 is 1, so this gives the expectation of (X minus mu) whole square equal to sigma square; that is, mu 2, the second central moment of the normal distribution, namely the variance, is sigma square. As we have already seen, mu and sigma square are generally used to denote the mean and variance of a distribution; the nomenclature comes from the normal distribution, where the parameters mu and sigma square actually correspond to the mean and

variance of the random variable. Obviously mu 3 is 0. If we look at mu 4, the fourth central moment, putting m equal to 2 gives 3 into 1, so this becomes 3 sigma to the power 4. So, the fourth central moment is 3 sigma to the power 4. Obviously, the measure of skewness is 0, and the measure of kurtosis, that is, mu 4 by mu 2 square minus 3, is also zero. That means, the peak of the normal distribution is a normal peak. When we introduced the measure of kurtosis, the concept of peakedness, we said that it has to be compared with the peak of the normal distribution; so, basically, the peak of the normal distribution is taken as the standard. If we look at the shape of this distribution, it is perfectly symmetric around mu; the value at x equal to mu is 1 by sigma root 2 pi, which is the maximum value of the density. Since the distribution is symmetric about mu, it is clear that the median of the distribution is also mu, and the mode of the distribution is also mu, that being the value at which the highest density is attained. Let us consider the moment generating function of a normal distribution.

(Refer Slide Time: 14:56)

So, M X of t, that is, the expectation of e to the power tX, is equal to the integral of e to the power tx, 1 by sigma root 2 pi, e to the power minus 1 by 2 (x minus mu by sigma) whole square dx. We will again use the same transformations which we introduced for the evaluation of any integral involving the normal density function, that is, z equal to x

minus mu by sigma, and z square by 2 equal to t if needed. So, if we write x minus mu by sigma equal to z, then x equals mu plus sigma z, and the integral becomes the integral of e to the power t(mu plus sigma z), 1 by root 2 pi, e to the power minus z square by 2 dz. Since the exponent is a quadratic in z, we will again convert it into e to the power of a term involving a perfect square in z. We can write the exponent as minus 1 by 2 (z square minus 2 sigma t z). This suggests that we should add and subtract sigma square t square inside the bracket; the subtracted part contributes a term half sigma square t square outside, and the remaining part, z square minus 2 sigma t z plus sigma square t square, is (z minus sigma t) whole square. So, the integrand is the probability density function of a normal random variable with mean sigma t and variance 1, and therefore the integral reduces to 1. Therefore, e to the power mu t plus half sigma square t square is the moment generating function of a normal distribution with parameters mu and sigma square. Using the moment generating function of a normal distribution we can prove an interesting feature.

(Refer Slide Time: 18:08)

Let X follow normal mu, sigma square, and let us consider Y equal to aX plus b, where a is any non-zero real and b is any real. Consider the moment generating function of Y: the expectation of e to the power tY is equal to e to the power bt times the expectation of e to the power atX, and this can be considered as the moment

generating function of the random variable X at the point at. Now, the distribution of X is normal, and the moment generating function of X, M X of t, is e to the power mu t plus half sigma square t square, so we can substitute at in place of t in the expression for M X of t. We get e to the power bt times e to the power mu at plus half sigma square (at) whole square. Adjusting the terms, this is e to the power (a mu plus b) t plus half a square sigma square t square. If we compare this with the moment generating function of a normal distribution with parameters mu and sigma square, we observe that mu is replaced by a mu plus b and sigma square by a square sigma square. So, we can say that this is the mgf of a normal a mu plus b, a square sigma square distribution.

(Refer Slide Time: 20:34)

So, by the uniqueness property of the moment generating function we have proved that if X follows normal mu, sigma square and Y equals aX plus b with a not 0, then Y is also normally distributed, with parameters a mu plus b and a square sigma square. This is called the linearity property of the normal distribution; that means, any linear function of a normal random variable is again normally distributed. Using this, let us consider a random variable Z defined as X minus mu by sigma. If X follows normal mu, sigma square, this is the linear transformation 1 by sigma times X, minus mu by sigma; comparing, a is 1 by sigma and b is minus mu by sigma.
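The closed form e to the power mu t plus half sigma square t square just derived can be compared against a direct numerical evaluation of the expectation of e to the power tX. A sketch using only the standard library (function names and the integration range are my own choices):

```python
import math

def mgf_numeric(t, mu, sigma, n=200_000):
    # E[e^{tX}] for X ~ N(mu, sigma^2), midpoint rule over mu +/- 12 sigma
    # (wide enough here, since e^{tx} shifts the effective mean to mu + sigma^2 * t).
    a, b = mu - 12.0 * sigma, mu + 12.0 * sigma
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        z = (x - mu) / sigma
        s += math.exp(t * x - 0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    return s * h

def mgf_closed(t, mu, sigma):
    # The moment generating function derived above: exp(mu*t + sigma^2 * t^2 / 2).
    return math.exp(mu * t + 0.5 * sigma * sigma * t * t)

num = mgf_numeric(0.7, mu=1.0, sigma=2.0)
exact = mgf_closed(0.7, mu=1.0, sigma=2.0)
```

The two values agree to high precision, which is exactly the completing-the-square argument made above, carried out numerically.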

So, if we substitute here, the mean becomes mu by sigma minus mu by sigma, that is 0, and the variance becomes 1 by sigma square into sigma square, that is 1; so Z follows normal 0, 1. A random variable Z which has a normal distribution with mean 0 and variance 1 is called a standard normal random variable. Let us look at the density function. The pdf of Z is denoted by a standard notation, small phi of z: it is 1 by root 2 pi, e to the power minus z square by 2. We can see the shape of it; it is symmetric around z equal to 0.

(Refer Slide Time: 23:22)

The cumulative distribution function of a standard normal random variable is denoted by capital Phi of z, that is, the integral from minus infinity to z of small phi of t dt, where small phi is the probability density function of the standard normal random variable. Now, before going to the problems, let us look at the properties of this distribution. The standard normal distribution is symmetric about z equal to 0. So, if we consider a point z, then capital Phi of z is the area to the left of z, and the area to the right of z is 1 minus Phi of z. By the symmetry of the distribution, if we consider the corresponding point minus z, then the area to the left of minus z is Phi of minus z, which shows that 1 minus Phi of z is equal to Phi of minus z. This is true for all z; that means, we can write Phi of minus z plus Phi of z is equal to 1.
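The cdf capital Phi has no elementary closed form, but it can be written in terms of the error function mentioned earlier: Phi of z equals (1 plus erf(z by root 2)) by 2. A quick sketch checking the symmetry relation above (names mine):

```python
import math

def Phi(z):
    # Standard normal cdf via the error function: Phi(z) = (1 + erf(z/sqrt(2))) / 2.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

at_zero = Phi(0.0)              # median of N(0, 1) is 0, so this is 0.5
sym = Phi(-1.23) + Phi(1.23)    # Phi(-z) + Phi(z) = 1 for every z
```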

And another thing we could have observed here is that small phi of minus z equals small phi of z for all z, again because of the symmetry of the distribution. In particular, if we put z equal to 0, this gives Phi of 0 equal to half, which is as it should be, because the median of the standard normal distribution is 0. Now, this will help us in the evaluation of probabilities related to any normal distribution. So, if we have a general normal distribution, normal mu, sigma square, and we are interested to calculate, say, the probability of a less than X less than or equal to b, then it is equal to F of b minus F of a.

(Refer Slide Time: 26:34)

Since X minus mu by sigma has a standard normal distribution, the cdf F X of x, that is, the probability of X less than or equal to x, can be written as the probability of X minus mu by sigma less than or equal to x minus mu by sigma; and X minus mu by sigma is Z, so this is equal to Phi of x minus mu by sigma. That means, probabilities related to a normal distribution can be calculated in terms of probabilities related to the standard normal distribution. Now, how would we evaluate this? Capital Phi of z is the integral from minus infinity to z of 1 by root 2 pi, e to the power minus t square by 2 dt. If we make the transformation t square by 2 equal to u, after suitably splitting the range so that it is a one-to-one transformation, it reduces to an incomplete gamma function; the incomplete gamma function can be evaluated using

numerical integration, say Simpson's one-third rule, etcetera, and tables of the standard normal distribution are available in all statistical books. So, if we want to evaluate a probability related to any normal distribution, we first convert it to a probability related to the standard normal distribution and then use the tables or numerical integration. In particular, the usual probabilities required in normal calculations, say the probability of X less than or equal to b, the probability of X greater than a, the probability of a less than X less than b, or 1 minus the probability of a less than X less than b, can all be evaluated using the properties of the standard normal cumulative distribution function. One more point here: since the values of the cdf can be recovered using Phi of minus z plus Phi of z equal to 1, many times the tables are tabulated only for positive arguments of z, or only for negative arguments of z.

(Refer Slide Time: 29:49)

Suppose that the time required for a distance runner to run a mile is a normal random variable with mean mu equal to 4 minutes 1 second and standard deviation 2 seconds. What is the probability that this athlete will run the mile in less than 4 minutes, or in more than 3 minutes 55 seconds? If we take X as the time required, measured in seconds, then X follows a normal distribution with mean 241 seconds (4 minutes 1 second is 241 seconds) and sigma square equal to 4

seconds square. So, what is the probability of running in less than 4 minutes, that is, the probability of the event X less than 240? Utilizing the relationship above, we can write this as the probability of X minus 241 by 2 less than 240 minus 241 by 2, which is the probability of Z less than minus 0.5, which is Phi of minus 0.5. This value we see from the tables of the standard normal cdf; it is approximately 0.3085 (most tables are given up to four or five decimal places, and we can also use a numerical integration rule such as Simpson's one-third rule to evaluate it). Similarly, if we want the probability that he will run in more than 3 minutes 55 seconds, that is the probability of X greater than 235; putting 235 minus 241 divided by 2, it becomes the probability of Z greater than minus 3, and by the symmetry of the distribution this is equivalent to Phi of 3, which is approximately 0.9987, an extremely high probability. Here you can see that the mean is 4 minutes 1 second; that means, almost surely he will take more than 3 minutes 55 seconds. This also brings out another important property of the normal distribution: it has a high concentration of probability in the center. Since the density involves e to the power minus z square by 2, the terms go rapidly towards 0, the convergence towards 0 is very fast; therefore, most of the probability is concentrated in a range around the mean mu.

(Refer Slide Time: 26:34)

So, if we consider, say, the probability of modulus Z less than 0.5, that is, minus 0.5 less than Z less than 0.5, where Z denotes a standard normal random variable, this is equal to Phi of 0.5 minus Phi of minus 0.5.
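The two table look-ups in the running example can be reproduced in code rather than from tables; a sketch using the erf-based cdf (names mine):

```python
import math

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 241.0, 2.0                       # time in seconds: X ~ N(241, 4)
p_under_240 = Phi((240 - mu) / sigma)        # P(X < 240) = Phi(-0.5) ~ 0.3085
p_over_235 = 1.0 - Phi((235 - mu) / sigma)   # P(X > 235) = 1 - Phi(-3) = Phi(3) ~ 0.9987
```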
Because of the relation between the two, the value of only one of them needs to be looked up; either we see Phi of 0.5 or Phi of minus 0.5, and the other can be written as 1 minus that. So, this we can write as 1 minus 2 times Phi of minus 0.5; taking the value of Phi of minus 0.5 as 0.3085, this is equal to 1 minus 2 into 0.3085, that is, 1 minus 0.617, that is, about 0.383. That means, within a very short interval, minus 0.5 to 0.5, almost forty percent of the probability is concentrated. If we consider minus 1 to 1, this contains about 68 percent of the probability; if we consider minus 2 to 2, this contains more

than 95 percent of the probability; if we consider minus 3 to 3, this contains more than 99 percent of the probability. In terms of mu and sigma, this means that the probability of mu minus 3 sigma less than X less than mu plus 3 sigma is greater than 0.99; these are known as the 3 sigma limits, and similarly mu minus 2 sigma to mu plus 2 sigma are called the 2 sigma limits. In industrial applications, where product characteristics such as the widths of bolts produced, the diameters of nuts, etcetera, or various kinds of quality control features follow the normal distribution, the industrial standards specify that, in order that a product be counted as a good or proper item, the specification should be within the 3 sigma limits. So, in industrial standards these things are quite useful.

(Refer Slide Time: 29:49)

Let us consider another application. Assume that the high price of a commodity tomorrow is a normal random variable with mu equal to 10 and sigma equal to 1 by 4. What is the shortest interval that has probability 0.95 of including tomorrow's high price for this commodity? Now, from the properties of the normal distribution that we discussed just now, we should take the interval to be a symmetric interval around the mean, because we require the shortest interval: if we take a non-symmetric interval it becomes slightly longer, since the tails converge faster and there is more concentration of the probability towards the center.
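The concentration figures quoted above, the 1-, 2- and 3-sigma limits, are easy to verify numerically; a sketch (names mine):

```python
import math

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def k_sigma_coverage(k):
    # P(mu - k*sigma < X < mu + k*sigma) for any normal X, i.e. Phi(k) - Phi(-k).
    return Phi(k) - Phi(-k)

p1 = k_sigma_coverage(1)   # about 0.68
p2 = k_sigma_coverage(2)   # about 0.95
p3 = k_sigma_coverage(3)   # above 0.99
```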

(Refer Slide Time: 37:25)

So, since the mean is 10, we can consider an interval of the form 10 minus a to 10 plus a, and we want the probability of X lying in the interval 10 minus a to 10 plus a to be 0.95. Consider the transformation X minus mu by sigma, where mu is 10 here; the event becomes minus a by sigma less than or equal to Z less than or equal to a by sigma, and since sigma is 1 by 4, a by sigma is 4a. So, this becomes Phi of 4a minus Phi of minus 4a equal to 0.95; what is the value of a for which this is satisfied? Once again we utilize the relation between capital Phi of x and capital Phi of minus x. This becomes twice Phi of 4a minus 1 equal to 0.95, and this gives Phi of 4a equal to 0.975; from the tables of the normal distribution we can see that the point up to which the probability is 0.975 is 1.96, so 4a is 1.96, and after evaluation a becomes 0.49. Therefore, the interval 10 minus a to 10 plus a reduces to 9.51 to 10.49. So, if the mean price is 10 and the standard deviation is 0.25, then the shortest interval containing the high price with probability 0.95 is 9.5 to 10.5 approximately, which is basically the 2 sigma interval: here sigma is 0.25, so 2 sigma is 0.5, and in the 2 sigma limits around 10 we have about 95 percent of the probability.
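The interval computation above needs the inverse of capital Phi at 0.975; lacking tables, one can invert Phi by bisection, since it is strictly increasing. A sketch (the bisection helper is my own):

```python
import math

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(p):
    # Invert the strictly increasing cdf by bisection on [-10, 10].
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu, sigma = 10.0, 0.25
a = sigma * Phi_inv(0.975)        # Phi(4a) = 0.975  =>  4a ~ 1.96  =>  a ~ 0.49
interval = (mu - a, mu + a)       # ~ (9.51, 10.49)
```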

(Refer Slide Time: 39:56)

Another important point which I mentioned was the origin of the normal distribution: it was derived by Gauss as a distribution of errors. He was considering astronomical measurements; now, he did not consider one measurement, he considered several measurements and took their average as the estimate of the actual quantity. Since each measurement carries some error, he looked at the distribution of the errors, and Gauss observed that it is the normal distribution; that is why the normal distribution is also called the law of errors, or the error distribution, etcetera. And it turns out that the sample mean, or the sample sum, is then normally distributed. However, even apart from Gauss, mathematicians such as De Moivre and Laplace obtained the normal distribution as an approximation to the binomial distribution, and similarly the Poisson distribution can be approximated by the normal. So, let us look at it: normal approximation to the binomial. Let X follow binomial n, p. As n tends to infinity, the distribution of X minus np divided by root npq is approximately normal 0, 1. Similarly for the Poisson approximation: let X follow Poisson lambda. As lambda tends to infinity, X minus lambda by root lambda converges to normal 0, 1. Basically, these were the original central limit theorems; the modern versions hold for general random variables. So, let us look at applications of this here.

(Refer Slide Time: 37:25)

The probability that a patient recovers from a rare blood disease is 0.4. If 100 people are known to have contracted this disease, what is the probability that less than 30 will survive? If we take X as the number of survivors, then X follows binomial 100, 0.4, and we are interested to calculate the probability that X is less than 30.

(Refer Slide Time: 43:25)

Now, the exact calculation using the binomial distribution reduces to the sum over j from 0 to 29 of 100 C j, 0.4 to the power j, 0.6 to the power 100 minus j. You can see the difficulty of the evaluation and the complexity of the terms: we have factorials in coefficients such as 100 C 10, 100 C 20, 100 C 25, etcetera, and then large powers of numbers smaller than 1, so direct computation invites a lot of numerical error and the terms are complex. We might note that np is 40 and try a Poisson approximation, but that will also be very complicated, because it involves the same kind of summation, e to the power minus 40, 40 to the power j by j factorial, which again has large terms. So, in place of that we use the normal approximation: np is 40 and npq is 24, whose square root is about 4.899. Now, for the probability of X less than 30 we apply the so-called continuity correction; this continuity correction is required when approximating a discrete distribution with a continuous distribution.
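The exact binomial sum is tedious by hand but immediate in code, which lets us compare it directly with the normal approximation. A sketch (names mine; the cut-off 29.5 is the continuity-corrected midpoint between 29 and 30):

```python
import math

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, p = 100, 0.4
# Exact: P(X <= 29) = sum over j = 0..29 of C(100, j) * 0.4^j * 0.6^(100-j)
exact = sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(30))
# Normal approximation: continuity-corrected cut-off 29.5, standardized by np and sqrt(npq)
approx = Phi((29.5 - n * p) / math.sqrt(n * p * (1 - p)))   # Phi of about -2.14
```

The two values agree to within a few thousandths, which is the point of the approximation.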

Consider it like this: in the binomial distribution the probability is concentrated on the integers, so the event X less than 30 is the same as X less than or equal to 29. If we plug either version straight into the normal approximation, we would use 29 or 30 as the cut-off, whereas in a continuous distribution the probability of a single point is negligible; a better thing is to take the middle value between 29 and 30, that is, 29.5. This is called the continuity correction. Now we make use of the fact that X minus mu by sigma is approximately standard normal: 29.5 minus 40 divided by 4.899 gives the probability of Z less than or equal to minus 2.14, that is, the cdf of the standard normal distribution at the point minus 2.14. From the tables of the normal distribution we can observe that this is about 0.016. So, the probability is quite small that less than 30 will survive. Let us look at another example, where the Poisson distribution is approximated by the normal distribution. Consider a large town where home burglaries occur like events in a Poisson process with rate lambda equal to half per day. Find the probability that no more than 10 burglaries occur in a month, or that not less than 17 occur in a month. If I denote by X the number of burglaries occurring in a month's time, then, since lambda is half per day and we assume 30 days in a month, the parameter becomes 15 and X follows Poisson 15; we want the probability of X less than or equal to 10. Again we make use of the continuity correction: X less than or equal to 10 is the same as X less than 11, and in the normal approximation it could have been calculated with a cut-off of 10 or of 11.
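The Poisson example can be checked the same way; a sketch computing both the exact sum and the continuity-corrected normal approximation, with the cut-off taken at the midpoint 10.5 (names mine):

```python
import math

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

lam = 15.0   # burglaries per 30-day month at rate 1/2 per day
# Exact: P(X <= 10) = sum over j = 0..10 of e^{-15} * 15^j / j!
exact = sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(11))
# Normal approximation: standardize the continuity-corrected cut-off 10.5 by lam and sqrt(lam)
approx = Phi((10.5 - lam) / math.sqrt(lam))   # Phi of about -1.16
```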
So, as the continuity correction we take the midpoint, X less than or equal to 10.5. If we make use of the normal approximation here, X minus lambda by root lambda is approximately standard normal as lambda becomes large; lambda is 15, so 10.5 minus 15 by root 15 is, after simplification, about minus 1.16, and the value of the standard normal cumulative distribution function at this point is about 0.123. Similarly, for not less than 17 in a month, that is, the probability of X greater than or equal to 17, we write 1 minus the probability of X less than or equal to 16, which with the continuity correction becomes 1 minus the probability of X less than or equal to 16.5; after shifting by 15 and dividing by root 15, the cut-off becomes 0.39, so the approximate value is 1 minus Phi of 0.39, which is about 0.348 from the tables of the normal distribution. We can compare these with the exact values, which we could have calculated from e to the power minus 15, 15 to the power j by j factorial; in the

first one we would sum over j equal to 0 to 10, and this value is actually 0.118, so we can see that there is a very small margin of error even for lambda equal to 15; in the second one the value is about 0.348 by approximation and the exact value is 0.336, so the error margin is again quite small.

(Refer Slide Time: 50:03)

A distribution related to the normal distribution is the lognormal distribution. If Y follows normal mu, sigma square, then X equal to e to the power Y has a lognormal distribution. The density function of the lognormal distribution is given by 1 by sigma x root 2 pi, e to the power minus 1 by 2 sigma square, log x minus mu whole square, where x is positive and mu and sigma are as usual. The necessity of this kind of distribution can be understood like this: many times the X observations may be very large and the log X observations may be of a more usable size; on the other hand, if the Y observations are very small, then the e to the power Y observations may be of a reasonable size, and e to the power Y then has a lognormal distribution. The moments of the lognormal distribution come directly from the moment generating function of Y: the mean of X is nothing but the moment generating function of Y at 1, that is, e to the power mu plus sigma square by 2. If we consider the second moment, it is equal to the expectation of e to the power 2Y, that is, M Y of 2, so the second raw moment of X is e to the power 2 mu plus 2 sigma square. In general, mu k prime equals the expectation of e to the power kY, that is, e to the power k mu plus half k square

sigma square. So, moments of all orders of the lognormal distribution exist, and they can be expressed in terms of the moment generating function of the normal distribution.

(Refer Slide Time: 52:45)

Let us look at one application here. Suppose the demand for a certain item follows a lognormal distribution with mean 7.43 and variance 0.56; what is the probability that X is greater than 8? We utilize the formulas above: the mean of a lognormal distribution is e to the power mu plus sigma square by 2, and the second moment is e to the power 2 mu plus 2 sigma square. So mu 1 prime equal to 7.43 yields the equation mu plus sigma square by 2 approximately equal to 2, and mu 2 prime is the variance plus mu 1 prime square, that is, 0.56 plus 7.43 square; substituting, after simplification this gives the equation mu plus sigma square approximately equal to 2.01. Solving these two equations, we get mu approximately 2 and sigma approximately 0.1. So, now, probabilities for a lognormal random variable can be calculated using the normal distribution: the probability of X greater than 8 reduces to the probability of log X greater than log 8, which is the probability of Z greater than log 8 minus 2 divided by 0.1, that is, the probability of Z greater than 0.79; from the tables of the standard normal distribution we can see that this is about 0.215. So, this distribution is quite useful in various applications, and since it is directly related to the normal distribution, the calculations related to it are quite conveniently handled using the properties of the normal distribution. Another thing

that you can observe here is that this distribution is a skewed distribution: log X has a symmetric distribution, but X itself has a skewed distribution. We can actually use the third moment, the fourth moment, etcetera, to compute the measures of skewness and kurtosis. We have now considered almost all the important continuous distributions which arise in practice. Of course, when we are looking at any given phenomenon, the distribution corresponding to it can be judged by making a frequency polygon or a histogram; looking at the data, we can see what kind of distribution is best suited to describe it. The distributions that we have discussed so far are the ones which are more important in the sense that they arise in a lot of practical applications, and also historically they are important, as they were first considered for certain phenomena, some physical phenomena or some genetic phenomena, etcetera, where they arise. In the next lecture we will be considering various applications of these distributions. So, we stop here.
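As a numerical footnote to the lognormal demand example discussed above, the two moment equations can be solved in closed form and the tail probability computed; a sketch (all names mine):

```python
import math

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mean, var = 7.43, 0.56
m2 = var + mean**2                 # second raw moment E[X^2]
# From E[X] = e^{mu + sigma^2/2} and E[X^2] = e^{2 mu + 2 sigma^2}:
u = math.log(mean)                 # equals mu + sigma^2 / 2
v = 0.5 * math.log(m2)             # equals mu + sigma^2
sigma2 = 2.0 * (v - u)
mu = u - 0.5 * sigma2              # ~ 2
sigma = math.sqrt(sigma2)          # ~ 0.1
p_over_8 = 1.0 - Phi((math.log(8.0) - mu) / sigma)   # P(X > 8), about 0.215
```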
