
A quick reference for symbols and formulas covered in COGS14:

MEAN OF SAMPLE: x̄ = Σxᵢ / n

x̄ : X bar. Mean (i.e. Average) of a sample.
xᵢ : X sub i. This stands for each individual value you have in your sample. For example, when you're finding the mean of the values 3, 4, and 5, you substitute 3 into the xᵢ spot, then 4, then 5, and then add these together.
n : the number of observations in your sample; for the above example of finding the mean of 3, 4, 5, n = 3 observations.

MEAN OF POPULATION: µ = Σxᵢ / N

µ : mu. Mean of a Population. Notice that this equation is very similar to the one for the mean of a sample; the only difference is that you know you have observed the ENTIRE population (this is rare in real life).

ESTIMATED POPULATION VARIANCE/VARIANCE OF A SAMPLE: s² = Σ(xᵢ − x̄)² / (n − 1)

s² : S squared. The term for the variance of a sample, also known as the estimated variance of a population.
xᵢ : X sub i. This stands for each individual value you have in your sample. For example, when you're finding the variance within the sample of values 3, 4, and 5, you substitute 3 into the xᵢ spot, subtract the mean from 3, and then square this value. Repeat this step for as many values of xᵢ as you have, then add those results together.
n − 1 : Number of observations in your sample minus 1; for the example of observations equaling 3, 4, and 5, n = 3, so n − 1 = 2.
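The sample mean and estimated population variance above can be sketched in a few lines of Python, using the document's own example sample of 3, 4, 5:

```python
# Sample mean: x-bar = sum(x_i) / n
# Estimated population variance: s^2 = sum((x_i - x-bar)^2) / (n - 1)
sample = [3, 4, 5]
n = len(sample)

mean = sum(sample) / n                                      # x-bar = 4.0
s_squared = sum((x - mean) ** 2 for x in sample) / (n - 1)  # s^2 = 1.0
```

Note the n − 1 denominator: that is what makes this the *estimated* population variance rather than the population variance itself.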

ESTIMATED POPULATION STANDARD DEVIATION/STANDARD DEVIATION OF A SAMPLE: s = √(s²)

s : standard deviation of a sample, also known as the estimated standard deviation for the population. See above for how to calculate s², then take the square root of your answer to find the standard deviation.

POPULATION VARIANCE: σ² = Σ(xᵢ − µ)² / N

Note that this equation is very similar to the equation for estimated population variance above. The difference is that you divide by N in the denominator to find population variance, which is equal to the total number of members of your population, whereas you divide by n − 1 to find the ESTIMATED population variance.
σ² : sigma squared. The term used for population variance.
xᵢ : X sub i. This stands for each individual value you have. Substitute each value into the xᵢ spot, subtract the mean, and square the result. Repeat this step for as many values of xᵢ as you have, then add those results together.
N : Number of members of your population/observations.
**This equation will only be used when you can observe the ENTIRE population, which is commonly not feasible in real life. But you should understand how to find population variance, and how it is related to/different from ESTIMATED population variance.

POPULATION STANDARD DEVIATION: σ = √(σ²)

σ : sigma. The term for population standard deviation. See above for how to calculate sigma squared, then take the square root of your answer to find sigma.

ENTROPY: H = −Σ f(xᵢ) log₂(f(xᵢ))

H : the symbol to denote entropy.
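The difference between the N and n − 1 denominators can be made concrete in code. This sketch reuses the values 3, 4, 5, treating them first as a complete population and then as a sample (an assumed illustration):

```python
import math

# Population variance divides by N; estimated (sample) variance divides by n - 1.
values = [3, 4, 5]
N = len(values)
mu = sum(values) / N

sigma_squared = sum((x - mu) ** 2 for x in values) / N       # divide by N
s_squared = sum((x - mu) ** 2 for x in values) / (N - 1)     # divide by n - 1

sigma = math.sqrt(sigma_squared)  # population standard deviation
s = math.sqrt(s_squared)          # estimated population standard deviation
```

The estimated variance is always a bit larger than the population variance computed from the same numbers, which compensates for a sample underestimating the spread of its population.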

f(xᵢ) : Relative frequency of something occurring. For example, you flip a coin 10 times, and 4 times it comes up heads; the relative frequency of heads = 0.4. For each outcome, figure out the relative frequency, then find log₂ of that frequency, and then multiply that value by the relative frequency itself. Once you have done this for each outcome, add all your answers together and take the negative of the sum to find entropy.

MAXIMUM POSSIBLE ENTROPY: H_max = −log₂(1/k) = log₂(k)

k : the number of possible outcomes. For example, with a coin toss there are 2 possible outcomes. With a die roll, there are 6.

RELATIVE ENTROPY: J = H / H_max

A value close to 1 indicates maximum possible entropy. A value close to 0 indicates minimum possible entropy.

EXPECTED VALUE OF A RANDOM VARIABLE: E(X) = Σ P(X = xᵢ)xᵢ

E(X) : Notation for Expected Value.
P : Probability.
xᵢ : X sub i. This again stands for each possible observed value. For example, you are trying to find the expected value for a die that has 5 sides showing "1" and 1 side showing "0"; the 1 and 0 are the values you plug in for xᵢ. You would first figure out the probability of rolling a 1 (P = 5/6) and then multiply that P by the value 1. Then repeat with the probability of rolling a 0 (P = 1/6) times the value 0, add these results together, and find your expected value: E(X) = 5/6.

VARIANCE OF A RANDOM VARIABLE: Var(X) = Σ P(X = xᵢ)(xᵢ − E(X))²

Var(X) : Notation for Variance of a random variable.
E(X) : See above; this means the expected value of a random variable. So to find the variance of a random variable, you will first need to find the expected value.
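The entropy and relative entropy formulas can be sketched with the document's coin example (4 heads in 10 flips):

```python
import math

# Entropy: H = -sum(f * log2(f)) over the relative frequencies of the outcomes.
freqs = [0.4, 0.6]  # heads came up 4 of 10 times, tails 6 of 10
H = -sum(f * math.log2(f) for f in freqs)

# Maximum possible entropy for k outcomes is log2(k);
# relative entropy is J = H / H_max.
k = len(freqs)
H_max = math.log2(k)
J = H / H_max
```

Here H_max = log₂(2) = 1 bit, and J comes out close to 1 because a 0.4/0.6 split is nearly as unpredictable as a fair coin.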

xᵢ : X sub i. This stands for each possible observed value. To find the variance, plug in each possible value for xᵢ, subtract the expected value from this observed value, and square this answer. Then multiply this answer by the probability of getting that observed value. For example, assume we roll a fair die and want to know the variance of the random variable. We find that the Expected Value = 3.5. For each possible value of the die (1, 2, 3, 4, 5, 6) we plug the value in for xᵢ, subtract the expected value of 3.5, square the answer, and then multiply it by the probability of rolling that value (in this case each number has a 1/6 chance of being rolled). Calculate this for all 6 numbers, and sum those components together to find the variance.

STANDARD DEVIATION OF A RANDOM VARIABLE: Std(X) = √Var(X)

Std(X) : Standard Deviation of a Random Variable. Once you compute the variance as in the above example, take the square root of it to get the standard deviation of a random variable.

BINOMIAL DISTRIBUTION: P(k, n, p) = (n choose k) pᵏ(1 − p)ⁿ⁻ᵏ

EXPANDED TO: P(k, n, p) = [n! / (k!(n − k)!)] pᵏ(1 − p)ⁿ⁻ᵏ

k : The number of successful outcomes. You define what you think a success is; it could be something like getting heads on a coin flip.
n : The number of trials. When you are doing a binomial equation, this might be listed as the number of times you flip the coin, reach into a bag, etc.
p : The probability of getting a successful outcome. If you are flipping a coin and have defined success as getting heads, then p = the probability of getting a head when you flip the coin.
P(k, n, p) : The probability of getting k successes, given n number of trials and p probability of success.
(n choose k) : "n choose k." You define getting k number of successes out of n number of trials (see below to calculate).
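The fair-die example above translates directly into code: compute E(X), then the probability-weighted squared deviations, then the square root.

```python
import math

# Var(X) = sum P(X = x_i) * (x_i - E(X))^2, for a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]
p = 1 / 6  # each face is equally likely

EX = sum(p * x for x in outcomes)                # expected value = 3.5
var = sum(p * (x - EX) ** 2 for x in outcomes)   # variance of the die roll
std = math.sqrt(var)                             # standard deviation
```

The variance comes out to 35/12 ≈ 2.92, so the standard deviation is about 1.71.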

n! / (k!(n − k)!) : the expansion of "n choose k." The "!" means factorial, which means you take n and multiply it by all whole numbers smaller than n. For example, to find 4!, you multiply 4 × 3 × 2 × 1. The rest of the equation is just plugging in values to figure out the correct probability of getting k number of successes across n number of trials, given that you have a p probability of success on any given trial.
**Define n, k, and p before you start the problem. It might help to write them next to the binomial equation and then just go back and plug them in where needed.

THE SAMPLING DISTRIBUTION OF THE MEAN: µ_x̄ = µ, or equivalently E(X̄) = E(X)

The concept of the sampling distribution of the mean is a very helpful and crucial concept for statistics. In short, the sampling distribution of the mean is a hypothetical distribution that represents what you would get if you took infinite samples of size n, took the mean of each of those samples, and then graphed those means. Some things we know about the sampling distribution of the mean are:
o For a large enough n (25-100), the sampling distribution of the mean will be normally distributed.
o The mean of the sampling distribution of the mean = the mean of the population.
µ_x̄ : Mean of the sampling distribution of the mean.
µ : Mean of the population.
E(X̄) : Expected value of the sampling distribution of the mean.
E(X) : Expected value of the population.

σ_x̄ = √(Var(X)/n) = σ/√n

σ_x̄ : Standard deviation of the sampling distribution of the mean.
Var(X) : Variance of the population.
n : Number of observations.
σ : Standard deviation of the population.
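The binomial formula can be sketched directly; Python's `math.comb` computes the "n choose k" term. The specific numbers (4 heads in 10 flips of a fair coin) are an assumed illustration:

```python
import math

# P(k; n, p) = C(n, k) * p^k * (1 - p)^(n - k)
def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success probability p."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# e.g. probability of exactly 4 heads in 10 flips of a fair coin
prob = binomial_pmf(4, 10, 0.5)
```

For a fair coin every sequence of 10 flips has probability (1/2)¹⁰ = 1/1024, and C(10, 4) = 210 of those sequences contain exactly 4 heads, so the result is 210/1024 ≈ 0.205.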

SO, we know that the standard deviation of the sampling distribution of the mean will always be smaller than the standard deviation of the population by a specific amount (i.e. the population standard deviation divided by the square root of the number of observations in a sample).

COHEN'S D: d = (x̄ − µ) / σ

x̄ : mean of your sample.
µ : mean of the null hypothesis.
σ : Standard deviation of the null hypothesis.
Cohen's d is a measure of effect size, or how large of an effect your sample had in comparison to the null hypothesis: d = 0.20 (small effect), d = 0.50 (medium effect), d = 0.80 (large effect).

OBSERVED Z-SCORE: z = (x̄ − µ) / σ_x̄

EXPANDED TO: z = (x̄ − µ) / (σ/√n)

x̄ : mean of your sample.
µ : mean of the null hypothesis.
σ_x̄ : standard error of the mean (also known as the standard deviation of the population divided by the square root of the number of observations).
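The observed z-score is a two-step calculation: standard error first, then the standardized difference. The numbers here (sample mean 105 from n = 25 observations, null-hypothesis mean 100, population standard deviation 15) are hypothetical, chosen only for illustration:

```python
import math

# z = (x-bar - mu) / (sigma / sqrt(n))
x_bar, mu, sigma, n = 105, 100, 15, 25

standard_error = sigma / math.sqrt(n)  # sigma_x-bar = 15 / 5 = 3.0
z = (x_bar - mu) / standard_error      # observed z-score
```

Dividing σ by √n rather than by n is the step students most often get wrong; the √25 = 5 here gives a standard error of 3, so z = 5/3.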

CONFIDENCE INTERVALS (FOR A Z-TEST): x̄ ± (z_conf)σ_x̄

x̄ : observed mean of your sample.
z_conf : the critical z-scores for your level of confidence. For purposes of this class, think of these like when you are finding critical z-scores for two-tailed z-tests. If you have a 95% confidence interval, you will have the same z_conf as you would have for a 2-tailed z-test with an alpha level of 0.05. To find your z_conf, subtract your level of confidence from 100 (i.e. 100 − 95% confidence = 5%). Divide this 5% by 2 = 2.5% or 0.025, find 0.025 in the C column of the z-table, then find the corresponding z-score in the A column.
σ_x̄ : standard error of the mean (also known as the standard deviation of the population divided by the square root of the number of observations).

ONE-SAMPLE T-TEST (3 related formulas):

1) t = (x̄ − µ) / s_x̄
2) s_x̄ = σ̂/√n = s/√n
3) s = σ̂ = √[ Σ(x − x̄)² / (n − 1) ]

x̄ : your sample mean.
x : Each individual observation in your sample.
n − 1 : the number of observations in your sample, minus 1.
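The z-test confidence interval is just the sample mean plus or minus the critical z times the standard error. This sketch reuses the hypothetical numbers from the z-score discussion (sample mean 105, population sd 15, n = 25) with the standard 95% critical value of 1.96:

```python
import math

# 95% CI for a z-test: x-bar +/- z_conf * sigma_x-bar
x_bar, sigma, n = 105, 15, 25
z_conf = 1.96  # critical z for 95% confidence (alpha = 0.05, two-tailed)

se = sigma / math.sqrt(n)
lower = x_bar - z_conf * se
upper = x_bar + z_conf * se
```

With a standard error of 3, the margin is 1.96 × 3 = 5.88, giving the interval (99.12, 110.88).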

µ : the population mean (usually what you are comparing your sample mean to, to see if there is a difference).
s_x̄ / σ̂_x̄ : The estimated standard error of the mean. Note this is also represented as the Greek letter sigma (σ) with a hat, so we can call it "sigma hat"; this indicates it's an estimate.
s = σ̂ : The estimated standard deviation of the population. To estimate the population standard deviation, we need to find s or σ̂, which we find in a similar way to how we always calculate standard deviation. Take each individual score (x) and subtract the mean (x̄). Square that value. Repeat for each individual score and then add up what you get. Then divide that value by the number of observations minus 1 (n − 1), and finally take the square root of your answer to find s or σ̂.
**Note that you will use n at least 2 times in the t-score formula: once to find the estimated standard deviation (formula #3 above) and again when finding the estimated standard error of the mean (formula #2). You will also need n to find your critical t-score on your t-score chart. Your degrees of freedom (df) is equal to the number of observations minus 1 for a one-sample t-test (so df = n − 1 for this test).

CONFIDENCE INTERVAL FOR A ONE-SAMPLE T-TEST: x̄ ± t_conf(s_x̄)

x̄ : Observed mean of your sample.
t_conf : the critical t-scores for your level of confidence. For purposes of this class, think of these like when you are finding critical t-scores for two-tailed t-tests. If you have a 95% confidence interval, you will have the same t_conf as you would have for a 2-tailed t-test with an alpha level of 0.05. To find your t_conf, subtract your level of confidence from 100 (i.e. 100 − 95% confidence = 5%). Go to the 2-tailed test side of the t-table, find the column for 0.05, and go down to your df to find the correct t_conf.
s_x̄ : Estimated standard error of the mean (see above to calculate).
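The three related t-test formulas chain together naturally in code. The sample and null-hypothesis mean below are hypothetical, chosen only to show the sequence of steps:

```python
import math

# One-sample t-test:
#   (3) s = sqrt(sum((x - x-bar)^2) / (n - 1))   estimated population sd
#   (2) s_x-bar = s / sqrt(n)                     estimated standard error
#   (1) t = (x-bar - mu) / s_x-bar                t statistic
sample = [3, 4, 5, 6, 7]
mu = 4  # null-hypothesis mean

n = len(sample)
x_bar = sum(sample) / n
s = math.sqrt(sum((x - x_bar) ** 2 for x in sample) / (n - 1))
se = s / math.sqrt(n)
t = (x_bar - mu) / se

df = n - 1  # degrees of freedom for looking up the critical t
```

Note that n appears in both formula #3 (as n − 1) and formula #2 (as √n), exactly as the note above warns.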