# The Scalar Algebra of Means, Covariances, and Correlations


In this chapter, we review the definitions of some key statistical concepts: means, covariances, and correlations. We show how the means, variances, covariances, and correlations of variables are related when the variables themselves are connected by one or more linear equations, by developing the linear combination and transformation rules. These rules, which hold both for lists of numbers and for random variables, form the algebraic foundations for regression analysis, factor analysis, and SEM.

## 3.1 Means, Variances, Covariances, and Correlations

In this section, we quickly review the basic definitions of the statistical quantities at the heart of structural equation modeling.

### Means

The mean of a statistical population is simply the arithmetic average of all the numbers in the population. Usually, we model statistical populations as random variables with a particular statistical distribution. Since this is not a course in probability theory, we will not review the formal definition of a random variable, but rather use a very informal notion of such a variable as a process that generates numbers according to a specifiable system. The mean of the population can be represented as the expected value of the random variable.

**Definition 3.1 (Expected Value of a Random Variable)** The expected value of a random variable $X$, denoted $E(X)$, is the long-run average of the numbers generated by the random variable $X$.

This definition will prove adequate for our purposes in this text, although the veteran of a course or two in statistics undoubtedly knows the formal definitions in terms of sums and integrals. The arithmetic average of a sample of numbers taken from some statistical population is referred to as the sample mean, which was defined in an earlier section.

### Deviation Scores

The deviation score plays an important role in statistical formulas. The definitions are quite similar for the sample and the population: in each case, the deviation score is simply the difference between a score and the mean. Deviation scores in a statistical population are obtained by subtracting the population mean from every score in the population. If the population is modeled with a random variable, the random variable is converted to deviation score form by subtracting the population mean from each observation, as described in the following definition.

**Definition 3.2 (Deviation Score Random Variable)** The deviation score form of a random variable $X$ is defined as $[dX] = X - E(X)$.

Sample deviation scores were defined in an earlier definition.

## 3.2 Linear Transformation Rules

The linear transformation is an important concept in statistics, because many elementary statistical formulas involve linear transformations.

**Definition 3.3 (Linear Transformation)** Suppose we have two variables, $X$ and $Y$, representing either random variables or two lists of numbers. Suppose furthermore that the $Y$ scores can be expressed as linear functions of the $X$ scores, that is, $y_i = a x_i + b$ for some constants $a$ and $b$. Then we say that $Y$ is a linear transformation (or "linear transform") of $X$.
If the multiplicative constant $a$ is positive, then the linear transformation is order-preserving.

### Linear Transform Rules for Means

Both sample and population means behave in a predictable and convenient way when one variable is a linear transformation of the other.

**Theorem 3.1 (The Mean of a Linear Transform)** Let $Y$ and $X$ represent two lists of $N$ numbers. If, for each $y_i$, we have $y_i = a x_i + b$, then $\bar{y} = a\bar{x} + b$. If $Y$ and $X$ represent random variables, and if $Y = aX + b$, then $E(Y) = aE(X) + b$.

*Proof.* For lists of numbers, we simply substitute definitions and employ elementary summation algebra. That is,

$$
\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i
        = \frac{1}{N}\sum_{i=1}^{N}(a x_i + b)
        = \frac{1}{N}\sum_{i=1}^{N} a x_i + \frac{1}{N}\sum_{i=1}^{N} b
        = a\,\frac{1}{N}\sum_{i=1}^{N} x_i + \frac{1}{N}\,Nb
        = a\bar{x} + b
$$

The second step follows from the distributive rule of summation algebra (Result 2.4, page 16). The third step uses the second constant rule of summation algebra (Result 2.3, page 15) for the left term, and the first constant rule (Result 2.1, page 13) for the right. Structurally similar proofs hold for both discrete and continuous random variables, and are given in most elementary mathematical statistics texts. For brevity, we omit them here.

Theorem 3.1 actually includes four fundamental rules about the sample mean as special cases. First, by letting $b = 0$ and remembering that $a$ can represent either multiplication or division by a constant, we see that multiplying or dividing a variable (or every number in a sample) by a constant multiplies or divides its mean by the same constant. Second, by letting $a = 1$, and recalling that $b$ can represent either addition or subtraction, we see that adding or subtracting a constant from a variable adds or subtracts that constant from the mean of that variable.

### Variances and Standard Deviations

The variance of a statistical sample or population is a measure of how far the numbers are spread out around the mean. Since the deviation score reflects distance from the mean, perhaps the two most obvious measures of spread are the average absolute deviation and the average squared deviation. For reasons of mathematical tractability, the latter is much preferred by inferential statisticians.
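Before turning to variances, Theorem 3.1 can be checked numerically. The following Python sketch uses a small hypothetical set of scores (not from the text), transforms them, and verifies that the mean of the transformed scores equals the transformed mean.

```python
# Numerical check of Theorem 3.1: if y_i = a*x_i + b, then ybar = a*xbar + b.
# The scores and constants below are hypothetical, chosen only for illustration.
x = [2.0, 4.0, 9.0]
a, b = 3.0, -1.0

y = [a * xi + b for xi in x]          # 5.0, 11.0, 26.0

xbar = sum(x) / len(x)                # 5.0
ybar = sum(y) / len(y)                # 14.0

# The mean of the transformed scores equals the transformed mean.
assert abs(ybar - (a * xbar + b)) < 1e-12
```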

**Definition 3.4 (Population Variance)** The variance of a statistical population, denoted $\mathrm{Var}(X)$ or $\sigma^2_X$, is the average squared deviation score in that population. Hence,

$$\sigma^2_X = E\left[(X - E(X))^2\right]$$

An important relationship, which we give here without proof, is

**Result 3.1 (Population Variance Computational Formula)**

$$\sigma^2_X = E(X^2) - (E(X))^2 \tag{3.1}$$

$$\sigma^2_X = E(X^2) - \mu^2 \tag{3.2}$$

**Definition 3.5 (The Population Standard Deviation)** The population standard deviation is the square root of the population variance, i.e., $\sigma_X = \sqrt{\sigma^2_X}$.

The natural transfer of the notion of a population variance to a sample statistic would be simply to compute, for a sample of $N$ numbers, the average squared (sample) deviation score. Unfortunately, such an estimate tends to be, on average, lower than the population variance, i.e., negatively biased. To correct this bias, we divide the sum of squared deviations by $N - 1$, instead of $N$.

**Definition 3.6 (The Sample Variance)** The sample variance for a set of $N$ scores on the variable $X$ is defined as

$$S^2_X = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2$$

As an elementary exercise in summation algebra, it can be shown that the sample variance may also be computed via the following equivalent, but more convenient, formula:

$$S^2_X = \frac{1}{N-1}\left(\sum_{i=1}^{N} x_i^2 - \frac{\left(\sum_{i=1}^{N} x_i\right)^2}{N}\right)$$

**Definition 3.7 (The Sample Standard Deviation)** The sample standard deviation is the square root of the sample variance, i.e., $S_X = \sqrt{S^2_X}$.
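The equivalence of the definitional and computational formulas for the sample variance is easy to confirm numerically. The sketch below uses a small hypothetical data set.

```python
# Check that the definitional and computational formulas for the sample
# variance (Definition 3.6 and its computational form) agree.
x = [1.0, 2.0, 3.0, 6.0]              # hypothetical scores
N = len(x)
xbar = sum(x) / N

# Definitional form: average squared deviation, dividing by N - 1.
s2_def = sum((xi - xbar) ** 2 for xi in x) / (N - 1)

# Computational form: sum of squares minus (sum)^2 / N, over N - 1.
s2_comp = (sum(xi ** 2 for xi in x) - sum(x) ** 2 / N) / (N - 1)

assert abs(s2_def - s2_comp) < 1e-12
```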

### Linear Transform Rules for Variances and Standard Deviations

In this section, we examine the effect of a linear transformation on the variance and standard deviation. Since both the variance and the standard deviation are measures on deviation scores, we begin by investigating the effect of a linear transformation of raw scores on the corresponding deviation scores. We find that the multiplicative constant comes straight through in the deviation scores, while an additive or subtractive constant has no effect on deviation scores.

**Result 3.2 (Linear Transforms and Deviation Scores)** Suppose a variable $X$ is transformed linearly into a variable $Y$, i.e., $Y = aX + b$. Then the $Y$ deviation scores must relate to the $X$ deviation scores via the equation $[dY] = a[dX]$.

*Proof.* First, consider the case of sets of $N$ scores. If $y_i = a x_i + b$, then

$$
[dy]_i = y_i - \bar{y} = (a x_i + b) - \bar{y} = (a x_i + b) - (a\bar{x} + b) \quad \text{[Theorem 3.1]}
$$

$$
= a x_i + b - a\bar{x} - b = a(x_i - \bar{x}) = a[dx]_i
$$

Next, consider the case of random variables. We have

$$
[dY] = Y - E(Y) = aX + b - (aE(X) + b) \quad \text{[Theorem 3.1]} \; = a(X - E(X)) = a[dX]
$$

This completes the proof.

A simple numerical example will help concretize these results. Consider the original set of scores shown in Table 3.1. Their deviation score equivalents are $-1$, $0$, and $+1$, as shown. Suppose we generate a new set of $Y$ scores using the equation $Y = 2X + 5$. The new deviation scores are twice the old deviation scores: the multiplicative constant $2$ has an effect on the deviation scores, but the additive constant $5$ does not.

From the preceding result, it is easy to deduce the effect of a linear transformation on the variance and standard deviation. Since additive constants have no effect on deviation scores, they cannot affect either the variance or the standard deviation. On the other hand, multiplicative constants come straight through in the deviation scores, and this is reflected in the standard deviation and the variance.

**Table 3.1** Effect of a Linear Transform on Deviation Scores

| $X$ | $[dX]$ | $Y = 2X + 5$ | $[dY]$ |
|-----|--------|--------------|--------|
| 1   | $-1$   | 7            | $-2$   |
| 2   | $0$    | 9            | $0$    |
| 3   | $+1$   | 11           | $+2$   |

**Theorem 3.2 (Effect of a Linear Transform on the Variance and SD)** Suppose a variable $X$ is transformed into $Y$ via the linear transform $Y = aX + b$. Then, for random variables, the following results hold:

$$\sigma_Y = |a|\,\sigma_X \qquad\qquad \sigma^2_Y = a^2 \sigma^2_X$$

Similar results hold for samples of $N$ numbers, i.e.,

$$S_Y = |a|\,S_X \qquad\qquad S^2_Y = a^2 S^2_X$$

*Proof.* Consider the case of the variance of sample scores. By definition,

$$
S^2_Y = \frac{1}{N-1}\sum_{i=1}^{N}[dy]_i^2
      = \frac{1}{N-1}\sum_{i=1}^{N}(a[dx]_i)^2 \quad \text{(from Result 3.2)}
$$

$$
= \frac{1}{N-1}\sum_{i=1}^{N} a^2[dx]_i^2
= a^2\left(\frac{1}{N-1}\sum_{i=1}^{N}[dx]_i^2\right)
= a^2 S^2_X
$$

For random variables, we have

$$
\sigma^2_Y = E\left[(Y - E(Y))^2\right]
           = E\left[(aX + b - E(aX + b))^2\right]
           = E\left[(aX + b - (aE(X) + b))^2\right] \quad \text{(from Theorem 3.1)}
$$

$$
= E\left[(aX + b - aE(X) - b)^2\right]
= E\left[(aX - aE(X))^2\right]
= E\left[a^2(X - E(X))^2\right]
= a^2 E\left[(X - E(X))^2\right]
= a^2 \sigma^2_X
$$
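A quick numerical check of Theorem 3.2, on hypothetical scores, with a negative multiplier to highlight the $|a|$ in the standard deviation rule:

```python
# Check Theorem 3.2: for Y = aX + b, the SD is scaled by |a| and the
# variance by a^2. Scores and constants are hypothetical.
import statistics

x = [1.0, 2.0, 3.0]
a, b = -2.0, 5.0                      # a negative, to show the |a| in the SD rule
y = [a * xi + b for xi in x]          # 3.0, 1.0, -1.0

assert abs(statistics.variance(y) - a ** 2 * statistics.variance(x)) < 1e-12
assert abs(statistics.stdev(y) - abs(a) * statistics.stdev(x)) < 1e-12
```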

### Standard Scores

Having deduced the effect of a linear transformation on the mean and variance of a random variable or a set of sample scores, we are now better equipped to understand the notion of standard scores. Standard scores are scores that have a particular desired mean and standard deviation. So long as a set of scores are not all equal, it is always possible to transform them, by a linear transformation, so that they have a particular desired mean and standard deviation.

**Result 3.3 (Linear Standardization Rules)** Suppose we restrict ourselves to the class of order-preserving linear transformations of the form $Y = aX + b$, with $a > 0$. If a set of scores currently has mean $\bar{x}$ and standard deviation $S_X$, and we wish the scores to have mean $\bar{y}$ and standard deviation $S_Y$, we can achieve the desired result with a linear transformation with

$$a = \frac{S_Y}{S_X} \qquad \text{and} \qquad b = \bar{y} - a\bar{x}$$

Similar rules hold for random variables, i.e., a random variable $X$ with mean $\mu_X$ and standard deviation $\sigma_X$ may be linearly transformed into a random variable $Y$ with mean $\mu_Y$ and standard deviation $\sigma_Y$ by using

$$a = \frac{\sigma_Y}{\sigma_X} \qquad \text{and} \qquad b = \mu_Y - a\mu_X$$

**Example 3.1 (Rescaling Grades)** Result 3.3 is used frequently to rescale grades in university courses. Suppose, for example, the original grades have a mean of 70 and a standard deviation of 8, but the typical scale for grades is a mean of 68 and a standard deviation of 12. In that case, we have

$$a = \frac{S_Y}{S_X} = \frac{12}{8} = 1.5 \qquad\qquad b = \bar{y} - a\bar{x} = 68 - (1.5)(70) = -37$$

Suppose that there is a particular desired metric (i.e., desired mean and standard deviation) in which scores are to be reported, and that you seek a formula for transforming any set of data into this desired metric. For example, you seek a formula that will transform any set of $x_i$ into $y_i$ with a mean of 500 and a standard deviation of 100. Result 3.3 might make it seem at first glance that such a formula does not exist, i.e., that the formula

would change with each new data set. This is, of course, true in one sense. However, in another sense, we can write a single formula that will do the job on any data set. We begin by defining the notion of Z-scores.

**Definition 3.8 (Z-Scores)** Z-scores are scores that have a mean of 0 and a standard deviation of 1. Any set of $N$ scores $x_i$ that are not all equal can be converted into Z-scores by the transformation rule

$$z_i = \frac{x_i - \bar{x}}{S_X}$$

We say that a random variable is in Z-score form if it has a mean of 0 and a standard deviation of 1. Any random variable $X$ having expected value $\mu$ and standard deviation $\sigma$ may be converted to Z-score form via the transformation

$$Z = \frac{X - \mu}{\sigma}$$

That the Z-score transformation rule accomplishes its desired purpose is derived easily from the linear transformation rules. Specifically, when we subtract the current mean from a random variable (or a list of numbers), we do not affect the standard deviation, while we reduce the mean to zero. At that point, we have a list of numbers with a mean of 0 and the original, unchanged standard deviation. When we next divide by the standard deviation, the mean remains at zero, while the standard deviation changes to 1. Once the scores are in Z-score form, the linear transformation rules reveal that it is very easy to transform them into any other desired metric. Suppose, for example, we wished to transform the scores to have a mean of 500 and a standard deviation of 100. Since the mean is 0 and the standard deviation 1, multiplying by 100 will change the standard deviation to 100 while leaving the mean at zero. If we then add 500 to every value, the resulting scores will have a mean of 500 and a standard deviation of 100. Consequently, the following transformation rule will change any set of $X$ scores into $Y$ scores with a mean of 500 and a standard deviation of 100, so long as the original scores are not all equal.
$$y_i = 100\left(\frac{x_i - \bar{x}}{S_X}\right) + 500 \tag{3.3}$$

The part of Equation 3.3 in parentheses transforms the scores into Z-score form. Multiplying by 100 moves the standard deviation to 100 while leaving the mean at 0. Adding 500 then moves the mean to 500 while leaving the standard deviation at 100.

Z-scores also have an important theoretical property, namely their invariance under linear transformation of the raw scores.

**Result 3.4 (Invariance Property of Z-Scores)** Suppose you have two variables $X$ and $Y$, and $Y$ is an order-preserving linear transformation of $X$.

Consider $X$ and $Y$ transformed to Z-score variables $[zX]$ and $[zY]$. Then $[zX] = [zY]$. The result holds for sets of sample scores, or for random variables.

To construct a simple example that dramatizes this result, we will employ a well-known result in statistics.

**Result 3.5 (Properties of 3 Evenly Spaced Numbers)** Consider a set of 3 evenly spaced numbers, with middle value $M$ and spacing $D$, the difference between adjacent numbers. For example, the numbers 3, 6, 9 have $M = 6$ and $D = 3$. For any such set of numbers, the sample mean $\bar{x}$ is equal to the middle value $M$, and the sample standard deviation $S_X$ is equal to $D$, the spacing.

This result makes it easy to construct simple data sets with a known mean and standard deviation. It also makes it easy to compute the mean and standard deviation of 3 evenly spaced numbers by inspection.

**Example 3.2 (Invariance of Z-Scores Under Linear Transformation)** Consider the data in Table 3.2. The $X$ raw scores are evenly spaced, so it is easy to see they have a mean of 100 and a standard deviation of 15. The $[zX]$ scores are $-1$, $0$, and $+1$, respectively. Now, suppose we linearly transform the $X$ raw scores into $Y$ scores with the equation $Y = 2X - 5$. The new scores, as can be predicted from the linear transformation rules, will have a mean given by

$$\bar{y} = 2\bar{x} - 5 = 2(100) - 5 = 195,$$

and a standard deviation of

$$S_Y = 2S_X = 2(15) = 30$$

Since the $Y$ raw scores were obtained from the $X$ scores by linear transformation, we can tell, without computing them, that the $[zY]$ scores must also be $-1$, $0$, and $+1$. This is easily verified by inspection.

**Table 3.2** Invariance of Z-Scores Under Linear Transformation

| $X$ | $[zX]$ | $Y = 2X - 5$ | $[zY]$ |
|-----|--------|--------------|--------|
| 85  | $-1$   | 165          | $-1$   |
| 100 | $0$    | 195          | $0$    |
| 115 | $+1$   | 225          | $+1$   |

An important implication of Result 3.4 is that any statistic that is solely a function of Z-scores must be invariant under linear transformations of the observed variables.
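The Z-score machinery above is easy to exercise in code. The sketch below standardizes the Table 3.2 scores, rescales them to a target metric in the style of Equation 3.3, and checks the invariance of Z-scores under the order-preserving transform $Y = 2X - 5$.

```python
# Standardize scores (Definition 3.8), rescale to a target metric
# (Equation 3.3), and check Z-score invariance (Result 3.4).
import statistics

def zscores(v):
    """Convert a list of (not all equal) scores to Z-score form."""
    m, s = statistics.mean(v), statistics.stdev(v)
    return [(x - m) / s for x in v]

x = [85.0, 100.0, 115.0]              # mean 100, SD 15 (Result 3.5)

# Equation 3.3: move any Z-scored data to mean 500, SD 100.
y = [100.0 * z + 500.0 for z in zscores(x)]
assert abs(statistics.mean(y) - 500.0) < 1e-9
assert abs(statistics.stdev(y) - 100.0) < 1e-9

# Result 3.4: the transform Y = 2X - 5 (Table 3.2) leaves Z-scores unchanged.
w = [2.0 * xi - 5.0 for xi in x]
assert all(abs(a - b) < 1e-12 for a, b in zip(zscores(x), zscores(w)))
```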

### Covariances and Correlations

Multivariate analysis deals with the study of sets of variables, observed on one or more populations, and the way these variables covary, or behave jointly. A natural measure of how variables vary together is the covariance, based on the following simple idea. Suppose there is a dependency between two variables $X$ and $Y$, reflected in the fact that high values of $X$ tend to be associated with high values of $Y$, and low values of $X$ tend to be associated with low values of $Y$. We might refer to this as a direct (positive) relationship between $X$ and $Y$. If such a relationship exists, and we convert $X$ and $Y$ to deviation score form, the cross-product of the deviation scores should tend to be positive: when an $X$ score is above its mean, a $Y$ score will tend to be above its mean (in which case both deviation scores are positive), and when $X$ is below its mean, $Y$ will tend to be below its mean (in which case both deviation scores are negative, and their product positive). Covariance is simply the average cross-product of deviation scores.

**Definition 3.9 (Population Covariance)** The covariance of two random variables $X$ and $Y$, denoted $\sigma_{XY}$, is the average cross-product of deviation scores, computed as

$$\sigma_{XY} = E\left[(X - E(X))(Y - E(Y))\right] \tag{3.4}$$

$$\sigma_{XY} = E(XY) - E(X)E(Y) \tag{3.5}$$

*Comment.* Proving the equality of Equations 3.4 and 3.5 is an elementary exercise in expected value algebra that is given in many mathematical statistics texts. Equation 3.5 is generally more convenient in simple proofs.

When defining a sample covariance, we divide by $N - 1$ instead of $N$, to maintain consistency with the formulas for the sample variance and standard deviation.

**Definition 3.10 (Sample Covariance)** The sample covariance of $X$ and $Y$ is defined as

$$S_{XY} = \frac{1}{N-1}\sum_{i=1}^{N}[dx]_i[dy]_i \tag{3.6}$$

$$= \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y}) \tag{3.7}$$

$$= \frac{1}{N-1}\left(\sum_{i=1}^{N} x_i y_i - \frac{\sum_{i=1}^{N} x_i \sum_{i=1}^{N} y_i}{N}\right) \tag{3.8}$$

The covariance, being a measure on deviation scores, is invariant under addition or subtraction of a constant. On the other hand, multiplication or

division by a constant affects the covariance, as described in the following result. Note that the variance of a variable may be thought of as its covariance with itself. Hence, in some mathematical discussions, the notation $\sigma_{XX}$ (i.e., the covariance of $X$ with itself) is used to signify the variance of $X$.

**Result 3.6 (Effect of a Linear Transform on Covariance)** Suppose the variables $X$ and $Y$ are transformed into the variables $W$ and $M$ via the linear transformations $W = aX + b$ and $M = cY + d$. Then the covariance between the new variables $W$ and $M$ relates to the original covariance in the following way. For random variables, we have

$$\sigma_{WM} = ac\,\sigma_{XY}$$

For samples of scores, we have

$$S_{WM} = ac\,S_{XY}$$

Covariances are important in statistical theory. (If they were not, why would we be studying the analysis of covariance structures?) However, the covariance between two variables does not have a stable interpretation if the variables change scale. For example, the covariance between height and weight changes if you measure height in inches as opposed to centimeters: the value will be 2.54 times as large if you measure height in centimeters. So, for example, if someone says "the covariance between height and weight is 65," it is impossible to tell what this implies without knowing the measurement scales for height and weight.

### The Population Correlation

As we noted in the preceding section, the covariance is not a scale-free measure of covariation. To stabilize the covariance across changes in scale, one need only divide it by the product of the standard deviations of the two variables. The resulting statistic, $\rho_{XY}$, the Pearson Product Moment Correlation Coefficient, is defined as the average cross-product of Z-scores. From Result 3.4, we can deduce without effort one important characteristic of the Pearson correlation: it is invariant under changes of scale of the variables.
So, for example, the correlation between height and weight will be the same whether you measure height in inches or centimeters, and weight in pounds or kilograms.

**Definition 3.11 (The Population Correlation $\rho_{XY}$)** For two random variables $X$ and $Y$, the Pearson Product Moment Correlation Coefficient is defined as

$$\rho_{XY} = E([zX][zY]) \tag{3.9}$$

Alternatively, one may define the correlation as

$$\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y} \tag{3.10}$$

### The Sample Correlation

The sample correlation coefficient is defined analogously to the population quantity. We have

**Definition 3.12 (The Sample Correlation $r_{XY}$)** For $N$ pairs of scores $X$ and $Y$, the Pearson Product Moment Correlation Coefficient is defined as

$$r_{XY} = \frac{1}{N-1}\sum_{i=1}^{N}[zx]_i[zy]_i \tag{3.11}$$

There are a number of alternative computational formulas, such as

$$r_{XY} = \frac{S_{XY}}{S_X S_Y}, \tag{3.12}$$

or its fully explicit expansion

$$r_{XY} = \frac{N\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{\left(N\sum x_i^2 - \left(\sum x_i\right)^2\right)\left(N\sum y_i^2 - \left(\sum y_i\right)^2\right)}} \tag{3.13}$$

(Note: since all summations in Equation 3.13 run from 1 to $N$, the index limits have been omitted to simplify the notation.) With modern statistical software, actual hand computation of the correlation coefficient is a rare event.

## 3.3 Linear Combination Rules

The linear combination (or LC) is a key idea in statistics. While studying factor analysis, structural equation modeling, and related methods, we will encounter linear combinations repeatedly. Understanding how they behave is important to the study of SEM. In this section, we define linear combinations and describe their statistical behavior.

### Definition of a Linear Combination

**Definition 3.13 (Linear Combination)** A linear combination (or linear composite) $L$ of $J$ random variables $X_j$ is any weighted sum of those variables, i.e.,

$$L = \sum_{j=1}^{J} c_j X_j \tag{3.14}$$

For sample data, in a data matrix $X$ with $N$ rows and $J$ columns, representing $N$ scores on each of $J$ variables, a linear combination score for the $i$th set of observations is any expression of the form

$$L_i = \sum_{j=1}^{J} c_j x_{ij} \tag{3.15}$$
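Before working with linear combinations, the sample covariance and correlation formulas above (Equations 3.6–3.8 and 3.11–3.13) can be verified together on a small hypothetical data set:

```python
# Check that the sample covariance (Definition 3.10) and the three sample
# correlation formulas (Equations 3.11-3.13) agree. Data are hypothetical.
import math
import statistics

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 1.0, 4.0, 3.0]
N = len(x)
xbar, ybar = statistics.mean(x), statistics.mean(y)
sx, sy = statistics.stdev(x), statistics.stdev(y)

# Definition 3.10: average cross-product of deviation scores, over N - 1.
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (N - 1)

# Equation 3.11: average cross-product of Z-scores.
r_z = sum((xi - xbar) / sx * ((yi - ybar) / sy) for xi, yi in zip(x, y)) / (N - 1)

# Equation 3.12: covariance over the product of the standard deviations.
r_cov = sxy / (sx * sy)

# Equation 3.13: fully explicit raw-score form.
num = N * sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y)
den = math.sqrt((N * sum(xi ** 2 for xi in x) - sum(x) ** 2)
                * (N * sum(yi ** 2 for yi in y) - sum(y) ** 2))
r_raw = num / den

assert abs(r_z - r_cov) < 1e-12 and abs(r_cov - r_raw) < 1e-12
```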

**Theorem 3.3 (Mean of a Linear Combination)** Given $J$ random variables $X_j$ having means $\mu_j$, the mean of the linear combination $L = \sum_{j=1}^{J} c_j X_j$ is

$$\mu_L = \sum_{j=1}^{J} c_j \mu_j$$

For a set of $N$ sample scores on $J$ variables, with linear combination scores computed as $l_i = \sum_{j=1}^{J} c_j x_{ij}$, the mean of the linear combination scores is given by

$$\bar{l} = \sum_{j=1}^{J} c_j \bar{x}_j$$

This simple but very useful result allows one to compute the mean of an LC without actually calculating LC scores.

**Example 3.4 (Computing the Mean of a Linear Combination)** Consider again the data in Table 3.3. One may calculate the mean of the final grades by calculating the individual grades, summing them, and dividing by $N$. An alternative is to compute the mean from the means of $X$ and $Y$. For example, the mean of the final grades must be

$$\bar{g} = \frac{1}{3}\bar{x} + \frac{2}{3}\bar{y}$$

### Variance and Covariance of Linear Combinations

The variance of an LC also plays a very important role in statistical theory.

**Theorem 3.4 (Variance of a Linear Combination)** Given $J$ random variables $X_j$ having variances $\sigma^2_j = \sigma_{jj}$ and covariances $\sigma_{jk}$ (between variables $X_j$ and $X_k$), the variance of the linear combination $L = \sum_{j=1}^{J} c_j X_j$ is

$$\sigma^2_L = \sum_{j=1}^{J}\sum_{k=1}^{J} c_j c_k \sigma_{jk} \tag{3.17}$$

$$= \sum_{j=1}^{J} c_j^2 \sigma_j^2 + 2\sum_{j=2}^{J}\sum_{k=1}^{j-1} c_j c_k \sigma_{jk} \tag{3.18}$$

For a set of $N$ sample scores on $J$ variables, with linear combination scores computed as $L_i = \sum_{j=1}^{J} c_j x_{ij}$, the variance of the linear combination scores is given by

$$S^2_L = \sum_{j=1}^{J}\sum_{k=1}^{J} c_j c_k S_{jk} \tag{3.19}$$

$$= \sum_{j=1}^{J} c_j^2 S_j^2 + 2\sum_{j=2}^{J}\sum_{k=1}^{j-1} c_j c_k S_{jk} \tag{3.20}$$

It is, of course, possible to define more than one LC on the same set of variables. For example, consider a personality questionnaire that obtains responses from individuals on a large number of items. Several different personality scales might be calculated as linear combinations of the items. (Items not included in a scale have a linear weight of zero.) In that case, one may be concerned with the covariance or correlation between the two LCs. The following theorem describes how the covariance of two LCs may be calculated.

**Theorem 3.5 (Covariance of Two Linear Combinations)** Consider two linear combinations $L$ and $M$ of a set of $J$ random variables $X_j$, having variances and covariances $\sigma_{ij}$. (If $i = j$, a variance of variable $i$ is being referred to; otherwise, a covariance between variables $i$ and $j$.) The two LCs are

$$L = \sum_{j=1}^{J} c_j X_j \qquad \text{and} \qquad M = \sum_{j=1}^{J} d_j X_j$$

The covariance $\sigma_{LM}$ between the two variables $L$ and $M$ is given by

$$\sigma_{LM} = \sum_{j=1}^{J}\sum_{k=1}^{J} c_j d_k \sigma_{jk} \tag{3.21}$$

The theorem generalizes to linear combinations of sample scores in the obvious way. For a set of $N$ sample scores on $J$ variables, with linear combination scores computed as $L_i = \sum_{j=1}^{J} c_j x_{ij}$ and $M_i = \sum_{j=1}^{J} d_j x_{ij}$, the covariance of the linear combination scores is given by

$$S_{LM} = \sum_{j=1}^{J}\sum_{k=1}^{J} c_j d_k S_{jk} \tag{3.22}$$

Some comments are in order. First, note that Theorem 3.5 includes the result of Theorem 3.4 as a special case (simply let $L = M$, recalling that the variance of a variable is its covariance with itself). Second, the results of both theorems can be expressed as a heuristic rule that is easy to apply in simple cases.

**Result 3.7 (Heuristic Rules for LCs and LTs)** To compute the variance of a linear combination or transformation, for example

$$L = X + Y,$$

perform the following steps:

1. Express the linear combination or transformation in algebraic form;

$$X + Y$$

2. Square the expression;

$$(X + Y)^2 = X^2 + Y^2 + 2XY$$

3. Transform the result with the following conversion rules:
   (a) Wherever a squared variable appears, replace it with the variance of the variable.
   (b) Wherever the product of two variables appears, replace it with the covariance of the two variables.
   (c) Any expression that does not contain either the square of a variable or the product of two variables is eliminated.

$$X^2 + Y^2 + 2XY \;\longrightarrow\; \sigma^2_X + \sigma^2_Y + 2\sigma_{XY}$$

To calculate the covariance of two linear combinations, replace the first two steps in the heuristic rules as follows:

1. Write the two linear combinations side by side, for example;

$$(X + Y) \qquad (X - 2Y)$$

2. Compute their algebraic product;

$$X^2 - XY - 2Y^2$$

The conversion rule described in step 3 above is then applied, yielding, with the present example,

$$\sigma^2_X - \sigma_{XY} - 2\sigma^2_Y$$

Result 3.7 applies identically to random variables or sets of sample scores. It provides a convenient method for deriving a number of classic results in statistics. A well-known example is the difference between two variables, or, in the sample case, the difference between two columns of numbers.

**Example 3.5 (Variance of Difference Scores)** Suppose you gather pairs of scores $x_i$ and $y_i$ for $N$ people, and compute, for each person, a difference score $d_i = x_i - y_i$. Express the sample variance $S^2_D$ of the difference scores as a function of $S^2_X$, $S^2_Y$, and $S_{XY}$.

*Solution.* First apply the heuristic rule. Squaring $X - Y$, we obtain $X^2 + Y^2 - 2XY$. Applying the conversion rules, the result is

$$S^2_D = S^2_X + S^2_Y - 2S_{XY}$$

**Example 3.6 (Variance of Course Grades)** The data in Table 3.3 give the variances of the $X$ and $Y$ tests. Since the deviation scores are easily calculated, we can calculate the covariance $S_{XY} = \frac{1}{N-1}\sum[dx][dy]$ between the two exams as $-20$. Since the final grades are a linear combination of the exam grades, we can generate a theoretical formula for the variance of the final grades. Since $G = \frac{1}{3}X + \frac{2}{3}Y$, application of the heuristic rule shows that

$$S^2_G = \frac{1}{9}S^2_X + \frac{4}{9}S^2_Y + \frac{4}{9}S_{XY} = \frac{1}{9}(100) + \frac{4}{9}(16) + \frac{4}{9}(-20) = \frac{84}{9} = 9.33$$

The standard deviation of the final grades is thus $\sqrt{9.33} = 3.06$.

One aspect of the data in Table 3.3 is unusual: the scores on the two exams are negatively correlated. In practice, of course, this is unlikely to happen unless the class size is very small and the exams unusual. Normally, scores on two exams in the same course are positively correlated.

Suppose we construct two new columns of numbers, $W$ and $M$: $W$ is the sum of $X$ and $Y$, and $M$ is the difference between $X$ and $Y$. $W$ and $M$ will have a zero correlation. An example of the method is shown in Table 3.4, which has columns $X$, $Y$, $W = X + Y$, and $M = X - Y$. Using any general-purpose statistical program, you can verify that the numbers indeed have a precisely zero correlation. Can you explain why?

**Table 3.4** Constructing Numbers with Zero Correlation

*Solution.* We can develop a general expression for the correlation between the two columns of numbers, using our heuristic rules for linear combinations. Let's begin by computing the covariance between $W$ and $M$. First, we take the algebraic product of $X + Y$ and $X - Y$. We obtain

$$(X + Y)(X - Y) = X^2 - Y^2$$

Applying the conversion rule, we find that

$$S_{X+Y,\,X-Y} = S^2_X - S^2_Y$$

Note that the covariance between $W$ and $M$ is unrelated to the covariance between the original columns of numbers, $X$ and $Y$! Moreover, if $X$ and $Y$ have the same variance, then $W$ and $M$ will have a covariance of exactly zero. Since the correlation is the covariance divided by the product of the standard deviations, it immediately follows that $W$ and $M$ will have a correlation of zero if and only if $X$ and $Y$ have the same variance. If $X$ and $Y$ contain the same numbers (in permuted order), then they have the same variance, and $W$ and $M$ will have zero correlation.

## Problems

**3.1** Suppose you have a set of data in the variable $X$ having a sample mean $\bar{x} = 100$ and a sample standard deviation $S_X = 10$. For each of the following transformed variables, indicate the mean $\bar{y}$ and the standard deviation $S_Y$.

(a) $y_i = 2x_i$
(b) $y_i = x_i$
(c) $y_i = (x_i - \bar{x})/S_X$
(d) $y_i = 2(x_i - 5)/10 + 7$

**3.2** A set of scores in Z-score form is multiplied by 10, and then 5 is added to each number. What will be the mean and standard deviation of the resulting scores?

**3.3** You have two sets of scores, $X$ and $Y$, on the same $N$ individuals, with $\bar{x} = 34.5$, $\bar{y} = 44.9$, $S^2_X = 38.8$, $S^2_Y = 44.4$, and a given value of $S_{XY}$.

(a) Compute the mean and variance of the linear combination scores $w_i = 2x_i - y_i$.
(b) Compute the covariance and correlation between the two linear combinations $a_i = x_i + y_i$ and $b_i = x_i - 2y_i$.

**3.4** The grades in a particular course have a mean of 70 and a standard deviation of 10. However, they are supposed to have a mean of 65 and a standard deviation of 8. You and a friend are the teaching assistants in the course, and are asked to transform the grades. You decide to multiply each grade by 0.8, then add 9 to each grade. You are about to do this when your friend interrupts you, and says that you should first add 11.25 to each score, and then multiply by 0.8. Who is correct?

**3.5** Given random variables $X$ and $Y$, suppose it is known that both random variables have zero means, and that $E(X^2) = 9$, $E(Y^2) = 4$, and $E(XY) = 4$. Find the covariance and correlation between $X$ and $Y$, i.e., $\sigma_{XY}$ and $\rho_{XY}$.


### 3 Random vectors and multivariate normal distribution

3 Random vectors and multivariate normal distribution As we saw in Chapter 1, a natural way to think about repeated measurement data is as a series of random vectors, one vector corresponding to each unit.

### MULTIVARIATE PROBABILITY DISTRIBUTIONS

MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined

### Covariance and Correlation. Consider the joint probability distribution f XY (x, y).

Chapter 5: JOINT PROBABILITY DISTRIBUTIONS Part 2: Section 5-2 Covariance and Correlation Consider the joint probability distribution f XY (x, y). Is there a relationship between X and Y? If so, what kind?

### Exercises with solutions (1)

Exercises with solutions (). Investigate the relationship between independence and correlation. (a) Two random variables X and Y are said to be correlated if and only if their covariance C XY is not equal

### The Bivariate Normal Distribution

The Bivariate Normal Distribution This is Section 4.7 of the st edition (2002) of the book Introduction to Probability, by D. P. Bertsekas and J. N. Tsitsiklis. The material in this section was not included

### Math 315: Linear Algebra Solutions to Midterm Exam I

Math 35: Linear Algebra s to Midterm Exam I # Consider the following two systems of linear equations (I) ax + by = k cx + dy = l (II) ax + by = 0 cx + dy = 0 (a) Prove: If x = x, y = y and x = x 2, y =

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### UNIT 2 MATRICES - I 2.0 INTRODUCTION. Structure

UNIT 2 MATRICES - I Matrices - I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress

### Multiple regression - Matrices

Multiple regression - Matrices This handout will present various matrices which are substantively interesting and/or provide useful means of summarizing the data for analytical purposes. As we will see,

### Math Review. for the Quantitative Reasoning Measure of the GRE revised General Test

Math Review for the Quantitative Reasoning Measure of the GRE revised General Test www.ets.org Overview This Math Review will familiarize you with the mathematical skills and concepts that are important

### Numerical Summarization of Data OPRE 6301

Numerical Summarization of Data OPRE 6301 Motivation... In the previous session, we used graphical techniques to describe data. For example: While this histogram provides useful insight, other interesting

### Data reduction and descriptive statistics

Data reduction and descriptive statistics dr. Reinout Heijungs Department of Econometrics and Operations Research VU University Amsterdam August 2014 1 Introduction Economics, marketing, finance, and most

### 1 Gaussian Elimination

Contents 1 Gaussian Elimination 1.1 Elementary Row Operations 1.2 Some matrices whose associated system of equations are easy to solve 1.3 Gaussian Elimination 1.4 Gauss-Jordan reduction and the Reduced

### ECE302 Spring 2006 HW7 Solutions March 11, 2006 1

ECE32 Spring 26 HW7 Solutions March, 26 Solutions to HW7 Note: Most of these solutions were generated by R. D. Yates and D. J. Goodman, the authors of our textbook. I have added comments in italics where

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 4. Gaussian Elimination In this part, our focus will be on the most basic method for solving linear algebraic systems, known as Gaussian Elimination in honor

### Summary of Formulas and Concepts. Descriptive Statistics (Ch. 1-4)

Summary of Formulas and Concepts Descriptive Statistics (Ch. 1-4) Definitions Population: The complete set of numerical information on a particular quantity in which an investigator is interested. We assume

### Statistical Foundations: Measures of Location and Central Tendency and Summation and Expectation

Statistical Foundations: and Central Tendency and and Lecture 4 September 5, 2006 Psychology 790 Lecture #4-9/05/2006 Slide 1 of 26 Today s Lecture Today s Lecture Where this Fits central tendency/location

### Regression Analysis Prof. Soumen Maity Department of Mathematics Indian Institute of Technology, Kharagpur. Lecture - 2 Simple Linear Regression

Regression Analysis Prof. Soumen Maity Department of Mathematics Indian Institute of Technology, Kharagpur Lecture - 2 Simple Linear Regression Hi, this is my second lecture in module one and on simple

0.7 Quadratic Equations 8 0.7 Quadratic Equations In Section 0..1, we reviewed how to solve basic non-linear equations by factoring. The astute reader should have noticed that all of the equations in that

### CALCULATIONS & STATISTICS

CALCULATIONS & STATISTICS CALCULATION OF SCORES Conversion of 1-5 scale to 0-100 scores When you look at your report, you will notice that the scores are reported on a 0-100 scale, even though respondents

### MATH Fundamental Mathematics II.

MATH 10032 Fundamental Mathematics II http://www.math.kent.edu/ebooks/10032/fun-math-2.pdf Department of Mathematical Sciences Kent State University December 29, 2008 2 Contents 1 Fundamental Mathematics

### 2.5 Gaussian Elimination

page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3

Expectations Expectations. (See also Hays, Appendix B; Harnett, ch. 3). A. The expected value of a random variable is the arithmetic mean of that variable, i.e. E() = µ. As Hays notes, the idea of the

### Chapters 5. Multivariate Probability Distributions

Chapters 5. Multivariate Probability Distributions Random vectors are collection of random variables defined on the same sample space. Whenever a collection of random variables are mentioned, they are

### The basic unit in matrix algebra is a matrix, generally expressed as: a 11 a 12. a 13 A = a 21 a 22 a 23

(copyright by Scott M Lynch, February 2003) Brief Matrix Algebra Review (Soc 504) Matrix algebra is a form of mathematics that allows compact notation for, and mathematical manipulation of, high-dimensional

### Introduction to Matrix Algebra I

Appendix A Introduction to Matrix Algebra I Today we will begin the course with a discussion of matrix algebra. Why are we studying this? We will use matrix algebra to derive the linear regression model

### ( % . This matrix consists of \$ 4 5 " 5' the coefficients of the variables as they appear in the original system. The augmented 3 " 2 2 # 2 " 3 4&

Matrices define matrix We will use matrices to help us solve systems of equations. A matrix is a rectangular array of numbers enclosed in parentheses or brackets. In linear algebra, matrices are important

### Definition The covariance of X and Y, denoted by cov(x, Y ) is defined by. cov(x, Y ) = E(X µ 1 )(Y µ 2 ).

Correlation Regression Bivariate Normal Suppose that X and Y are r.v. s with joint density f(x y) and suppose that the means of X and Y are respectively µ 1 µ 2 and the variances are 1 2. Definition The

### THREE DIMENSIONAL GEOMETRY

Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

### 1.5. Factorisation. Introduction. Prerequisites. Learning Outcomes. Learning Style

Factorisation 1.5 Introduction In Block 4 we showed the way in which brackets were removed from algebraic expressions. Factorisation, which can be considered as the reverse of this process, is dealt with

### Probability & Statistics Primer Gregory J. Hakim University of Washington 2 January 2009 v2.0

Probability & Statistics Primer Gregory J. Hakim University of Washington 2 January 2009 v2.0 This primer provides an overview of basic concepts and definitions in probability and statistics. We shall

### Solving Sets of Equations. 150 B.C.E., 九章算術 Carl Friedrich Gauss,

Solving Sets of Equations 5 B.C.E., 九章算術 Carl Friedrich Gauss, 777-855 Gaussian-Jordan Elimination In Gauss-Jordan elimination, matrix is reduced to diagonal rather than triangular form Row combinations

### Basics of Polynomial Theory

3 Basics of Polynomial Theory 3.1 Polynomial Equations In geodesy and geoinformatics, most observations are related to unknowns parameters through equations of algebraic (polynomial) type. In cases where

### Chapter 8. Matrices II: inverses. 8.1 What is an inverse?

Chapter 8 Matrices II: inverses We have learnt how to add subtract and multiply matrices but we have not defined division. The reason is that in general it cannot always be defined. In this chapter, we

### Inferential Statistics

Inferential Statistics Sampling and the normal distribution Z-scores Confidence levels and intervals Hypothesis testing Commonly used statistical methods Inferential Statistics Descriptive statistics are

### Summarizing Data: Measures of Variation

Summarizing Data: Measures of Variation One aspect of most sets of data is that the values are not all alike; indeed, the extent to which they are unalike, or vary among themselves, is of basic importance

### Elementary Number Theory We begin with a bit of elementary number theory, which is concerned

CONSTRUCTION OF THE FINITE FIELDS Z p S. R. DOTY Elementary Number Theory We begin with a bit of elementary number theory, which is concerned solely with questions about the set of integers Z = {0, ±1,

### MATH REVIEW KIT. Reproduced with permission of the Certified General Accountant Association of Canada.

MATH REVIEW KIT Reproduced with permission of the Certified General Accountant Association of Canada. Copyright 00 by the Certified General Accountant Association of Canada and the UBC Real Estate Division.

### Module 3: Correlation and Covariance

Using Statistical Data to Make Decisions Module 3: Correlation and Covariance Tom Ilvento Dr. Mugdim Pašiƒ University of Delaware Sarajevo Graduate School of Business O ften our interest in data analysis

### Lecture 16: Expected value, variance, independence and Chebyshev inequality

Lecture 16: Expected value, variance, independence and Chebyshev inequality Expected value, variance, and Chebyshev inequality. If X is a random variable recall that the expected value of X, E[X] is the

### Variances and covariances

Chapter 4 Variances and covariances 4.1 Overview The expected value of a random variable gives a crude measure for the center of location of the distribution of that random variable. For instance, if the

### a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

### 2. INEQUALITIES AND ABSOLUTE VALUES

2. INEQUALITIES AND ABSOLUTE VALUES 2.1. The Ordering of the Real Numbers In addition to the arithmetic structure of the real numbers there is the order structure. The real numbers can be represented by

### Factoring, Solving. Equations, and Problem Solving REVISED PAGES

05-W4801-AM1.qxd 8/19/08 8:45 PM Page 241 Factoring, Solving Equations, and Problem Solving 5 5.1 Factoring by Using the Distributive Property 5.2 Factoring the Difference of Two Squares 5.3 Factoring

### Basics Inversion and related concepts Random vectors Matrix calculus. Matrix algebra. Patrick Breheny. January 20

Matrix algebra January 20 Introduction Basics The mathematics of multiple regression revolves around ordering and keeping track of large arrays of numbers and solving systems of equations The mathematical

### Dimensionality Reduction: Principal Components Analysis

Dimensionality Reduction: Principal Components Analysis In data mining one often encounters situations where there are a large number of variables in the database. In such situations it is very likely

### Joint Probability Distributions and Random Samples. Week 5, 2011 Stat 4570/5570 Material from Devore s book (Ed 8), and Cengage

5 Joint Probability Distributions and Random Samples Week 5, 2011 Stat 4570/5570 Material from Devore s book (Ed 8), and Cengage Two Discrete Random Variables The probability mass function (pmf) of a single

### Systems of Linear Equations

Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

### Normal distribution. ) 2 /2σ. 2π σ

Normal distribution The normal distribution is the most widely known and used of all distributions. Because the normal distribution approximates many natural phenomena so well, it has developed into a

### Mathematics of Cryptography

CHAPTER 2 Mathematics of Cryptography Part I: Modular Arithmetic, Congruence, and Matrices Objectives This chapter is intended to prepare the reader for the next few chapters in cryptography. The chapter

### Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

### 7 - Linear Transformations

7 - Linear Transformations Mathematics has as its objects of study sets with various structures. These sets include sets of numbers (such as the integers, rationals, reals, and complexes) whose structure

### Grade Level Year Total Points Core Points % At Standard 9 2003 10 5 7 %

Performance Assessment Task Number Towers Grade 9 The task challenges a student to demonstrate understanding of the concepts of algebraic properties and representations. A student must make sense of the

### Equations and Inequalities

Rational Equations Overview of Objectives, students should be able to: 1. Solve rational equations with variables in the denominators.. Recognize identities, conditional equations, and inconsistent equations.

### We have discussed the notion of probabilistic dependence above and indicated that dependence is

1 CHAPTER 7 Online Supplement Covariance and Correlation for Measuring Dependence We have discussed the notion of probabilistic dependence above and indicated that dependence is defined in terms of conditional

### Correlation key concepts:

CORRELATION Correlation key concepts: Types of correlation Methods of studying correlation a) Scatter diagram b) Karl pearson s coefficient of correlation c) Spearman s Rank correlation coefficient d)

### Random Vectors and the Variance Covariance Matrix

Random Vectors and the Variance Covariance Matrix Definition 1. A random vector X is a vector (X 1, X 2,..., X p ) of jointly distributed random variables. As is customary in linear algebra, we will write

### a 1 x + a 0 =0. (3) ax 2 + bx + c =0. (4)

ROOTS OF POLYNOMIAL EQUATIONS In this unit we discuss polynomial equations. A polynomial in x of degree n, where n 0 is an integer, is an expression of the form P n (x) =a n x n + a n 1 x n 1 + + a 1 x

### 9.2 Summation Notation

9. Summation Notation 66 9. Summation Notation In the previous section, we introduced sequences and now we shall present notation and theorems concerning the sum of terms of a sequence. We begin with a

### Some probability and statistics

Appendix A Some probability and statistics A Probabilities, random variables and their distribution We summarize a few of the basic concepts of random variables, usually denoted by capital letters, X,Y,

### Notes for STA 437/1005 Methods for Multivariate Data

Notes for STA 437/1005 Methods for Multivariate Data Radford M. Neal, 26 November 2010 Random Vectors Notation: Let X be a random vector with p elements, so that X = [X 1,..., X p ], where denotes transpose.

### Notes 5: More on regression and residuals ECO 231W - Undergraduate Econometrics

Notes 5: More on regression and residuals ECO 231W - Undergraduate Econometrics Prof. Carolina Caetano 1 Regression Method Let s review the method to calculate the regression line: 1. Find the point of

### Introduction to Matrix Algebra

Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

### 6.4 Normal Distribution

Contents 6.4 Normal Distribution....................... 381 6.4.1 Characteristics of the Normal Distribution....... 381 6.4.2 The Standardized Normal Distribution......... 385 6.4.3 Meaning of Areas under

### Algebraic expressions are a combination of numbers and variables. Here are examples of some basic algebraic expressions.

Page 1 of 13 Review of Linear Expressions and Equations Skills involving linear equations can be divided into the following groups: Simplifying algebraic expressions. Linear expressions. Solving linear

### 1 Orthogonal projections and the approximation

Math 1512 Fall 2010 Notes on least squares approximation Given n data points (x 1, y 1 ),..., (x n, y n ), we would like to find the line L, with an equation of the form y = mx + b, which is the best fit

### Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

### Basic Statistcs Formula Sheet

Basic Statistcs Formula Sheet Steven W. ydick May 5, 0 This document is only intended to review basic concepts/formulas from an introduction to statistics course. Only mean-based procedures are reviewed,

### NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

### Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. 3x 1 7x 2 +4x 3 =0 5x 1 +8x 2 12x 3 =0.

Basic Terminology for Systems of Equations in a Nutshell E L Lady A system of linear equations is something like the following: x 7x +4x =0 5x +8x x = Note that the number of equations is not required

### Linear Systems. Singular and Nonsingular Matrices. Find x 1, x 2, x 3 such that the following three equations hold:

Linear Systems Example: Find x, x, x such that the following three equations hold: x + x + x = 4x + x + x = x + x + x = 6 We can write this using matrix-vector notation as 4 {{ A x x x {{ x = 6 {{ b General

### 5. Linear Regression

5. Linear Regression Outline.................................................................... 2 Simple linear regression 3 Linear model............................................................. 4

### Math 141. Lecture 7: Variance, Covariance, and Sums. Albyn Jones 1. 1 Library 304. jones/courses/141

Math 141 Lecture 7: Variance, Covariance, and Sums Albyn Jones 1 1 Library 304 jones@reed.edu www.people.reed.edu/ jones/courses/141 Last Time Variance: expected squared deviation from the mean: Standard

### Simultaneous Equations

Paper Reference(s) 666/01 Edexcel GCE Core Mathematics C1 Advanced Subsidiary Simultaneous Equations Calculators may NOT be used for these questions. Information for Candidates A booklet Mathematical Formulae

### Algebra Tiles Activity 1: Adding Integers

Algebra Tiles Activity 1: Adding Integers NY Standards: 7/8.PS.6,7; 7/8.CN.1; 7/8.R.1; 7.N.13 We are going to use positive (yellow) and negative (red) tiles to discover the rules for adding and subtracting

### ST 371 (VIII): Theory of Joint Distributions

ST 371 (VIII): Theory of Joint Distributions So far we have focused on probability distributions for single random variables. However, we are often interested in probability statements concerning two or

### UNIT TWO POLYNOMIALS MATH 421A 22 HOURS. Revised May 2, 00

UNIT TWO POLYNOMIALS MATH 421A 22 HOURS Revised May 2, 00 38 UNIT 2: POLYNOMIALS Previous Knowledge: With the implementation of APEF Mathematics at the intermediate level, students should be able to: -

### Vector and Matrix Norms

Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

### Solving Equations with One Variable Type - The Algebraic Approach. Solving Equations with a Variable in the Denominator - The Algebraic Approach

3 Solving Equations Concepts: Number Lines The Definitions of Absolute Value Equivalent Equations Solving Equations with One Variable Type - The Algebraic Approach Solving Equations with a Variable in

### Diagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions

Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential

### Lies My Calculator and Computer Told Me

Lies My Calculator and Computer Told Me 2 LIES MY CALCULATOR AND COMPUTER TOLD ME Lies My Calculator and Computer Told Me See Section.4 for a discussion of graphing calculators and computers with graphing

### Algebra I Vocabulary Cards

Algebra I Vocabulary Cards Table of Contents Expressions and Operations Natural Numbers Whole Numbers Integers Rational Numbers Irrational Numbers Real Numbers Absolute Value Order of Operations Expression

### Data. ECON 251 Research Methods. 1. Data and Descriptive Statistics (Review) Cross-Sectional and Time-Series Data. Population vs.

ECO 51 Research Methods 1. Data and Descriptive Statistics (Review) Data A variable - a characteristic of population or sample that is of interest for us. Data - the actual values of variables Quantitative

### 4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then