Projections and Regressions
Simple example - no intercept term

Start with a standard cost estimation problem from cost accounting. Suppose we have collected the following data on direct labor hours and manufacturing overhead. The problem is to estimate a linear relationship to predict the cost of manufacturing overhead given the number of direct labor hours worked.

Period | Manufacturing overhead dollars (MOH) | Direct labor hours (DLH)

Usually in a problem like this, we would allow for a fixed cost, or intercept, component. But to start with here, assume there is no fixed cost; we wish only to find the variable cost of a direct labor hour, call it α. The cost equation we wish to estimate, then, will be MOH = α DLH, where MOH and DLH are the data vectors from the table, and we are looking for an α. The nice thing about setting the problem up this way (without a fixed cost component) is that it can be solved using the techniques in the separating hyperplane note. The MOH vector is comparable to the x vector from before, so we are looking for the nearest point on the
extended DLH vector to MOH. As before, let λ be the difference vector between MOH and the extended DLH, so λ = α DLH - MOH. As before, we wish to minimize the length (λ^T λ) of the λ vector. And, also as before, we can write down the orthogonality condition which ensures λ^T λ is minimized:

DLH^T λ = DLH^T (α DLH - MOH) = 0.

Solving for α:

α = (DLH^T DLH)^{-1} DLH^T MOH.

And we can also get the difference vector λ and its length λ^T λ:

λ = α DLH - MOH.
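To make the calculation concrete, here is a minimal numpy sketch of the no-intercept projection. The five-period data set is hypothetical (it stands in for the table above), so the α and λ it prints are illustrative rather than the note's numbers.

```python
import numpy as np

# Hypothetical five-period data set standing in for the note's table.
DLH = np.array([2.0, 4.0, 3.0, 5.0, 6.0])      # direct labor hours
MOH = np.array([11.0, 19.0, 16.0, 24.0, 30.0]) # manufacturing overhead

# Variable cost per direct labor hour from the orthogonality condition:
# alpha = (DLH'DLH)^{-1} DLH'MOH
alpha = (DLH @ MOH) / (DLH @ DLH)

# Difference vector and its squared length
lam = alpha * DLH - MOH
print(alpha, lam @ lam)

# The orthogonality condition itself: DLH'lambda is (numerically) zero.
print(DLH @ lam)
```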
The point of the exercise is that, using nothing more than the orthogonality condition, we have effectively solved the estimation problem by projecting the MOH vector into the extended DLH vector. Or, to use alternative terminology, we have regressed MOH on DLH. It is useful at this stage to compare our calculations with a standard computer regression package. The following is output from Excel's regression command. For this regression run the intercept was suppressed, which can be seen in the output: the coefficient on the intercept is zero.
SUMMARY OUTPUT (MOH regressed on DLH, intercept suppressed)

Regression Statistics: Multiple R, R Square, Adjusted R Square, Standard Error; Observations = 5

ANOVA table: df, SS, MS, F, and Significance F for the Regression, Residual, and Total rows

Coefficients table: coefficient, standard error, t stat, P-value, and 95% bounds for each term; the Intercept row shows a coefficient of 0 with #N/A for all of its statistics, and the X Variable row shows the estimated slope

We notice immediately that the coefficient on the x variable (DLH) is equal to what we calculated for α. In other words, the weight on the independent variable is equal to where we position the projection on the DLH vector. Furthermore, the length of the difference vector, λ^T λ, shows up in the regression output as the residual sum of squares.
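The same comparison can be made against a library least-squares routine instead of Excel. A sketch with the hypothetical data from above: numpy's lstsq, given a one-column design matrix, returns both the slope and the residual sum of squares, which match α and λ^T λ.

```python
import numpy as np

DLH = np.array([2.0, 4.0, 3.0, 5.0, 6.0])      # hypothetical data, as above
MOH = np.array([11.0, 19.0, 16.0, 24.0, 30.0])

# Suppressing the intercept means the design matrix is just the DLH column.
coef, res_ss, rank, _ = np.linalg.lstsq(DLH[:, None], MOH, rcond=None)

print(coef[0])    # the X Variable coefficient: equals alpha
print(res_ss[0])  # the residual sum of squares: equals lambda'lambda
```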
Several of the other numbers in the output can be calculated directly from these two. The standard error of the regression is a measure of how much uncertainty remains in MOH after using the information in DLH. It is calculated as the square root of the residual sum of squares divided by its degrees of freedom. (Here the degrees of freedom is four, since there are five data points less one degree of freedom used for the independent variable, DLH.) The standard error of the estimate is a measure of dispersion of the estimate α, that is, how confident we are in the estimate. We can use our expression for α to write down the variance of α:

Var α = Var [(DLH^T DLH)^{-1} DLH^T MOH].

We already have an estimate for the variance of MOH: the standard error of the regression. Using the definition of variance, the coefficient of the random variable MOH is brought out in front of the variance operator and squared:

Var α = (DLH^T DLH)^{-2} DLH^T DLH Var MOH = (DLH^T DLH)^{-1} Var MOH.

So the standard error of the estimate is the standard error of the regression multiplied by the square root of (DLH^T DLH)^{-1}. R-squared is a measure of the power of DLH to explain MOH; it compares the amount of the variation in MOH, itself, with the amount of variation in MOH remaining when DLH is used
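In code, both standard errors follow directly from λ^T λ. A sketch, again with the hypothetical data:

```python
import numpy as np

DLH = np.array([2.0, 4.0, 3.0, 5.0, 6.0])
MOH = np.array([11.0, 19.0, 16.0, 24.0, 30.0])

alpha = (DLH @ MOH) / (DLH @ DLH)
lam = alpha * DLH - MOH

# Standard error of the regression: residual SS over its degrees of
# freedom (n - 1 = 4 here, one parameter estimated), then the square root.
n = len(MOH)
se_reg = np.sqrt((lam @ lam) / (n - 1))

# Standard error of the estimate: Var(alpha) = (DLH'DLH)^{-1} Var(MOH).
se_alpha = se_reg * np.sqrt(1.0 / (DLH @ DLH))
print(se_reg, se_alpha)
```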
for prediction. The latter number is measured by the length of the difference vector, λ. The unconditional variation in MOH is the sum of the squared deviations of MOH from its mean, which can be computed as

Σ (MOH_i - mean(MOH))^2 = MOH^T MOH - n (mean(MOH))^2.

The unconditional variation in MOH is reported as the total sum of squares. To normalize R-squared between zero and one, it is reported as one minus the ratio of residual to total sums of squares:

R^2 = 1 - (Res SS)/(Tot SS).

One more thing: if we are interested in the predicted MOH numbers given the DLH data (usually denoted MOH-hat), we can multiply the DLH vector by the computed α:

MOH-hat = DLH (DLH^T DLH)^{-1} (DLH^T MOH) = DLH α.

The predicted MOH numbers are referred to as the projection of MOH on DLH.

Same example - intercept term included
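A sketch of the R-squared and prediction calculations, following the note's definition of the total sum of squares as deviations from the mean (some packages instead use the uncentered sum MOH^T MOH when the intercept is suppressed):

```python
import numpy as np

DLH = np.array([2.0, 4.0, 3.0, 5.0, 6.0])
MOH = np.array([11.0, 19.0, 16.0, 24.0, 30.0])

alpha = (DLH @ MOH) / (DLH @ DLH)
lam = alpha * DLH - MOH

# Total SS via the identity  sum((MOH - mean)^2) = MOH'MOH - n*mean^2
n = len(MOH)
tot_ss = MOH @ MOH - n * MOH.mean() ** 2
r_squared = 1.0 - (lam @ lam) / tot_ss

# Predicted values: the projection of MOH on DLH.
MOH_hat = alpha * DLH
print(r_squared, MOH_hat)
```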
Now consider the problem the way it is usually presented in a cost accounting setting: allow for the existence of a fixed cost component in MOH. This will allow a better fit of the data, and we would expect better goodness of fit measures, such as R^2. Even though the calculations are slightly more complicated, the basic idea remains the same, and the sequence of events is identical. We have one more vector to project the MOH numbers into (to allow for the intercept), but we start with the same orthogonality condition. Orthogonality allows us to calculate the weights on the two vectors we are projecting into (the regression coefficients), and then calculate the length of the difference vector, λ^T λ. Various goodness of fit statistics can be calculated as before, paying some attention to degrees of freedom issues.

Period | Manufacturing overhead dollars (MOH) | Fixed Cost | Direct labor hours (DLH)

In the preceding table, a column has been added for the fixed cost term: a column of ones. The fixed cost is the same each period, so what we need to do is find a weight to attach to the vector of ones; that will be the fixed cost number. The variable cost is found as before: the weight on the DLH vector. The orthogonality condition is the same as before, except that we have two vectors that must be orthogonal to the difference vector.
[1  DLH]^T [α_1 1 + α_2 DLH - MOH] = 0,

where 1 denotes the fixed cost vector of ones. The difference vector, λ, is the second term in square brackets. It consists of a weighted combination of the fixed cost and DLH vectors, using weights α_1 and α_2, and then the MOH vector subtracted out. The difference vector must be orthogonal to both the fixed cost vector and DLH, and that, of course, is what the equation says, since there is a zero vector on the right hand side. After doing the arithmetic, the above matrix equation is reduced to the following system of two equations in two unknowns:

5 α_1 + 31 α_2 = 1^T MOH
31 α_1 + (DLH^T DLH) α_2 = DLH^T MOH

Solving two linear equations for two unknowns is a relatively simple matter, and so we have the regression weights α_1 and α_2. And we can substitute back in to find λ.
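A sketch of the two-equation solve; with the hypothetical data the normal-equation coefficients differ from the note's 5 and 31, but the construction is identical:

```python
import numpy as np

DLH = np.array([2.0, 4.0, 3.0, 5.0, 6.0])
MOH = np.array([11.0, 19.0, 16.0, 24.0, 30.0])
ones = np.ones_like(DLH)   # the fixed cost vector

# Normal equations: both basis vectors must be orthogonal to lambda.
#   (1'1)   a1 + (1'DLH)   a2 = 1'MOH
#   (DLH'1) a1 + (DLH'DLH) a2 = DLH'MOH
A = np.array([[ones @ ones, ones @ DLH],
              [DLH @ ones,  DLH @ DLH]])
b = np.array([ones @ MOH, DLH @ MOH])

a1, a2 = np.linalg.solve(A, b)   # fixed cost and variable cost weights
lam = a1 * ones + a2 * DLH - MOH
print(a1, a2, lam @ lam)
```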
λ = [α_1 1 + α_2 DLH] - MOH.

The length of the difference vector is λ^T λ.

The preceding discussion can be simplified by using matrix notation. Let X be a matrix whose columns are the vector of ones (for the fixed cost) and the DLH vector. Similarly, let β be a vector whose elements are α_1 and α_2:

X = [1  DLH],    β = [α_1, α_2]^T.

Now the orthogonality condition can be written as X^T (Xβ - MOH) = 0. Similarly, the difference vector and the regression weights are written as follows:

λ = Xβ - MOH
β = (X^T X)^{-1} X^T MOH

Notice these are the same expressions as for the previous - no intercept - case except that the DLH vector is replaced by the X matrix, and the scalar α is now the vector β. We should note that X^T X is now a matrix and (X^T X)^{-1} is now a matrix inverse instead of just the reciprocal of a
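The matrix form is one line of numpy. A sketch (np.linalg.inv mirrors the formula as written; np.linalg.solve or lstsq would be the numerically preferred route):

```python
import numpy as np

DLH = np.array([2.0, 4.0, 3.0, 5.0, 6.0])
MOH = np.array([11.0, 19.0, 16.0, 24.0, 30.0])

# X stacks the vector of ones and DLH as columns.
X = np.column_stack([np.ones_like(DLH), DLH])

# beta = (X'X)^{-1} X'MOH -- the same expression as before, with the
# matrix X in place of the single DLH vector.
beta = np.linalg.inv(X.T @ X) @ X.T @ MOH
lam = X @ beta - MOH
print(beta, lam @ lam)
```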
scalar. The two expressions are presented for completeness and because they are used in the computer regression output:

X^T X = [5  31; 31  DLH^T DLH]

with (X^T X)^{-1} computed as an ordinary 2x2 matrix inverse.

SUMMARY OUTPUT (MOH regressed on the fixed cost column and DLH)

Regression Statistics: Multiple R, R Square, Adjusted R Square, Standard Error; Observations = 5

ANOVA table: df, SS, MS, F, and Significance F for the Regression, Residual, and Total rows

Coefficients table: coefficient, standard error, t stat, P-value, and 95% bounds for the Intercept and X Variable rows

Now there are two regression coefficients, and we calculated those above. The standard error of the regression is the square root of λ^T λ divided by the degrees of freedom; notice there are now only three degrees of freedom, since two are being used for the parameters of the regression.
Std err regression = sqrt(λ^T λ / 3).

Recall that in the previous regression the standard error of the coefficient was the regression standard error times the square root of (DLH^T DLH)^{-1}. Now (X^T X)^{-1} is a matrix, and its diagonal elements are used in the calculation of the coefficient standard errors:

Std err intercept = (Std err regression) × sqrt[(X^T X)^{-1}]_{11}
Std err variable = (Std err regression) × sqrt[(X^T X)^{-1}]_{22}

There is one more thing worth noting before concluding this section: the elements of λ sum to zero. This is sometimes referred to as the zero mean error property. It holds here since λ is constrained to be orthogonal to the fixed cost vector of all ones; multiplying λ by a vector of ones merely sums the elements in λ, hence, they must sum to zero. It is of interest to us that the logic works the other way, as well. That is, when the elements of a vector sum to zero, the vector must be orthogonal to a vector of ones. In the accounting problem we deal with an x vector which sums to zero; every column of the A matrix does, as well. It turns out to be useful on occasion that x and the columns of A are all orthogonal to a vector of ones. It is, in fact, one way to state the nature of the double entry system: the balances in the accounts (and any individual journal entry) are orthogonal to the same (unit) vector.
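A sketch of the coefficient standard errors and the zero mean error check, continuing with the hypothetical data:

```python
import numpy as np

DLH = np.array([2.0, 4.0, 3.0, 5.0, 6.0])
MOH = np.array([11.0, 19.0, 16.0, 24.0, 30.0])
X = np.column_stack([np.ones_like(DLH), DLH])

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ MOH
lam = X @ beta - MOH

# Three degrees of freedom: five observations less two parameters.
se_reg = np.sqrt((lam @ lam) / (len(MOH) - 2))

# Coefficient standard errors from the diagonal of (X'X)^{-1}.
se_intercept, se_variable = se_reg * np.sqrt(np.diag(XtX_inv))
print(se_intercept, se_variable)

# Zero mean error property: lambda is orthogonal to the ones column.
print(lam.sum())   # (numerically) zero
```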
Projecting into the nullspace

In this section we briefly present an alternative way to calculate the difference vector λ. It relies on the fact that λ is orthogonal to the columns of X. Therefore, if we took vectors orthogonal to all the columns in X, and projected MOH into them, we would reproduce λ. The logic is the same as the simple two dimensional example in the nearest distance note.

[Figure: the point (a, b), its projection (0, b) onto the vertical axis, and the difference vector (a, 0) along the horizontal axis.]

If we are interested in the difference vector between the vector (a, b) and the nearest point on the vertical axis, we could project (a, b) into the vertical axis and then subtract to get the difference vector: (a, b) - (0, b) = (a, 0). Alternatively, we could find the difference vector by projecting directly into the horizontal axis: (a, 0). Either technique works, since the vertical and horizontal axes are orthogonal. For the problem at hand, three vectors, denoted as the columns of N, are chosen orthogonal to the columns of X. It is easy to verify that the inner product of any column of N with any column of X is zero. Because they multiply to zero, the matrix N is sometimes referred to as a nullspace matrix relative to X.
The projection of MOH into N is accomplished using the relationships from above:

N (N^T N)^{-1} N^T MOH = -λ.

The sign is different from λ, since this is the vector to be subtracted; see the picture. In this problem there isn't any advantage to using the nullspace. However, in the accounting problem there are three things which make the nullspace approach a good way to go.

1. The number of vectors in the nullspace tends to be small.
2. The nullspace vectors are easy to work with since they are already orthogonal to each other.
3. It is easy to see by inspection what the nullspace vectors are. (Here we take advantage of the fact that in double entry, the vector of ones is orthogonal to the columns of A.)
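A sketch of the nullspace route, using scipy's null_space to produce columns orthogonal to those of X. The note's N is written down by inspection; scipy instead returns an orthonormal basis, but any basis of the same space gives the same projection:

```python
import numpy as np
from scipy.linalg import null_space

DLH = np.array([2.0, 4.0, 3.0, 5.0, 6.0])
MOH = np.array([11.0, 19.0, 16.0, 24.0, 30.0])
X = np.column_stack([np.ones_like(DLH), DLH])

# Columns of N span everything orthogonal to the columns of X:
# three vectors, since X has two columns in a five-dimensional space.
N = null_space(X.T)

# Project MOH into N; the result is the difference vector with the
# sign flipped: N(N'N)^{-1}N'MOH = MOH - X beta = -lambda.
proj_N = N @ np.linalg.inv(N.T @ N) @ N.T @ MOH

beta = np.linalg.inv(X.T @ X) @ X.T @ MOH
lam = X @ beta - MOH
print(np.allclose(proj_N, -lam))   # True
```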