Estimation of subparameters by IPM method
SAÜ Fen Bil Der (Sakarya University Journal of Science)

Estimation of subparameters by IPM method

Melek Eriş*, Nesrin Güler**

ABSTRACT

In this study a general partitioned linear model M = {y, Xβ, V} = {y, X_1β_1 + X_2β_2, V} is considered in order to determine the best linear unbiased estimators (BLUEs) of the subparameters X_1β_1 and X_2β_2. Some results related to the BLUEs of the subparameters are given by using the inverse partitioned matrix (IPM) method, based on a generalized inverse of a symmetric block partitioned matrix which is obtained from the fundamental BLUE equation.

Keywords: BLUE, generalized inverse, general partitioned linear model

* Corresponding Author: Karadeniz Teknik Üniversitesi, Fen Fakültesi, İstatistik ve Bilgisayar Bilimleri Bölümü, Trabzon - melekeris@ktu.edu.tr
** Sakarya Üniversitesi, Fen Edebiyat Fakültesi, İstatistik Bölümü, Sakarya - nesring@sakarya.edu.tr
1. INTRODUCTION

Consider the general partitioned linear model

    y = Xβ + ε = X_1β_1 + X_2β_2 + ε,    (1)

where y ∈ R^{n×1} is an observable random vector, X = (X_1 : X_2) ∈ R^{n×p} is a known matrix with X_1 ∈ R^{n×p_1} and X_2 ∈ R^{n×p_2}, β = (β_1' : β_2')' ∈ R^{p×1} is a vector of unknown parameters with β_1 ∈ R^{p_1×1} and β_2 ∈ R^{p_2×1}, and ε ∈ R^{n×1} is a random error vector. Further, the expectation is E(y) = Xβ, and the covariance matrix Cov(y) = V ∈ R^{n×n} is a known nonnegative definite matrix. We may denote the model (1) as a triplet

    M = {y, Xβ, V} = {y, X_1β_1 + X_2β_2, V}.    (2)

Partitioned linear models are used in the estimation of partial parameters in regression models as well as in the investigation of submodels and reduced models associated with the original model. In this study we consider the general partitioned linear model M and deal with the best linear unbiased estimators (BLUEs) of its subparameters. Our main purpose is to obtain the BLUEs of the subparameters X_1β_1 and X_2β_2 under M by using the inverse partitioned matrix (IPM) method, which was introduced by Rao [1] for statistical inference in general linear models. We also investigate some consequences for the BLUEs of subparameters obtained by the IPM approach.

BLUEs under linear models have been investigated by many statisticians, and some valuable properties of the BLUE have been obtained in, e.g., [2-6]. By applying the matrix rank method, some characterizations of the BLUE have been given by Tian [7,8]. The IPM method for the general linear model with linear restrictions has been considered by Baksalary and Pordzik [9].

2. PRELIMINARIES

The BLUE of Xβ under M, denoted BLUE(Xβ), is defined to be an unbiased linear estimator Gy such that its covariance matrix Cov(Gy) is minimal, in the Löwner sense, among the covariance matrices Cov(Fy) of all linear estimators Fy that are unbiased for Xβ. It is well known, see e.g. [10], that Gy = BLUE(Xβ) if and only if G satisfies the fundamental BLUE equation

    G(X : VQ) = (X : 0),    (3)

where Q = I_n − P_X and P_X is the orthogonal projector onto the column space C(X).
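As a quick numerical illustration of the fundamental equation (3) (a sketch added here, not part of the original paper): in the special case V = I_n, the orthogonal projector P_X itself satisfies G(X : VQ) = (X : 0), i.e., the OLS fitted values are the BLUE of Xβ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
X = rng.standard_normal((n, p))   # generic design matrix (full column rank a.s.)

P = X @ np.linalg.pinv(X)         # P_X, orthogonal projector onto C(X)
Q = np.eye(n) - P                 # Q = I_n - P_X
V = np.eye(n)                     # special case V = I_n

# Fundamental BLUE equation (3): G (X : VQ) = (X : 0) with G = P_X
lhs = P @ np.hstack([X, V @ Q])
rhs = np.hstack([X, np.zeros((n, n))])
print(np.allclose(lhs, rhs))      # True: the OLS fit is the BLUE when V = I_n
```

For a singular or non-identity V the projector is in general no longer a solution of (3); handling that general case is exactly what the IPM machinery of this paper is for.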
Note that equation (3) has a unique solution if and only if rank(X : V) = n, and the observed value of Gy is unique with probability 1 if and only if M is consistent, i.e., y ∈ C(X : V) = C(X : VQ) holds with probability 1; see [12]. Throughout this study it is assumed that the model M is consistent. The corresponding condition for Ay to be the BLUE of an estimable parametric function Kβ is A(X : VQ) = (K : 0). Recall that a parametric function Kβ is estimable under M if and only if C(K') ⊆ C(X'); in particular, X_1β_1 is estimable under M if and only if C(X_1) ∩ C(X_2) = {0}; see [13].

The fundamental BLUE equation given in (3) can equivalently be expressed as follows: Gy = BLUE(Xβ) if and only if there exists a matrix L ∈ R^{p×n} such that G is a solution to

    [ V   X ] [ G' ]   [ 0  ]
    [ X'  0 ] [ L  ] = [ X' ] ,   i.e.,   Z [ G' ; L ] = [ 0 ; X' ],    (4)

where Z denotes the symmetric block partitioned matrix on the left and the semicolon separates block rows.

Partitioned matrices and their generalized inverses play an important role in the theory of linear models. According to Rao [1], the problem of inference from a linear model can be completely solved once an arbitrary generalized inverse of the partitioned matrix Z has been obtained. This approach, based on the numerical evaluation of a generalized inverse of the partitioned matrix Z, is known as the IPM method; see [12-15].

Let the matrix

    C = [ C_1  C_2 ]
        [ C_3  C_4 ]

be an arbitrary generalized inverse of Z, i.e., C is any matrix satisfying the equation ZCZ = Z, where C_1 ∈ R^{n×n} and C_2, C_3, C_4 are of conformable dimensions. Then one solution to the (consistent) equation (4) is
    [ G' ]     [ 0  ]   [ C_2 X' ]
    [ L  ] = C [ X' ] = [ C_4 X' ] .    (5)

Therefore we see that BLUE(Xβ) = Gy = XC_2'y, which is one representation for the BLUE of Xβ under M. If we let C vary through all generalized inverses of Z, we obtain all solutions to (4) and thereby all representations Gy of the BLUE of Xβ under M. For further reference on the submatrices C_i, i = 1, ..., 4, and their statistical applications, see [16-21].

3. SOME RESULTS ON A GENERALIZED INVERSE OF Z

Some explicit algebraic expressions for the submatrices of C were obtained in [15, Theorem 2.1]. The purpose of this section is to extend this theorem to the 3×3 symmetric block partitioned matrix arising from the partitioned model, in order to obtain the BLUEs of the subparameters and their properties. Let D ∈ {Z^-} be expressed as

    D = [ D_0  D_1  D_2 ]          Z = [ V     X_1  X_2 ]
        [ E_1  F_1  F_2 ] ,            [ X_1'  0    0   ] ,    (6)
        [ E_2  F_3  F_4 ]              [ X_2'  0    0   ]

where D_0 ∈ R^{n×n}, D_1 ∈ R^{n×p_1}, D_2 ∈ R^{n×p_2}, the matrices E_1, E_2, F_1, F_2, F_3, F_4 are conformable, and {Z^-} stands for the set of all generalized inverses of Z; from this point on, Z denotes this 3×3 block matrix. In the following theorem we collect some properties of the submatrices of D given in (6).

Theorem 1. Let V, X_1, X_2, D_0, D_1, D_2, E_1, E_2, F_1, F_2, F_3, F_4 be defined as before, and let C(X_1) ∩ C(X_2) = {0}. Then the following statements hold:

(i) D' = [ D_0' E_1' E_2' ; D_1' F_1' F_3' ; D_2' F_2' F_4' ] is another choice of a generalized inverse of Z.

(ii) VD_0X_1 + X_1E_1X_1 + X_2E_2X_1 = X_1, X_1'D_0X_1 = 0, X_2'D_0X_1 = 0.

(iii) VD_0X_2 + X_1E_1X_2 + X_2E_2X_2 = X_2, X_1'D_0X_2 = 0, X_2'D_0X_2 = 0.

(iv) VD_0V + X_1E_1V + X_2E_2V = V, X_1'D_0V = 0, X_2'D_0V = 0.

(v) VD_1X_1' + X_1F_1X_1' + X_2F_3X_1' = 0, X_1'D_1X_1' = X_1', X_2'D_1X_1' = 0.

(vi) VD_2X_2' + X_1F_2X_2' + X_2F_4X_2' = 0, X_1'D_2X_2' = 0, X_2'D_2X_2' = X_2'.

Proof: The result (i) is proved by taking transposes of both sides of the equation ZDZ = Z and using the symmetry of Z. We observe that the equations

    Va + X_1b + X_2c = X_1d, X_1'a = 0, X_2'a = 0    (7)

are solvable for any d, in which case a = D_0X_1d, b = E_1X_1d, c = E_2X_1d is a solution. Substituting this solution into (7) and omitting d, we obtain (ii). To prove (iii) we write the equations

    Va + X_1b + X_2c = X_2d, X_1'a = 0, X_2'a = 0,    (8)

which are solvable for any d; then a = D_0X_2d, b = E_1X_2d, c = E_2X_2d is a solution. Substituting this solution into (8) and omitting d, we obtain (iii). To prove (iv), the equations

    Va + X_1b + X_2c = Vd, X_1'a = 0, X_2'a = 0,    (9)

which are solvable for any d, are considered. In this case one solution is a = D_0Vd, b = E_1Vd, c = E_2Vd.
If we substitute this solution into (9) and omit d, we obtain (iv). In view of the assumption C(X_1) ∩ C(X_2) = {0}, we can consider the equations

    Va + X_1b + X_2c = 0, X_1'a = X_1'd_1, X_2'a = 0,    (10)

    Va + X_1b + X_2c = 0, X_1'a = 0, X_2'a = X_2'd_2,    (11)

for the proofs of (v) and (vi), respectively; see [18, Theorem 7.2.8]. In this case a = D_1X_1'd_1, b = F_1X_1'd_1, c = F_3X_1'd_1 is a solution for (10), and a = D_2X_2'd_2, b = F_2X_2'd_2, c = F_4X_2'd_2 is a solution for (11). Substituting these solutions into the corresponding equations and omitting d_1 and d_2, we obtain the required results.
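The statements of Theorem 1 can be checked numerically. The sketch below (an added illustration, not from the paper) takes the Moore-Penrose inverse as one particular generalized inverse D of the 3×3 block matrix Z in (6) and verifies ZDZ = Z together with the identities of parts (ii) and (v); generic random X_1, X_2 satisfy C(X_1) ∩ C(X_2) = {0} almost surely, and V is taken singular to stress that the method does not require an invertible covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p1, p2 = 7, 2, 2
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
A = rng.standard_normal((n, 4))
V = A @ A.T                              # singular nonnegative definite covariance

# Symmetric block partitioned matrix Z of (6)
Z = np.block([[V,    X1,                 X2],
              [X1.T, np.zeros((p1, p1)), np.zeros((p1, p2))],
              [X2.T, np.zeros((p2, p1)), np.zeros((p2, p2))]])
D = np.linalg.pinv(Z)                    # Moore-Penrose inverse: one choice of D in {Z^-}

# Partition D conformably with (6)
D0, D1 = D[:n, :n], D[:n, n:n + p1]
E1, E2 = D[n:n + p1, :n], D[n + p1:, :n]
F1, F3 = D[n:n + p1, n:n + p1], D[n + p1:, n:n + p1]

print(np.allclose(Z @ D @ Z, Z))         # D is a generalized inverse of Z
# Theorem 1(ii): V D0 X1 + X1 E1 X1 + X2 E2 X1 = X1,  X1' D0 X1 = 0,  X2' D0 X1 = 0
print(np.allclose(V @ D0 @ X1 + X1 @ E1 @ X1 + X2 @ E2 @ X1, X1))
print(np.allclose(X1.T @ D0 @ X1, 0), np.allclose(X2.T @ D0 @ X1, 0))
# Theorem 1(v): V D1 X1' + X1 F1 X1' + X2 F3 X1' = 0  and  X1' D1 X1' = X1'
print(np.allclose(V @ D1 @ X1.T + X1 @ F1 @ X1.T + X2 @ F3 @ X1.T, 0))
print(np.allclose(X1.T @ D1 @ X1.T, X1.T))
```

With these dimensions rank Z = rank(V : X) + rank(X) = n + p_1 + p_2 almost surely, so Z is in fact nonsingular and pinv returns its ordinary inverse; the identities then hold to machine precision.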
4. IPM METHOD FOR SUBPARAMETERS

The fundamental BLUE equation given in (3) can accordingly be written for Ay being the BLUE of an estimable parametric function Kβ; that is, Ay = BLUE(Kβ) if and only if there exists a matrix L of conformable dimensions such that A is a solution to

    Z [ A' ; L' ] = [ 0 ; K' ],    (12)

where Z is the matrix given in (6) and K' is partitioned conformably.

Now assume that X_1β_1 and X_2β_2 are estimable under M. Taking K = (X_1 : 0) and K = (0 : X_2), respectively, in equation (12), we obtain the BLUE equations of the subparameters X_1β_1 and X_2β_2: there exist matrices L_1 ∈ R^{p_1×n}, L_2 ∈ R^{p_2×n}, L_3 ∈ R^{p_1×n}, L_4 ∈ R^{p_2×n} such that G_1 and G_2 are solutions to the following equations, respectively:

                             [ V     X_1  X_2 ] [ G_1' ]   [ 0    ]
    G_1y = BLUE(X_1β_1)  ⟺  [ X_1'  0    0   ] [ L_1  ] = [ X_1' ] ,    (13)
                             [ X_2'  0    0   ] [ L_2  ]   [ 0    ]

                             [ V     X_1  X_2 ] [ G_2' ]   [ 0    ]
    G_2y = BLUE(X_2β_2)  ⟺  [ X_1'  0    0   ] [ L_3  ] = [ 0    ] .    (14)
                             [ X_2'  0    0   ] [ L_4  ]   [ X_2' ]

Therefore the following theorem can be given to determine the BLUEs of the subparameters by the IPM method.

Theorem 2. Consider the general partitioned linear model M and the matrix D given in (6). Suppose that C(X_1) ∩ C(X_2) = {0}. Then

    BLUE(X_1β_1) = X_1D_1'y = X_1E_1y and BLUE(X_2β_2) = X_2D_2'y = X_2E_2y.    (15)

Proof: The general solution of the matrix equation given in (13) is

    [ G_1' ; L_1 ; L_2 ] = D [ 0 ; X_1' ; 0 ] + (I − DZ)U,

where U is an arbitrary matrix of conformable dimensions; thereby we get G_1y = X_1D_1'y plus a term depending on U. Since the model is assumed to be consistent, y can be written as y = X_1l_1 + X_2l_2 + VQl_3 for some vectors l_1, l_2, l_3, and from Theorem 1 the U-dependent part of G_1y vanishes for every such y, so that G_1y = X_1D_1'y. Moreover, according to Theorem 1(i), we can replace D_1' by E_1. Therefore BLUE(X_1β_1) = X_1D_1'y = X_1E_1y is obtained; BLUE(X_2β_2) = X_2D_2'y = X_2E_2y is obtained in a similar way.

The following results are easily obtained from Theorem 1(v) and (vi) under M:

    E[BLUE(X_1β_1)] = X_1β_1, E[BLUE(X_2β_2)] = X_2β_2,    (16)

    Cov[BLUE(X_1β_1)] = −X_1F_1X_1', Cov[BLUE(X_2β_2)] = −X_2F_4X_2',    (17)

    Cov[BLUE(X_1β_1), BLUE(X_2β_2)] = −X_1F_2X_2' = −X_1F_3'X_2'.    (18)
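Theorem 2 can likewise be illustrated numerically (an added sketch, not from the paper): with the Moore-Penrose inverse D = Z^+ as one generalized inverse, G_1 = X_1D_1' and G_2 = X_2D_2' satisfy the subparameter BLUE equations, i.e., each G_i maps (X_1 : X_2 : VQ) onto the corresponding block of (X_1 : X_2 : 0), and their sum satisfies the fundamental equation (3) for the full Xβ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p1, p2 = 7, 2, 3
X1 = rng.standard_normal((n, p1))
X2 = rng.standard_normal((n, p2))
X = np.hstack([X1, X2])
A = rng.standard_normal((n, 5))
V = A @ A.T                              # singular nonnegative definite covariance

Z = np.block([[V,    X1,                 X2],
              [X1.T, np.zeros((p1, p1)), np.zeros((p1, p2))],
              [X2.T, np.zeros((p2, p1)), np.zeros((p2, p2))]])
D = np.linalg.pinv(Z)
D1, D2 = D[:n, n:n + p1], D[:n, n + p1:]

# Theorem 2: BLUE(X1 b1) = X1 D1' y and BLUE(X2 b2) = X2 D2' y
G1, G2 = X1 @ D1.T, X2 @ D2.T

Q = np.eye(n) - X @ np.linalg.pinv(X)    # Q = I_n - P_X
blocks = np.hstack([X1, X2, V @ Q])
# G1 (X1 : X2 : VQ) = (X1 : 0 : 0)  and  G2 (X1 : X2 : VQ) = (0 : X2 : 0)
print(np.allclose(G1 @ blocks, np.hstack([X1, np.zeros((n, p2)), np.zeros((n, n))])))
print(np.allclose(G2 @ blocks, np.hstack([np.zeros((n, p1)), X2, np.zeros((n, n))])))
# The sum is a BLUE of the full X b: (G1 + G2)(X : VQ) = (X : 0)
print(np.allclose((G1 + G2) @ np.hstack([X, V @ Q]),
                  np.hstack([X, np.zeros((n, n))])))
```

The last check is exactly the additive decomposition property of the BLUEs of the subparameters.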
5. ADDITIVE DECOMPOSITION OF THE BLUES OF SUBPARAMETERS

The purpose of this section is to give some additive properties of the BLUEs of the subparameters under M.

Theorem 3. Consider the model M and assume that X_1β_1 and X_2β_2 are estimable under M. Then BLUE(X_1β_1) + BLUE(X_2β_2) is always a BLUE of Xβ under M.

Proof: Let BLUE(X_1β_1) and BLUE(X_2β_2) be given as in (15). Then we can write BLUE(X_1β_1) + BLUE(X_2β_2) = (X_1D_1' + X_2D_2')y. According to Theorem 1(v) and (vi), we see that

    (X_1D_1' + X_2D_2')(X_1 : X_2 : VQ) = (X_1 : X_2 : 0),

i.e., G = X_1D_1' + X_2D_2' satisfies the fundamental BLUE equation (3) for Xβ. Therefore the required result is obtained.

The following results are easily obtained from Theorem 1(iv) and (16)-(18):

    E[BLUE(X_1β_1) + BLUE(X_2β_2)] = Xβ,

    Cov[BLUE(X_1β_1) + BLUE(X_2β_2)] = −X_1F_1X_1' − X_1F_2X_2' − X_2F_3X_1' − X_2F_4X_2'.

REFERENCES

[1] C. R. Rao, "Unified theory of linear estimation", Sankhyā Ser. A, vol. 33, pp. 371-394, 1971. [Corrigendum (1972).]
[2] M. Nurhanen and S. Puntanen, "Effect of deleting an observation on the equality of the OLSE and BLUE", Linear Algebra and its Applications, vol. 176, 1992.
[3] H. J. Werner and C. Yapar, "Some equalities for estimations of partial coefficients under a general linear regression model", Linear Algebra and its Applications, vol. 237/238, 1996.
[4] P. Bhimasankaram and R. Saharay, "On a partitioned linear model and some associated reduced models", Linear Algebra and its Applications.
[5] J. Gross and S. Puntanen, "Estimation under a general partitioned linear model", Linear Algebra and its Applications.
[6] B. Zhang, B. Liu and C. Lu, "A study of the equivalence of the BLUEs between a partitioned singular linear model and its reduced singular linear models", Acta Mathematica Sinica, English Series, vol. 20.
[7] Y. Tian and S. Puntanen, "On the equivalence of estimations under a general linear model and its transformed models", Linear Algebra and its Applications.
[8] Y. Tian, "On properties of BLUEs under general linear regression models", Journal of Statistical Planning and Inference.
[9] J. K. Baksalary and P. R.
Pordzik, "Inverse-partitioned-matrix method for the general Gauss-Markov model with linear restrictions", Journal of Statistical Planning and Inference.
[10] C. R. Rao, "Least squares theory using an estimated dispersion matrix and its application to measurement of signals", in: L. M. Le Cam and J. Neyman (eds.), Proc. Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, California, 1965/1966, Vol. 1, pp. 355-372, Univ. of California Press, Berkeley, 1967.
[11] G. Zyskind, "On canonical forms, non-negative covariance matrices and best and simple least squares linear estimators in linear models", The Annals of Mathematical Statistics, vol. 38, pp. 1092-1109, 1967.
[12] C. R. Rao, "Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix", Journal of Multivariate Analysis, vol. 3, pp. 276-292, 1973.
[13] I. S. Alalouf and G. P. H. Styan, "Characterizations of estimability in the general linear model", The Annals of Statistics, vol. 7, pp. 194-200, 1979.
[14] S. Puntanen, G. P. H. Styan and J. Isotalo, Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty, Springer, Heidelberg, 2011.
[15] C. R. Rao, "A note on the IPM method in the unified theory of linear estimation", Sankhyā Ser. A, vol. 34, 1972.
[16] H. Drygas, "A note on the Inverse-Partitioned-Matrix method in linear regression analysis", Linear Algebra and its Applications, vol. 67, 1985.
[17] F. J. Hall and C. D. Meyer, "Generalized inverses of the fundamental bordered matrix used in linear
estimation", Sankhyā Ser. A, vol. 37, pp. 428-438, 1975. [Corrigendum (1978), vol. 40, p. 399.]
[18] D. A. Harville, Matrix Algebra From a Statistician's Perspective, Springer, New York, 1997.
[19] J. Isotalo, S. Puntanen and G. P. H. Styan, "A useful matrix decomposition and its statistical applications in linear regression", Communications in Statistics - Theory and Methods, vol. 37, 2008.
[20] R. M. Pringle and A. A. Rayner, Generalized Inverse Matrices with Applications to Statistics, Hafner Publishing Company, New York, 1971.
[21] C. R. Rao, "Some recent results in linear estimation", Sankhyā Ser. B.
[22] C. R. Rao, Linear Statistical Inference and its Applications, 2nd Ed., Wiley, New York, 1973.
[23] H. J. Werner, "C. R. Rao's IPM method: a geometric approach", in: New Perspectives in Theoretical and Applied Statistics (M. L. Puri, J. P. Vilaplana and W. Wertz, eds.), Wiley, New York, 1987.
More informationThe Basics of Graphical Models
The Basics of Graphical Models David M. Blei Columbia University October 3, 2015 Introduction These notes follow Chapter 2 of An Introduction to Probabilistic Graphical Models by Michael Jordan. Many figures
More informationOn the Eigenvalues of Integral Operators
Çanaya Üniversitesi Fen-Edebiyat Faültesi, Journal of Arts and Sciences Say : 6 / Aral 006 On the Eigenvalues of Integral Operators Yüsel SOYKAN Abstract In this paper, we obtain asymptotic estimates of
More informationChapter 3. Distribution Problems. 3.1 The idea of a distribution. 3.1.1 The twenty-fold way
Chapter 3 Distribution Problems 3.1 The idea of a distribution Many of the problems we solved in Chapter 1 may be thought of as problems of distributing objects (such as pieces of fruit or ping-pong balls)
More information160 CHAPTER 4. VECTOR SPACES
160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results
More informationLinear Equations in Linear Algebra
1 Linear Equations in Linear Algebra 1.5 SOLUTION SETS OF LINEAR SYSTEMS HOMOGENEOUS LINEAR SYSTEMS A system of linear equations is said to be homogeneous if it can be written in the form A 0, where A
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 7: Conditionally Positive Definite Functions Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter
More informationGrassmann Algebra in Game Development. Eric Lengyel, PhD Terathon Software
Grassmann Algebra in Game Development Eric Lengyel, PhD Terathon Software Math used in 3D programming Dot / cross products, scalar triple product Planes as 4D vectors Homogeneous coordinates Plücker coordinates
More informationLinear Algebra Review. Vectors
Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length
More informationMean value theorem, Taylors Theorem, Maxima and Minima.
MA 001 Preparatory Mathematics I. Complex numbers as ordered pairs. Argand s diagram. Triangle inequality. De Moivre s Theorem. Algebra: Quadratic equations and express-ions. Permutations and Combinations.
More informationWe shall turn our attention to solving linear systems of equations. Ax = b
59 Linear Algebra We shall turn our attention to solving linear systems of equations Ax = b where A R m n, x R n, and b R m. We already saw examples of methods that required the solution of a linear system
More informationInteger roots of quadratic and cubic polynomials with integer coefficients
Integer roots of quadratic and cubic polynomials with integer coefficients Konstantine Zelator Mathematics, Computer Science and Statistics 212 Ben Franklin Hall Bloomsburg University 400 East Second Street
More informationLecture 4: Partitioned Matrices and Determinants
Lecture 4: Partitioned Matrices and Determinants 1 Elementary row operations Recall the elementary operations on the rows of a matrix, equivalent to premultiplying by an elementary matrix E: (1) multiplying
More informationMATHEMATICAL METHODS OF STATISTICS
MATHEMATICAL METHODS OF STATISTICS By HARALD CRAMER TROFESSOK IN THE UNIVERSITY OF STOCKHOLM Princeton PRINCETON UNIVERSITY PRESS 1946 TABLE OF CONTENTS. First Part. MATHEMATICAL INTRODUCTION. CHAPTERS
More informationA characterization of trace zero symmetric nonnegative 5x5 matrices
A characterization of trace zero symmetric nonnegative 5x5 matrices Oren Spector June 1, 009 Abstract The problem of determining necessary and sufficient conditions for a set of real numbers to be the
More informationMath 4310 Handout - Quotient Vector Spaces
Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable
More informationCITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION
No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August
More information= 2 + 1 2 2 = 3 4, Now assume that P (k) is true for some fixed k 2. This means that
Instructions. Answer each of the questions on your own paper, and be sure to show your work so that partial credit can be adequately assessed. Credit will not be given for answers (even correct ones) without
More informationSolving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
More informationEEG işaretlerinin epileptik analizi için boyut azaltmanın etkileri
SAÜ. Fen Bil. Der. 18. Cilt, 2. Sayı, s. 93-97, 2014 SAU J. Sci. Vol 18, No 2, p. 93-97, 2014 EEG işaretlerinin epileptik analizi için boyut azaltmanın etkileri Murat Yıldız 1*, Erhan Bergil 2, Canan Oral
More informationAN ALGORITHM FOR DETERMINING WHETHER A GIVEN BINARY MATROID IS GRAPHIC
AN ALGORITHM FOR DETERMINING WHETHER A GIVEN BINARY MATROID IS GRAPHIC W. T. TUTTE. Introduction. In a recent series of papers [l-4] on graphs and matroids I used definitions equivalent to the following.
More informationMATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3
MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................
More informationLecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs
CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like
More informationPermanents, Order Statistics, Outliers, and Robustness
Permanents, Order Statistics, Outliers, and Robustness N. BALAKRISHNAN Department of Mathematics and Statistics McMaster University Hamilton, Ontario, Canada L8S 4K bala@mcmaster.ca Received: November
More informationApplied Linear Algebra I Review page 1
Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties
More information