Leontief Input-Output Model


We suppose the economy to be divided into $n$ sectors (about 500 for Leontief's model). The demand vector $d \in \mathbb{R}^n$ is the vector whose $i$th component is the value (in dollars, say) of production of sector $i$ demanded annually by consumers. Let $C = (c_{ij})$ be the $n \times n$ matrix in which $c_{ij}$ is the dollar value of the output of sector $i$ needed by sector $j$ to produce one dollar of output. If $c_{1j} + c_{2j} + \cdots + c_{nj} < 1$, then it costs sector $j$ less than one dollar to produce a dollar of output; in that case we say sector $j$ is profitable.

The economy represented by the matrix $C$ and demand vector $d$ is productive if there is an output vector $x \in \mathbb{R}^n$ so that
\[
x = Cx + d, \quad \text{or} \quad (I - C)x = d.
\]
Note that all entries of the demand and output vectors must be nonnegative, and all entries of the matrix $C$ must also be nonnegative.

Here is a way to think about this. Suppose the sectors first order inputs $Cd$ to meet the projected demand $d$. Then they will need to order additional inputs $C(Cd) = C^2 d$ to produce the required $Cd$ for each other; this leads to a further required input $C^3 d$ to produce $C^2 d$, and so forth. The final demand is
\[
d + Cd + C^2 d + C^3 d + \cdots = (I + C + C^2 + C^3 + \cdots)\,d.
\]
If the matrix $C$ is such that the powers $C^n$ get small as $n \to \infty$, then the matrix sum $I + C + C^2 + \cdots$ converges to $(I - C)^{-1}$. This is because
\[
(I + C + C^2 + \cdots + C^n)(I - C) = I - C^{n+1}
\]
for any $n$, so if $C^n \to 0$ as $n \to \infty$ we can take limits to get $(I + C + C^2 + \cdots)(I - C) = I$, or
\[
(I - C)^{-1} = I + C + C^2 + \cdots.
\]
In that case we can write the required production vector $x$ as $x = (I - C)^{-1} d$. But when does $C^n \to 0$ as $n \to \infty$? The following result gives an answer.
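The geometric-series picture above is easy to check numerically. Here is a quick NumPy sketch using a made-up 2-sector consumption matrix and demand vector (not from the text), chosen so that the powers of $C$ shrink to zero:

```python
import numpy as np

# A hypothetical 2-sector consumption matrix and demand vector,
# chosen (not from the text) so that the powers C^n tend to zero.
C = np.array([[0.2, 0.3],
              [0.4, 0.1]])
d = np.array([50.0, 30.0])

# Direct solution of (I - C) x = d.
I = np.eye(2)
x = np.linalg.solve(I - C, d)

# Truncated series (I + C + C^2 + ... + C^N) d, accumulated term by term.
series = np.zeros_like(d)
term = d.copy()
for _ in range(200):
    series += term
    term = C @ term          # next term is C^(k+1) d

print(x)       # production needed to meet demand d
print(series)  # should agree with x once C^n d is negligible
```

Both print the same vector, illustrating that the series sums to $(I - C)^{-1} d$ whenever the powers of $C$ die out.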

Theorem 1. If $C$ is a nonnegative matrix (that is, all its entries are nonnegative), then the following conditions are equivalent:

1. $(I - C)^{-1}$ exists and is nonnegative;
2. $C^n \to 0$ as $n \to \infty$;
3. there is a nonnegative vector $x$ so that $(I - C)x$ is positive (that is, all entries of $(I - C)x$ are positive).

This has the following corollary.

Corollary 1. If the nonnegative matrix $C$ has all its row sums less than 1, then $(I - C)^{-1}$ exists and is nonnegative.

Proof. If $x = (1, 1, \dots, 1)^T$, note that the $i$th entry of $Cx$ is just the $i$th row sum of $C$. But then $Cx < x$, so $(I - C)x > 0$.

A slight twist to the preceding corollary gives a very natural result.

Corollary 2. If the nonnegative matrix $C$ has all its column sums less than 1, then $(I - C)^{-1}$ exists and is nonnegative.

Proof. Suppose $C$ has all its column sums less than 1. Then $C^T$ has all its row sums less than 1, so by the preceding corollary $(I - C^T)^{-1}$ exists and is nonnegative. But $I$ is symmetric, so
\[
(I - C^T)^{-1} = ((I - C)^T)^{-1} = ((I - C)^{-1})^T;
\]
and since the transpose of a nonnegative matrix is nonnegative, this means that $(I - C)^{-1}$ exists and is nonnegative.

Now the $j$th column sum of a consumption matrix being less than 1 says that sector $j$ is profitable, so this latter corollary can be stated as follows.

Corollary 3. An economy is productive for any demand vector $d \ge 0$ if each sector is profitable.

Call a consumption matrix $C$ productive if the economy represented by $C$ and any demand vector $d \ge 0$ is productive. While Theorem 1 gives a necessary and sufficient condition for $C$ to be productive, the condition that

there is some nonnegative vector $x$ making $(I - C)x$ positive, or equivalently $Cx < x$, is not easy to check. (The hypotheses of Corollaries 1 and 2 are easy to check, but they may not be true.) So other criteria have been found. The following sufficient condition is called the Hawkins-Simon condition in the economics literature.

Theorem 2. If $C$ is nonnegative and every principal minor of $I - C$ is positive, then $(I - C)^{-1}$ is nonnegative.

Another criterion can be given involving the eigenvalues of $C$. The Perron-Frobenius theorem states that if $C$ is a nonnegative matrix, then there is a real eigenvalue $\lambda_{pf}$ of $C$ such that $\lambda_{pf} \ge 0$ and $|\lambda| \le \lambda_{pf}$ for all other eigenvalues $\lambda$ of $C$. We call $\lambda_{pf}$ the maximal eigenvalue of $C$. Then the following result holds.

Theorem 3. A nonnegative matrix $C$ is productive if and only if the maximal eigenvalue $\lambda_{pf}$ of $C$ satisfies $\lambda_{pf} < 1$.

To illustrate these three theorems, we shall take some examples of three-sector economies. Remember that this is just for illustration, since any realistic input-output model has hundreds of sectors. First consider the economy with consumption matrix
\[
C = \begin{pmatrix} 0 & .65 & .55 \\ .25 & .05 & .1 \\ .25 & .05 & 0 \end{pmatrix}.
\]
Then $C$ satisfies the hypothesis of Corollary 2, since every sector is profitable. But the hypothesis of Corollary 1 doesn't hold, since the first row sum is 1.2. Since
\[
I - C = \begin{pmatrix} 1 & -.65 & -.55 \\ -.25 & .95 & -.1 \\ -.25 & -.05 & 1 \end{pmatrix}
\]
has principal minors 1, 0.7875, 0.62875, the Hawkins-Simon condition is satisfied. The hypothesis of Theorem 3 also holds in this case, since $C$ has maximal eigenvalue $\lambda_{pf} \approx 0.60174$.

On the other hand, the three-sector economy with consumption matrix
\[
C = \begin{pmatrix} .5 & .15 & 0 \\ .25 & .15 & .5 \\ .25 & .15 & .5 \end{pmatrix}
\]

fails to satisfy the hypothesis of Corollary 2; indeed sectors 1 and 3 are not profitable. But it does satisfy the hypothesis of Corollary 1, since the row sums are 0.65, 0.9, and 0.9. The hypothesis of Theorem 2 also holds, since
\[
I - C = \begin{pmatrix} .5 & -.15 & 0 \\ -.25 & .85 & -.5 \\ -.25 & -.15 & .5 \end{pmatrix}
\]
has principal minors 0.5, 0.3875, 0.1375. In addition, $C$ has maximal eigenvalue $\lambda_{pf} \approx 0.78267$, so the hypothesis of Theorem 3 is satisfied as well.

Finally, consider
\[
C = \begin{pmatrix} .7 & .3 & .2 \\ .1 & .4 & .3 \\ .2 & .4 & .1 \end{pmatrix}.
\]
This matrix violates the hypothesis of Corollary 2, since sectors 1 and 2 are not profitable. It also doesn't satisfy the hypothesis of Corollary 1, since the first row sum is 1.2. With some persistence Theorem 1 can still be used to show $C$ productive, since
\[
C \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1.9 \\ .9 \\ .9 \end{pmatrix} < \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}.
\]
The principal minors of
\[
I - C = \begin{pmatrix} .3 & -.3 & -.2 \\ -.1 & .6 & -.3 \\ -.2 & -.4 & .9 \end{pmatrix}
\]
are 0.3, 0.15, 0.049, so Theorem 2 implies that $C$ is productive. Also, $C$ has maximal eigenvalue $\lambda_{pf} \approx 0.92736$, so Theorem 3 applies as well.

But how can we show a consumption matrix is not productive? One way is by Theorem 3, but this requires knowing the maximal eigenvalue of $C$. We can at least get some information from row and column sums. As we've seen, $C$ may still be productive even though individual row sums or column sums exceed 1. But what if, say, all the column sums of $C$ are 1 or more? If $C$ is productive, Theorem 1 implies that there is a vector $x \ge 0$ with
\[
Cx = \begin{pmatrix} c_{11} x_1 + c_{12} x_2 + \cdots + c_{1n} x_n \\ c_{21} x_1 + c_{22} x_2 + \cdots + c_{2n} x_n \\ \vdots \\ c_{n1} x_1 + c_{n2} x_2 + \cdots + c_{nn} x_n \end{pmatrix} < \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.
\]
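The computations for the last example are easy to reproduce. Here is a NumPy sketch (an illustration only) checking the entrywise inequality $Cx < x$, the principal minors, and the maximal eigenvalue:

```python
import numpy as np

# The last example's consumption matrix.
C = np.array([[0.7, 0.3, 0.2],
              [0.1, 0.4, 0.3],
              [0.2, 0.4, 0.1]])

# Theorem 1, condition 3: x = (2, 1, 1) gives Cx < x entrywise.
x = np.array([2.0, 1.0, 1.0])
print(C @ x)                       # (1.9, 0.9, 0.9)
print((C @ x < x).all())           # True

# Theorem 2: leading principal minors of I - C are all positive.
M = np.eye(3) - C
print([np.linalg.det(M[:k, :k]) for k in (1, 2, 3)])   # about 0.3, 0.15, 0.049

# Theorem 3: the maximal eigenvalue of C is below 1.
print(max(abs(np.linalg.eigvals(C))))                  # about 0.92736
```

All three criteria agree that this matrix is productive, even though neither row nor column sums alone settle the question.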

Adding these $n$ inequalities gives
\[
(c_{11} + c_{21} + \cdots + c_{n1}) x_1 + (c_{12} + c_{22} + \cdots + c_{n2}) x_2 + \cdots + (c_{1n} + c_{2n} + \cdots + c_{nn}) x_n < x_1 + x_2 + \cdots + x_n.
\]
But we've assumed that all the column sums $c_{11} + \cdots + c_{n1}$, $c_{12} + \cdots + c_{n2}$, \dots, $c_{1n} + \cdots + c_{nn}$ are $\ge 1$, so we have the contradictory inequality
\[
(c_{11} + c_{21} + \cdots + c_{n1}) x_1 + (c_{12} + c_{22} + \cdots + c_{n2}) x_2 + \cdots + (c_{1n} + c_{2n} + \cdots + c_{nn}) x_n \ge x_1 + x_2 + \cdots + x_n.
\]
This contradiction shows that $C$ cannot be productive, giving us another corollary to Theorem 1.

Corollary 4. The nonnegative matrix $C$ cannot be productive if every column sum is 1 or more. To put it another way, an economy cannot be productive if no sector is profitable.

By taking transposes as above we also have the following result.

Corollary 5. The nonnegative matrix $C$ cannot be productive if every row sum is 1 or more.

Exercises

1. Determine which of the following consumption matrices are productive.
\[
\text{a.} \begin{pmatrix} .5 & .4 & .2 \\ .2 & .3 & .3 \\ .3 & .4 & .4 \end{pmatrix} \quad
\text{b.} \begin{pmatrix} .7 & .3 & .25 \\ .2 & .4 & .25 \\ .05 & .15 & .25 \end{pmatrix} \quad
\text{c.} \begin{pmatrix} .1 & .5 & .4 \\ .4 & .3 & .3 \\ .3 & .4 & .4 \end{pmatrix} \quad
\text{d.} \begin{pmatrix} .1 & .1 & .3 \\ .9 & .8 & .3 \\ .1 & .5 & .4 \end{pmatrix}
\]

2. Give an example of a (nonzero) productive consumption matrix that is not invertible.

3. For real numbers $t \ge 0$, let $C(t) = \begin{pmatrix} .7 & .3 & .2 \\ .1 & .4 & .3 \\ .2 & .4 & t \end{pmatrix}$. For which $t$'s can you say whether $C(t)$ is productive or not?
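As a closing numerical check, Corollary 4 and Theorem 3 can be seen to agree. The sketch below uses a made-up consumption matrix (deliberately not one of the exercise matrices) whose column sums are all at least 1:

```python
import numpy as np

# A made-up consumption matrix (not from the exercises) whose
# column sums are all >= 1, so Corollary 4 says it is not productive.
C = np.array([[0.6, 0.5, 0.4],
              [0.3, 0.4, 0.5],
              [0.2, 0.3, 0.2]])
print(C.sum(axis=0))          # column sums 1.1, 1.2, 1.1 -- all >= 1

# Theorem 3 agrees: the maximal eigenvalue is at least 1,
# so no nonnegative x can satisfy Cx < x.
lam_pf = max(abs(np.linalg.eigvals(C)))
print(lam_pf)                 # >= 1, so C is not productive
```

For a nonnegative matrix the maximal eigenvalue lies between the smallest and largest column sums, which is why the column-sum test and the eigenvalue test cannot disagree here.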