The Solution of Linear Simultaneous Equations


Appendix A  The Solution of Linear Simultaneous Equations

Circuit analysis frequently involves the solution of linear simultaneous equations. Our purpose here is to review the use of determinants to solve such a set of equations. The theory of determinants (with applications) can be found in most intermediate-level algebra texts. (A particularly good reference for engineering students is Chapter 1 of E. A. Guillemin's The Mathematics of Circuit Analysis [New York: Wiley, 1949].) In our review here, we will limit our discussion to the mechanics of solving simultaneous equations with determinants.

A.1 Preliminary Steps

The first step in solving a set of simultaneous equations by determinants is to write the equations in a rectangular (square) format. In other words, we arrange the equations in a vertical stack such that each variable occupies the same horizontal position in every equation. For example, in Eqs. A.1, the variables i1, i2, and i3 occupy the first, second, and third positions, respectively, on the left-hand side of each equation:

    21i1 - 9i2 - 12i3 = -33,
    -3i1 + 6i2 -  2i3 =   3,                                 (A.1)
    -8i1 - 4i2 + 22i3 =  50.

Alternatively, one can describe this set of equations by saying that i1 occupies the first column in the array, i2 the second column, and i3 the third column. If one or more variables are missing from a given equation, they can be inserted by simply making their coefficient zero. Thus Eqs. A.2 can be "squared up" as shown by Eqs. A.3:

    2v1 -  v2        = 4,
           4v2 + 3v3 = 16,                                   (A.2)
    7v1        + 2v3 = 5;

    2v1 -  v2 + 0v3 = 4,
    0v1 + 4v2 + 3v3 = 16,                                    (A.3)
    7v1 + 0v2 + 2v3 = 5.

Electric Circuits, Eighth Edition, by James W. Nilsson and Susan A. Riedel.

A.2 Cramer's Method

The value of each unknown variable in the set of equations is expressed as the ratio of two determinants. If we let N, with an appropriate subscript, represent the numerator determinant and Δ represent the denominator determinant, then the kth unknown x_k is

    x_k = N_k / Δ.                                           (A.4)

The denominator determinant Δ is the same for every unknown variable and is called the characteristic determinant of the set of equations. The numerator determinant N_k varies with each unknown. Equation A.4 is referred to as Cramer's method for solving simultaneous equations.

A.3 The Characteristic Determinant

Once we have organized the set of simultaneous equations into an ordered array, as illustrated by Eqs. A.1 and A.3, it is a simple matter to form the characteristic determinant. This determinant is the square array made up of the coefficients of the unknown variables. For example, the characteristic determinants of Eqs. A.1 and A.3 are

        | 21  -9  -12 |
    Δ = | -3   6   -2 |                                      (A.5)
        | -8  -4   22 |

and

        | 2  -1  0 |
    Δ = | 0   4  3 |                                         (A.6)
        | 7   0  2 |

respectively.

A.4 The Numerator Determinant

The numerator determinant N_k is formed from the characteristic determinant by replacing the kth column in the characteristic determinant with the column of values appearing on the right-hand side of the equations.

For example, the numerator determinants for evaluating i1, i2, and i3 in Eqs. A.1 are

         | -33  -9  -12 |
    N1 = |   3   6   -2 |                                    (A.7)
         |  50  -4   22 |

         | 21  -33  -12 |
    N2 = | -3    3   -2 |                                    (A.8)
         | -8   50   22 |

         | 21  -9  -33 |
    N3 = | -3   6    3 |                                     (A.9)
         | -8  -4   50 |

The numerator determinants for the evaluation of v1, v2, and v3 in Eqs. A.3 are

         |  4  -1  0 |
    N1 = | 16   4  3 |                                       (A.10)
         |  5   0  2 |

         | 2   4  0 |
    N2 = | 0  16  3 |                                        (A.11)
         | 7   5  2 |

         | 2  -1   4 |
    N3 = | 0   4  16 |                                       (A.12)
         | 7   0   5 |

A.5 The Evaluation of a Determinant

The value of a determinant is found by expanding it in terms of its minors. The minor of any element in a determinant is the determinant that remains after the row and column occupied by the element have been deleted. For example, the minor of the element 6 in Eq. A.7 is

    | -33  -12 |
    |  50   22 |

while the minor of the element 22 in Eq. A.7 is

    | -33  -9 |
    |   3   6 |

The cofactor of an element is its minor multiplied by the sign-controlling factor

    (-1)^(i + j),

where i and j denote the row and column, respectively, occupied by the element. Thus the cofactor of the element 6 in Eq. A.7 is

    (-1)^(2 + 2) | -33  -12 |
                 |  50   22 |

and the cofactor of the element 22 is

    (-1)^(3 + 3) | -33  -9 |
                 |   3   6 |

The cofactor of an element is also referred to as its signed minor. The sign-controlling factor (-1)^(i + j) will equal +1 or -1 depending on whether i + j is an even or odd integer. Thus the algebraic sign of a cofactor alternates between + and - as we move along a row or column. For a 3 × 3 determinant, the plus and minus signs form the checkerboard pattern illustrated here:

    | +  -  + |
    | -  +  - |
    | +  -  + |

A determinant can be expanded along any row or column. Thus the first step in making an expansion is to select a row i or a column j. Once a row or column has been selected, each element in that row or column is multiplied by its signed minor, or cofactor. The value of the determinant is the sum of these products. As an example, let us evaluate the determinant in Eq. A.5 by expanding it along its first column. Following the rules just explained, we write the expansion as

    Δ = 21(+1) |  6  -2 | - 3(-1) | -9  -12 | - 8(+1) | -9  -12 |
               | -4  22 |         | -4   22 |         |  6   -2 |   (A.13)

The 2 × 2 determinants in Eq. A.13 can also be expanded by minors. The minor of an element in a 2 × 2 determinant is a single element. It follows that the expansion reduces to multiplying the upper-left element by the lower-right element and then subtracting from this product the product

of the lower-left element times the upper-right element. Using this observation, we evaluate Eq. A.13 to get

    Δ = 21(132 - 8) + 3(-198 - 48) - 8(18 + 72)
      = 2604 - 738 - 720 = 1146.                             (A.14)

Had we elected to expand the determinant along the second row of elements, we would have written

    Δ = -3(-1) | -9  -12 | + 6(+1) | 21  -12 | - 2(-1) | 21  -9 |
               | -4   22 |         | -8   22 |         | -8  -4 |

      = 3(-198 - 48) + 6(462 - 96) + 2(-84 - 72)
      = -738 + 2196 - 312 = 1146.                            (A.15)

The numerical values of the determinants N1, N2, and N3 given by Eqs. A.7, A.8, and A.9 are

    N1 = 1146,                                               (A.16)
    N2 = 2292,                                               (A.17)
    N3 = 3438.                                               (A.18)

It follows from Eqs. A.15 through A.18 that the solutions for i1, i2, and i3 in Eqs. A.1 are

    i1 = N1/Δ = 1 A,
    i2 = N2/Δ = 2 A,                                         (A.19)
    i3 = N3/Δ = 3 A.
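The expansion-by-minors procedure and Cramer's method translate directly into a short program. The following sketch (plain Python, written for this review; the helper names are ours, not from the text) evaluates a determinant by cofactor expansion along the first row and then solves Eqs. A.1 by Cramer's method:

```python
# Determinant by cofactor expansion (Section A.5) and Cramer's method
# (Eq. A.4), applied to Eqs. A.1. Helper names are illustrative only.
from fractions import Fraction

def minor(m, i, j):
    """Array that remains after deleting row i and column j."""
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    """Expand along the first row: sum of element times cofactor."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def cramer(a, b):
    """x_k = N_k / Delta, where N_k replaces column k of a with b."""
    delta = det(a)
    x = []
    for k in range(len(a)):
        nk = [row[:k] + [b[i]] + row[k+1:] for i, row in enumerate(a)]
        x.append(Fraction(det(nk), delta))
    return x

A = [[21, -9, -12],
     [-3,  6,  -2],
     [-8, -4,  22]]
b = [-33, 3, 50]

print(det(A))        # the characteristic determinant of Eqs. A.1
print(cramer(A, b))  # the mesh currents i1, i2, i3
```

Running this reproduces Δ = 1146 and i1 = 1 A, i2 = 2 A, i3 = 3 A; applying the same routine to Eqs. A.3 gives v1 = -9.8 V, v2 = -23.6 V, and v3 = 36.8 V.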

We leave you to verify that the solutions for v1, v2, and v3 in Eqs. A.3 are

    v1 = 49/(-5) = -9.8 V,
    v2 = 118/(-5) = -23.6 V,                                 (A.20)
    v3 = -184/(-5) = 36.8 V.

A.6 Matrices

A system of simultaneous linear equations can also be solved using matrices. In what follows, we briefly review matrix notation, algebra, and terminology.¹ A matrix is by definition a rectangular array of elements; thus

        [ a11  a12  a13  ...  a1n ]
    A = [ a21  a22  a23  ...  a2n ]                          (A.21)
        [ ...  ...  ...  ...  ... ]
        [ am1  am2  am3  ...  amn ]

is a matrix with m rows and n columns. We describe A as being a matrix of order m by n, or m × n, where m equals the number of rows and n the number of columns. We always specify the rows first and the columns second. The elements of the matrix (a11, a12, a13, ...) can be real numbers, complex numbers, or functions. We denote a matrix with a boldface capital letter.

The array in Eq. A.21 is frequently abbreviated by writing

    A = [a_ij]_mn,                                           (A.22)

where a_ij is the element in the ith row and the jth column. If m = 1, A is called a row matrix, that is,

    A = [a11  a12  a13  ...  a1n].                           (A.23)

¹ An excellent introductory-level text in matrix applications to circuit analysis is Lawrence P. Huelsman, Circuits, Matrices, and Linear Vector Spaces (New York: McGraw-Hill, 1963).

If n = 1, A is called a column matrix, that is,

        [ a11 ]
        [ a21 ]
    A = [ a31 ]                                              (A.24)
        [ ... ]
        [ am1 ]

If m = n, A is called a square matrix. For example, if m = n = 3, the square 3 by 3 matrix is

        [ a11  a12  a13 ]
    A = [ a21  a22  a23 ]                                    (A.25)
        [ a31  a32  a33 ]

Also note that we use brackets [ ] to denote a matrix, whereas we use vertical lines | | to denote a determinant. It is important to know the difference. A matrix is a rectangular array of elements. A determinant is a function of a square array of elements. Thus if a matrix A is square, we can define the determinant of A. For example, if

    A = [ 2   1 ]
        [ 6  15 ]

then

    det A = | 2   1 | = 30 - 6 = 24.
            | 6  15 |

A.7 Matrix Algebra

The equality, addition, and subtraction of matrices apply only to matrices of the same order. Two matrices are equal if, and only if, their corresponding elements are equal. In other words, A = B if, and only if, a_ij = b_ij for all i and j. For example, the two matrices in Eqs. A.26 and A.27 are equal because a11 = b11, a12 = b12, a21 = b21, and a22 = b22:

    A = [ 36  -12 ]                                          (A.26)
        [  4   16 ]

    B = [ 36  -12 ]                                          (A.27)
        [  4   16 ]

If A and B are of the same order, then

    C = A + B                                                (A.28)

implies

    c_ij = a_ij + b_ij.                                      (A.29)

For example, if

    A = [ 4  -6  10 ]                                        (A.30)
        [ 8  12  -4 ]

and

    B = [  16  10  -30 ]                                     (A.31)
        [ -20   8   15 ]

then

    C = [  20   4  -20 ]                                     (A.32)
        [ -12  20   11 ]

The equation

    D = A - B                                                (A.33)

implies

    d_ij = a_ij - b_ij.                                      (A.34)

For the matrices in Eqs. A.30 and A.31, we would have

    D = [ -12  -16   40 ]                                    (A.35)
        [  28    4  -19 ]

Matrices of the same order are said to be conformable for addition and subtraction. Multiplying a matrix by a scalar k is equivalent to multiplying each element by the scalar. Thus A = kB if, and only if, a_ij = k·b_ij. It should be noted that k may be real or complex. As an example, we will multiply the matrix D in Eq. A.35 by 5. The result is

    5D = [ -60  -80  200 ]                                   (A.36)
         [ 140   20  -95 ]
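These elementwise rules are easy to check by machine. A small sketch (plain Python; the helper names are ours) applies Eqs. A.29, A.34, and the scalar rule to the matrices of Eqs. A.30 and A.31:

```python
# Elementwise matrix operations from Section A.7 (Eqs. A.28-A.36).
def madd(a, b):
    # c_ij = a_ij + b_ij (Eq. A.29); orders must match (conformable)
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def msub(a, b):
    # d_ij = a_ij - b_ij (Eq. A.34)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mscale(k, a):
    # multiplying by a scalar multiplies every element (k real or complex)
    return [[k * x for x in row] for row in a]

A = [[4, -6, 10], [8, 12, -4]]
B = [[16, 10, -30], [-20, 8, 15]]

D = msub(A, B)
print(madd(A, B))    # C = A + B
print(D)             # D = A - B
print(mscale(5, D))  # 5D
```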

Matrix multiplication can be performed only if the number of columns in the first matrix is equal to the number of rows in the second matrix. In other words, the product AB requires the number of columns in A to equal the number of rows in B. The order of the resulting matrix will be the number of rows in A by the number of columns in B. Thus if C = AB, where A is of order m × p and B is of order p × n, then C will be a matrix of order m × n. When the number of columns in A equals the number of rows in B, we say A is conformable to B for multiplication. An element in C is given by the formula

    c_ij = Σ (k = 1 to p) a_ik b_kj.                         (A.37)

The formula given by Eq. A.37 is easy to use if one remembers that matrix multiplication is a row-by-column operation. Hence to get the ith, jth term in C, each element in the ith row of A is multiplied by the corresponding element in the jth column of B, and the resulting products are summed. The following example illustrates the procedure. We are asked to find the matrix C = AB when

    A = [ 6  3  2 ]                                          (A.38)
        [ 1  4  6 ]

and

        [ 4   2 ]
    B = [ 0   3 ]                                            (A.39)
        [ 1  -2 ]

First we note that C will be a 2 × 2 matrix and that each element in C will require summing three products. To find C11 we multiply the corresponding elements in row 1 of matrix A with the elements in column 1 of matrix B and then sum the products. We can visualize this multiplication and summing process by extracting the corresponding row and column from each matrix and then lining them up element by element. So to find C11 we have

    Row 1 of A    Column 1 of B
        6              4
        3              0
        2              1

and therefore

    C11 = 6 × 4 + 3 × 0 + 2 × 1 = 26.

To find C12 we visualize

    Row 1 of A    Column 2 of B
        6              2
        3              3
        2             -2

thus

    C12 = 6 × 2 + 3 × 3 + 2 × (-2) = 17.

For C21 we have

    Row 2 of A    Column 1 of B
        1              4
        4              0
        6              1

and

    C21 = 1 × 4 + 4 × 0 + 6 × 1 = 10.

Finally, for C22 we have

    Row 2 of A    Column 2 of B
        1              2
        4              3
        6             -2

from which

    C22 = 1 × 2 + 4 × 3 + 6 × (-2) = 2.

It follows that

    C = AB = [ 26  17 ]                                      (A.40)
             [ 10   2 ]

In general, matrix multiplication is not commutative, that is, AB ≠ BA. As an example, consider the product BA for the matrices in Eqs. A.38 and A.39. The matrix generated by this multiplication is of order 3 × 3, and each term in the resulting matrix requires adding two products. Therefore if D = BA, we have

    D = [ 26  20   20 ]                                      (A.41)
        [  3  12   18 ]
        [  4  -5  -10 ]

Obviously, C ≠ D. We leave you to verify the elements in Eq. A.41.

Matrix multiplication is associative and distributive. Thus

    (AB)C = A(BC),                                           (A.42)
    A(B + C) = AB + AC,                                      (A.43)
    (A + B)C = AC + BC.                                      (A.44)
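The row-by-column rule of Eq. A.37 can be sketched in a few lines (plain Python; the function name is our own choice), and it reproduces both C = AB and the non-commuting product D = BA:

```python
# Row-by-column multiplication, c_ij = sum_k a_ik * b_kj (Eq. A.37).
def matmul(a, b):
    p = len(b)  # columns of a must equal rows of b (conformability)
    assert all(len(row) == p for row in a), "not conformable"
    return [[sum(a[i][k] * b[k][j] for k in range(p))
             for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[6, 3, 2], [1, 4, 6]]        # the 2 x 3 matrix of Eq. A.38
B = [[4, 2], [0, 3], [1, -2]]     # the 3 x 2 matrix of Eq. A.39

print(matmul(A, B))   # C of Eq. A.40
print(matmul(B, A))   # D of Eq. A.41: AB != BA
```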

In Eqs. A.42, A.43, and A.44, we assume that the matrices are conformable for addition and multiplication. We have already noted that matrix multiplication is not commutative. There are two other properties of multiplication in scalar algebra that do not carry over to matrix algebra.

First, the matrix product AB = 0 does not imply either A = 0 or B = 0. (Note: A matrix is equal to zero when all its elements are zero.) For example, if

    A = [ 1  0 ]    and    B = [ 0  0 ],
        [ 1  0 ]               [ 2  4 ]

then

    AB = [ 0  0 ] = 0.
         [ 0  0 ]

Hence the product is zero, but neither A nor B is zero.

Second, the matrix equation AB = AC does not imply B = C. For example, if

    A = [ 1  0 ],    B = [ 3  4 ],    C = [ 3  4 ],
        [ 1  0 ]         [ 7  8 ]         [ 5  6 ]

then

    AB = AC = [ 3  4 ],    but B ≠ C.
              [ 3  4 ]

The transpose of a matrix is formed by interchanging the rows and columns. For example, if

        [ 1  2  3 ]                  [ 1  4  7 ]
    A = [ 4  5  6 ],    then A^T  =  [ 2  5  8 ].
        [ 7  8  9 ]                  [ 3  6  9 ]

The transpose of the sum of two matrices is equal to the sum of the transposes, that is,

    (A + B)^T = A^T + B^T.                                   (A.45)

The transpose of the product of two matrices is equal to the product of the transposes taken in reverse order. In other words,

    [AB]^T = B^T A^T.                                        (A.46)

Equation A.46 can be extended to a product of any number of matrices. For example,

    [ABCD]^T = D^T C^T B^T A^T.                              (A.47)

If A = A^T, the matrix is said to be symmetric. Only square matrices can be symmetric.

A.8 Identity, Adjoint, and Inverse Matrices

An identity matrix is a square matrix where a_ij = 0 for i ≠ j, and a_ij = 1 for i = j. In other words, all the elements in an identity matrix are zero except those along the main diagonal, where they are equal to 1. Thus

    [ 1  0 ]      [ 1  0  0 ]      [ 1  0  0  0 ]
    [ 0  1 ],     [ 0  1  0 ],     [ 0  1  0  0 ]
                  [ 0  0  1 ]      [ 0  0  1  0 ]
                                   [ 0  0  0  1 ]

are all identity matrices. Note that identity matrices are always square. We will use the symbol U for an identity matrix.

The adjoint of a matrix A of order n × n is defined as

    adj A = [Δ_ji] (n × n),                                  (A.48)

where Δ_ij is the cofactor of a_ij. (See Section A.5 for the definition of a cofactor.) It follows from Eq. A.48 that one can think of finding the adjoint of a square matrix as a two-step process. First construct a matrix made up of the cofactors of A, and then transpose the matrix of cofactors. As an example we will find the adjoint of the 3 × 3 matrix

        [  1  2  3 ]
    A = [  3  2  1 ]
        [ -1  1  5 ]

The cofactors of the elements in A are

    Δ11 =  (10 - 1) = 9,
    Δ12 = -(15 + 1) = -16,
    Δ13 =  (3 + 2) = 5,
    Δ21 = -(10 - 3) = -7,
    Δ22 =  (5 + 3) = 8,
    Δ23 = -(1 + 2) = -3,
    Δ31 =  (2 - 6) = -4,
    Δ32 = -(1 - 9) = 8,
    Δ33 =  (2 - 6) = -4.

The matrix of cofactors is

    B = [  9  -16   5 ]
        [ -7    8  -3 ]
        [ -4    8  -4 ]

It follows that the adjoint of A is

    adj A = B^T = [   9  -7  -4 ]
                  [ -16   8   8 ]
                  [   5  -3  -4 ]

One can check the arithmetic of finding the adjoint of a matrix by using the theorem

    adj A · A = det A · U.                                   (A.49)

Equation A.49 tells us that the adjoint of A times A equals the determinant of A times the identity matrix. For our example,

    det A = 1(9) + 3(-7) - 1(-4) = -8.

If we let C = adj A · A and use the technique illustrated in Section A.7, we find the elements of C to be

    c11 = 9 - 21 + 4 = -8,      c12 = 18 - 14 - 4 = 0,      c13 = 27 - 7 - 20 = 0,
    c21 = -16 + 24 - 8 = 0,     c22 = -32 + 16 + 8 = -8,    c23 = -48 + 8 + 40 = 0,
    c31 = 5 - 9 + 4 = 0,        c32 = 10 - 6 - 4 = 0,       c33 = 15 - 3 - 20 = -8.

Therefore

        [ -8   0   0 ]        [ 1  0  0 ]
    C = [  0  -8   0 ] = -8 · [ 0  1  0 ] = det A · U.
        [  0   0  -8 ]        [ 0  0  1 ]

A square matrix A has an inverse, denoted as A⁻¹, if

    A⁻¹A = AA⁻¹ = U.                                         (A.50)
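The two-step recipe (cofactor matrix, then transpose) and the check of Eq. A.49 can be sketched as follows (plain Python, reusing a recursive cofactor-expansion determinant; the helper names are ours):

```python
# Adjoint via cofactors (Eq. A.48) and the check adj(A)*A = det(A)*U (Eq. A.49).
def minor(m, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def adjoint(m):
    n = len(m)
    cof = [[(-1) ** (i + j) * det(minor(m, i, j)) for j in range(n)]
           for i in range(n)]                                 # matrix of cofactors
    return [[cof[j][i] for j in range(n)] for i in range(n)]  # ... transposed

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2, 3], [3, 2, 1], [-1, 1, 5]]
print(adjoint(A))             # [[9, -7, -4], [-16, 8, 8], [5, -3, -4]]
print(matmul(adjoint(A), A))  # -8 times the identity, since det A = -8
```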

Equation A.50 tells us that a matrix either premultiplied or postmultiplied by its inverse generates the identity matrix U. For the inverse matrix to exist, it is necessary that the determinant of A not equal zero. Only square matrices have inverses, and the inverse is also square. A formula for finding the inverse of a matrix is

    A⁻¹ = adj A / det A.                                     (A.51)

The formula in Eq. A.51 becomes very cumbersome if A is of an order larger than 3 by 3.² Today the digital computer eliminates the drudgery of having to find the inverse of a matrix in numerical applications of matrix algebra.

It follows from Eq. A.51 that the inverse of the matrix A in the previous example is

    A⁻¹ = (-1/8) [   9  -7  -4 ]   [ -1.125   0.875   0.5 ]
                 [ -16   8   8 ] = [  2      -1      -1   ]
                 [   5  -3  -4 ]   [ -0.625   0.375   0.5 ]

You should verify that A⁻¹A = AA⁻¹ = U.

A.9 Partitioned Matrices

It is often convenient in matrix manipulations to partition a given matrix into submatrices. The original algebraic operations are then carried out in terms of the submatrices. In partitioning a matrix, the placement of the partitions is completely arbitrary, with the one restriction that a partition must dissect the entire matrix. In selecting the partitions, it is also necessary to make sure the submatrices are conformable to the mathematical operations in which they are involved.

For example, consider using submatrices to find the product C = AB, where

        [  1   2  3   4   5 ]
    A = [  5   4  3   2   1 ]
        [ -1   0  2  -3   1 ]
        [  0  -1  1   2  -2 ]

² You can learn alternative methods for finding the inverse in any introductory text on matrix theory. See, for example, Franz E. Hohn, Elementary Matrix Algebra (New York: Macmillan, 1973).
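Equation A.51 combines the pieces already sketched above. A short check (plain Python; fractions avoid round-off, and the helper names are ours) inverts the example matrix and confirms A⁻¹A = AA⁻¹ = U:

```python
# Inverse via A^-1 = adj A / det A (Eq. A.51), checked against Eq. A.50.
from fractions import Fraction

def minor(m, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def inverse(m):
    d = det(m)
    assert d != 0, "singular: no inverse exists"
    n = len(m)
    # entry (i, j) of adj A is the cofactor of a_ji (note the transpose)
    return [[Fraction((-1) ** (i + j) * det(minor(m, j, i)), d)
             for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2, 3], [3, 2, 1], [-1, 1, 5]]
Ainv = inverse(A)
U = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(matmul(Ainv, A) == U and matmul(A, Ainv) == U)  # True
```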

and

        [  2 ]
        [  0 ]
    B = [ -1 ]
        [  1 ]
        [  3 ]

Assume that we decide to partition B into two submatrices, B1 and B2; thus

    B = [ B1 ]
        [ B2 ]

Now since B has been partitioned into a two-row column matrix of submatrices, A must be partitioned into at least a two-column matrix of submatrices; otherwise the multiplication cannot be performed. The location of the vertical partitions of the A matrix will depend on the definitions of B1 and B2. For example, if

         [  2 ]
    B1 = [  0 ]    and    B2 = [ 1 ],
         [ -1 ]                [ 3 ]

then A1 must contain three columns, and A2 must contain two columns. Thus the partitioning shown in Eq. A.52 would be acceptable for executing the product AB:

        [  1   2  3 |  4   5 ] [  2 ]
    C = [  5   4  3 |  2   1 ] [  0 ]
        [ -1   0  2 | -3   1 ] [ -1 ]                        (A.52)
        [  0  -1  1 |  2  -2 ] [ --- ]
                               [  1 ]
                               [  3 ]

If, on the other hand, we partition the B matrix so that

    B1 = [ 2 ]    and    B2 = [ -1 ],
         [ 0 ]                [  1 ]
                              [  3 ]

then A1 must contain two columns, and A2 must contain three columns. In this case the partitioning shown in Eq. A.53 would be acceptable in executing the product C = AB:

        [  1   2 | 3   4   5 ] [  2 ]
    C = [  5   4 | 3   2   1 ] [  0 ]
        [ -1   0 | 2  -3   1 ] [ --- ]                       (A.53)
        [  0  -1 | 1   2  -2 ] [ -1 ]
                               [  1 ]
                               [  3 ]

For purposes of discussion, we will focus on the partitioning given in Eq. A.52 and leave you to verify that the partitioning in Eq. A.53 leads to the same result. From Eq. A.52 we can write

    C = [ A1  A2 ] [ B1 ] = A1B1 + A2B2.                     (A.54)
                   [ B2 ]

It follows from Eqs. A.52 and A.54 that

           [  1   2  3 ] [  2 ]   [ -1 ]
    A1B1 = [  5   4  3 ] [  0 ] = [  7 ],
           [ -1   0  2 ] [ -1 ]   [ -4 ]
           [  0  -1  1 ]          [ -1 ]

           [  4   5 ]         [ 19 ]
    A2B2 = [  2   1 ] [ 1 ] = [  5 ],
           [ -3   1 ] [ 3 ]   [  0 ]
           [  2  -2 ]         [ -4 ]

and

        [ 18 ]
    C = [ 12 ]
        [ -4 ]
        [ -5 ]

The A matrix could also be partitioned horizontally once the vertical partitioning is made consistent with the multiplication operation. In this simple problem, the horizontal partitions can be made at the discretion of the analyst. Therefore C could also be evaluated using the partitioning shown in Eq. A.55:

        [  1   2  3 |  4   5 ] [  2 ]
        [  5   4  3 |  2   1 ] [  0 ]
    C = [ --------- | ------ ] [ -1 ]                        (A.55)
        [ -1   0  2 | -3   1 ] [ --- ]
        [  0  -1  1 |  2  -2 ] [  1 ]
                               [  3 ]

From Eq. A.55 it follows that

    C = [ A11  A12 ] [ B1 ] = [ C1 ],                        (A.56)
        [ A21  A22 ] [ B2 ]   [ C2 ]

where

    C1 = A11B1 + A12B2,
    C2 = A21B1 + A22B2.

You should verify that

         [ 1  2  3 ] [  2 ]   [ 4  5 ]
    C1 = [ 5  4  3 ] [  0 ] + [ 2  1 ] [ 1 ]
                     [ -1 ]            [ 3 ]

       = [ -1 ] + [ 19 ] = [ 18 ],
         [  7 ]   [  5 ]   [ 12 ]

         [ -1   0  2 ] [  2 ]   [ -3   1 ]
    C2 = [  0  -1  1 ] [  0 ] + [  2  -2 ] [ 1 ]
                       [ -1 ]              [ 3 ]

       = [ -4 ] + [  0 ] = [ -4 ],
         [ -1 ]   [ -4 ]   [ -5 ]

and therefore

        [ 18 ]
    C = [ 12 ]
        [ -4 ]
        [ -5 ]

We note in passing that the partitioning in Eqs. A.52 and A.55 is conformable with respect to addition.
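Block multiplication is easy to confirm numerically. The sketch below (plain Python; the matrices are the ones used in the example above, and the slicing helpers are ours) checks that A1B1 + A2B2 reproduces the full product AB:

```python
# Partitioned multiplication (Eq. A.54): C = A1*B1 + A2*B2.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def madd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[1, 2, 3, 4, 5],
     [5, 4, 3, 2, 1],
     [-1, 0, 2, -3, 1],
     [0, -1, 1, 2, -2]]
B = [[2], [0], [-1], [1], [3]]

# vertical partition of A after column 3, matching the partition of B
A1 = [row[:3] for row in A]
A2 = [row[3:] for row in A]
B1, B2 = B[:3], B[3:]

C = madd(matmul(A1, B1), matmul(A2, B2))
print(C)                   # [[18], [12], [-4], [-5]]
print(C == matmul(A, B))   # True: same as the unpartitioned product
```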

A.10 Applications

The following examples demonstrate some applications of matrix algebra in circuit analysis.

Example A.1

Use the matrix method to solve for the node voltages v1 and v2 in Eqs. 4.5 and 4.6.

Solution

The first step is to rewrite Eqs. 4.5 and 4.6 in matrix notation. Collecting the coefficients of v1 and v2 and at the same time shifting the constant terms to the right-hand side of the equations gives us

    1.7v1 - 0.5v2 = 10,
    -0.5v1 + 0.6v2 = 2.                                      (A.57)

It follows that in matrix notation, Eq. A.57 becomes

    [ 1.7  -0.5 ] [ v1 ] = [ 10 ],                           (A.58)
    [ -0.5  0.6 ] [ v2 ]   [  2 ]

or

    AV = I,                                                  (A.59)

where

    A = [ 1.7  -0.5 ],   V = [ v1 ],   I = [ 10 ].
        [ -0.5  0.6 ]        [ v2 ]        [  2 ]

To find the elements of the V matrix, we premultiply both sides of Eq. A.59 by the inverse of A; thus

    A⁻¹AV = A⁻¹I.                                            (A.60)

Equation A.60 reduces to

    UV = A⁻¹I,                                               (A.61)

or

    V = A⁻¹I.                                                (A.62)

It follows from Eq. A.62 that the solutions for v1 and v2 are obtained by solving for the matrix product A⁻¹I. To find the inverse of A, we first find the cofactors of A. Thus

    Δ11 = (-1)²(0.6) = 0.6,
    Δ12 = (-1)³(-0.5) = 0.5,
    Δ21 = (-1)³(-0.5) = 0.5,                                 (A.63)
    Δ22 = (-1)⁴(1.7) = 1.7.

The matrix of cofactors is

    B = [ 0.6  0.5 ],                                        (A.64)
        [ 0.5  1.7 ]

and the adjoint of A is

    adj A = B^T = [ 0.6  0.5 ].                              (A.65)
                  [ 0.5  1.7 ]

The determinant of A is

    det A = | 1.7  -0.5 | = (1.7)(0.6) - (0.25) = 0.77.      (A.66)
            | -0.5  0.6 |

From Eqs. A.65 and A.66, we can write the inverse of the coefficient matrix, that is,

    A⁻¹ = (1/0.77) [ 0.6  0.5 ].                             (A.67)
                   [ 0.5  1.7 ]

Now the product A⁻¹I is found:

    A⁻¹I = (1/0.77) [ 0.6  0.5 ] [ 10 ] = (1/0.77) [ 7.0 ] = [ 9.09 ].   (A.68)
                    [ 0.5  1.7 ] [  2 ]            [ 8.4 ]   [ 10.91 ]

It follows directly that

    [ v1 ] = [ 9.09 ],                                       (A.69)
    [ v2 ]   [ 10.91 ]

or v1 = 9.09 V and v2 = 10.91 V.
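The same computation can be scripted directly (plain Python; a hand-coded 2 × 2 inverse, with names of our choosing):

```python
# Example A.1 by machine: V = A^-1 * I (Eq. A.62).
def inv2(m):
    # inverse of a 2x2 matrix via adj A / det A (Eq. A.51)
    (a, b), (c, d) = m
    det = a * d - b * c
    assert det != 0, "singular coefficient matrix"
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.7, -0.5], [-0.5, 0.6]]
I = [10, 2]

Ainv = inv2(A)
v1 = Ainv[0][0] * I[0] + Ainv[0][1] * I[1]
v2 = Ainv[1][0] * I[0] + Ainv[1][1] * I[1]
print(round(v1, 2), round(v2, 2))   # 9.09 10.91
```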

Example A.2

Use the matrix method to find the three mesh currents in the circuit in Fig. 4.24.

Solution

The mesh-current equations that describe the circuit in Fig. 4.24 are given in Eq. 4.34. The constraint equation imposed by the current-controlled voltage source is given in Eq. 4.35. When Eq. 4.35 is substituted into Eq. 4.34, the following set of equations evolves:

    25i1 - 5i2 - 20i3 = 50,
    -5i1 + 10i2 - 4i3 = 0,                                   (A.70)
    -5i1 - 4i2 + 9i3 = 0.

In matrix notation, Eqs. A.70 reduce to

    AI = V,                                                  (A.71)

where

    A = [ 25  -5  -20 ],   I = [ i1 ],   V = [ 50 ].
        [ -5  10   -4 ]        [ i2 ]        [  0 ]
        [ -5  -4    9 ]        [ i3 ]        [  0 ]

It follows from Eq. A.71 that the solution for I is

    I = A⁻¹V.                                                (A.72)

We find the inverse of A by using the relationship

    A⁻¹ = adj A / det A.                                     (A.73)

To find the adjoint of A, we first calculate the cofactors of A. Thus

    Δ11 = (-1)²(90 - 16) = 74,
    Δ12 = (-1)³(-45 - 20) = 65,
    Δ13 = (-1)⁴(20 + 50) = 70,
    Δ21 = (-1)³(-45 - 80) = 125,
    Δ22 = (-1)⁴(225 - 100) = 125,
    Δ23 = (-1)⁵(-100 - 25) = 125,
    Δ31 = (-1)⁴(20 + 200) = 220,
    Δ32 = (-1)⁵(-100 - 100) = 200,
    Δ33 = (-1)⁶(250 - 25) = 225.

The cofactor matrix is

    B = [  74   65   70 ],                                   (A.74)
        [ 125  125  125 ]
        [ 220  200  225 ]

from which we can write the adjoint of A:

    adj A = B^T = [ 74  125  220 ].                          (A.75)
                  [ 65  125  200 ]
                  [ 70  125  225 ]

The determinant of A is

            | 25  -5  -20 |
    det A = | -5  10   -4 | = 25(90 - 16) + 5(-45 - 80) - 5(20 + 200) = 125.   (A.76)
            | -5  -4    9 |

It follows from Eq. A.73 that

    A⁻¹ = (1/125) [ 74  125  220 ].
                  [ 65  125  200 ]
                  [ 70  125  225 ]

The solution for I is

    I = (1/125) [ 74  125  220 ] [ 50 ]   [ 29.6 ]
                [ 65  125  200 ] [  0 ] = [ 26.0 ].          (A.77)
                [ 70  125  225 ] [  0 ]   [ 28.0 ]

The mesh currents follow directly from Eq. A.77. Thus

    [ i1 ]   [ 29.6 ]
    [ i2 ] = [ 26.0 ],                                       (A.78)
    [ i3 ]   [ 28.0 ]

or i1 = 29.6 A, i2 = 26 A, and i3 = 28 A.

Example A.3 illustrates the application of the matrix method when the elements of the coefficient matrix are complex numbers.
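Example A.2 can be checked with the general adjoint-based inverse sketched earlier (plain Python, helper names ours; exact fractions sidestep round-off):

```python
# Example A.2 by machine: I = A^-1 * V (Eq. A.72).
from fractions import Fraction

def minor(m, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def solve(a, v):
    """x = (adj a / det a) * v, one entry at a time (Eqs. A.72-A.73)."""
    d = det(a)
    n = len(a)
    return [sum(Fraction((-1) ** (i + j) * det(minor(a, j, i)), d) * v[j]
                for j in range(n)) for i in range(n)]

A = [[25, -5, -20], [-5, 10, -4], [-5, -4, 9]]
V = [50, 0, 0]
print([float(x) for x in solve(A, V)])   # [29.6, 26.0, 28.0]
```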

Example A.3

Use the matrix method to find the phasor mesh currents I1 and I2 in the circuit in Fig. 9.37.

Solution

Summing the voltages around mesh 1 generates the equation

    (1 + j2)I1 + (12 - j16)(I1 - I2) = 150∠0°.               (A.79)

Summing the voltages around mesh 2 produces the equation

    (12 - j16)(I2 - I1) + (1 + j3)I2 + 39Ix = 0.             (A.80)

The current controlling the dependent voltage source is

    Ix = I1 - I2.                                            (A.81)

After substituting Eq. A.81 into Eq. A.80, the equations are put into a matrix format by first collecting, in each equation, the coefficients of I1 and I2; thus

    (13 - j14)I1 - (12 - j16)I2 = 150∠0°,
    (27 + j16)I1 - (26 + j13)I2 = 0.                         (A.82)

Now, using matrix notation, Eqs. A.82 are written

    AI = V,                                                  (A.83)

where

    A = [ (13 - j14)  -(12 - j16) ],   I = [ I1 ],   V = [ 150∠0° ].
        [ (27 + j16)  -(26 + j13) ]        [ I2 ]        [    0   ]

It follows from Eq. A.83 that

    I = A⁻¹V.                                                (A.84)

The inverse of the coefficient matrix A is found using Eq. A.73. In this case, the cofactors of A are

    Δ11 = (-1)²(-26 - j13) = -26 - j13,
    Δ12 = (-1)³(27 + j16) = -27 - j16,
    Δ21 = (-1)³(-12 + j16) = 12 - j16,
    Δ22 = (-1)⁴(13 - j14) = 13 - j14.

The cofactor matrix B is

    B = [ (-26 - j13)  (-27 - j16) ],                        (A.85)
        [ (12 - j16)   (13 - j14)  ]

and the adjoint of A is

    adj A = B^T = [ (-26 - j13)  (12 - j16) ].               (A.86)
                  [ (-27 - j16)  (13 - j14) ]

The determinant of A is

    det A = | (13 - j14)  -(12 - j16) |
            | (27 + j16)  -(26 + j13) |

          = -(13 - j14)(26 + j13) + (12 - j16)(27 + j16) = 60 - j45.   (A.87)

The inverse of the coefficient matrix is

    A⁻¹ = [ (-26 - j13)  (12 - j16) ] / (60 - j45).          (A.88)
          [ (-27 - j16)  (13 - j14) ]

Equation A.88 can be simplified to

    A⁻¹ = ((60 + j45)/5625) [ (-26 - j13)  (12 - j16) ]
                            [ (-27 - j16)  (13 - j14) ]

        = (1/375) [ (-65 - j130)  (96 - j28) ].              (A.89)
                  [ (-60 - j145)  (94 - j17) ]

Substituting Eq. A.89 into Eq. A.84 gives us

    [ I1 ] = (1/375) [ (-65 - j130)  (96 - j28) ] [ 150∠0° ] = [ -26 - j52 ].   (A.90)
    [ I2 ]           [ (-60 - j145)  (94 - j17) ] [    0   ]   [ -24 - j58 ]

It follows from Eq. A.90 that

    I1 = -26 - j52 = 58.14∠-116.57° A,
    I2 = -24 - j58 = 62.77∠-112.48° A.                       (A.91)

In the first three examples, the matrix elements have been numbers: real numbers in Examples A.1 and A.2, and complex numbers in Example A.3. It is also possible for the elements to be functions. Example A.4 illustrates the use of matrix algebra in a circuit problem where the elements in the coefficient matrix are functions.
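Python's built-in complex type handles the phasor arithmetic directly. A sketch (names ours; angles converted with the standard cmath module) reproduces the currents of Example A.3:

```python
# Example A.3 by machine: 2x2 complex system solved by Cramer's rule.
import cmath

A = [[13 - 14j, -(12 - 16j)],
     [27 + 16j, -(26 + 13j)]]
V = [150, 0]                        # 150 at an angle of 0 degrees

delta = A[0][0] * A[1][1] - A[0][1] * A[1][0]
I1 = (V[0] * A[1][1] - A[0][1] * V[1]) / delta
I2 = (A[0][0] * V[1] - V[0] * A[1][0]) / delta

for label, I in (("I1", I1), ("I2", I2)):
    mag, ang = abs(I), cmath.phase(I) * 180 / cmath.pi
    print(label, I, round(mag, 2), round(ang, 2))  # magnitude and angle in degrees
```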

Example A.4

Use the matrix method to derive expressions for the node voltages V1 and V2 in the circuit in Fig. A.1.

[Figure A.1  The circuit for Example A.4.]

Solution

Summing the currents away from nodes 1 and 2 generates the following set of equations:

    (V1 - Vg)/R + V1·sC + (V1 - V2)sC = 0,
    V2/R + (V2 - V1)sC + (V2 - Vg)sC = 0.                    (A.92)

Letting G = 1/R and collecting the coefficients of V1 and V2 gives us

    (G + 2sC)V1 - sCV2 = GVg,
    -sCV1 + (G + 2sC)V2 = sCVg.                              (A.93)

Writing Eq. A.93 in matrix notation yields

    AV = I,                                                  (A.94)

where

    A = [ (G + 2sC)     -sC    ],   V = [ V1 ],   I = [ GVg  ].
        [    -sC     (G + 2sC) ]        [ V2 ]        [ sCVg ]

It follows from Eq. A.94 that

    V = A⁻¹I.                                                (A.95)

As before, we find the inverse of the coefficient matrix by first finding the adjoint of A and the determinant of A. The cofactors of A are

    Δ11 = (-1)²[G + 2sC] = G + 2sC,
    Δ12 = (-1)³(-sC) = sC,
    Δ21 = (-1)³(-sC) = sC,
    Δ22 = (-1)⁴[G + 2sC] = G + 2sC.

The cofactor matrix is

    B = [ (G + 2sC)     sC     ],                            (A.96)
        [     sC     (G + 2sC) ]

and therefore the adjoint of the coefficient matrix is

    adj A = B^T = [ (G + 2sC)     sC     ].                  (A.97)
                  [     sC     (G + 2sC) ]

The determinant of A is

    det A = | (G + 2sC)    -sC     | = G² + 4sCG + 3s²C².    (A.98)
            |    -sC    (G + 2sC)  |

The inverse of the coefficient matrix is

    A⁻¹ = [ (G + 2sC)     sC     ] / (G² + 4sCG + 3s²C²).    (A.99)
          [     sC     (G + 2sC) ]

It follows from Eq. A.95 that

    [ V1 ] = [ (G + 2sC)     sC     ] [ GVg  ] / (G² + 4sCG + 3s²C²).   (A.100)
    [ V2 ]   [     sC     (G + 2sC) ] [ sCVg ]

Carrying out the matrix multiplication called for in Eq. A.100 gives

    [ V1 ] = (1/(G² + 4sCG + 3s²C²)) [ (G² + 2sCG + s²C²)Vg ].   (A.101)
    [ V2 ]                           [ (2sCG + 2s²C²)Vg    ]

Now the expressions for V1 and V2 can be written directly from Eq. A.101; thus

    V1 = (G² + 2sCG + s²C²)Vg / (G² + 4sCG + 3s²C²),         (A.102)

and

    V2 = 2(sCG + s²C²)Vg / (G² + 4sCG + 3s²C²).              (A.103)
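Because the entries here are rational functions of s, a quick numerical spot-check is reassuring: evaluate the closed-form results at an arbitrary complex frequency and back-substitute into the node equations (the element values below are arbitrary test numbers of our choosing, not from the text):

```python
# Spot-check Eqs. A.102 and A.103 at an arbitrary s, G, C, Vg.
s = 2 + 3j                    # arbitrary complex frequency
G, C, Vg = 0.5, 0.25, 10.0    # arbitrary element values (G = 1/R)

den = G**2 + 4*s*C*G + 3*(s*C)**2
V1 = (G**2 + 2*s*C*G + (s*C)**2) * Vg / den    # Eq. A.102
V2 = 2*(s*C*G + (s*C)**2) * Vg / den           # Eq. A.103

# Back-substitute into the collected node equations (Eq. A.93);
# both residuals should vanish to within floating-point error.
r1 = (G + 2*s*C)*V1 - s*C*V2 - G*Vg
r2 = -s*C*V1 + (G + 2*s*C)*V2 - s*C*Vg
print(abs(r1) < 1e-9 and abs(r2) < 1e-9)   # True
```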

In our final example, we illustrate how matrix algebra can be used to analyze the cascade connection of two two-port circuits.

Example A.5

Show by means of matrix algebra how the input variables V1 and I1 can be described as functions of the output variables V2′ and I2′ in the cascade connection shown in Fig. 18.10.

Solution

We begin by expressing, in matrix notation, the relationship between the input and output variables of each two-port circuit. Thus

    [ V1 ] = [ a11  -a12 ] [ V2 ],                           (A.104)
    [ I1 ]   [ a21  -a22 ] [ I2 ]

and

    [ V1′ ] = [ a11′  -a12′ ] [ V2′ ].                       (A.105)
    [ I1′ ]   [ a21′  -a22′ ] [ I2′ ]

Now the cascade connection imposes the constraints

    V2 = V1′    and    I2 = -I1′.                            (A.106)

These constraint relationships are substituted into Eq. A.104. Thus

    [ V1 ] = [ a11  -a12 ] [  V1′ ] = [ a11  a12 ] [ V1′ ].  (A.107)
    [ I1 ]   [ a21  -a22 ] [ -I1′ ]   [ a21  a22 ] [ I1′ ]

The relationship between the input variables (V1, I1) and the output variables (V2′, I2′) is obtained by substituting Eq. A.105 into Eq. A.107. The result is

    [ V1 ] = [ a11  a12 ] [ a11′  -a12′ ] [ V2′ ].           (A.108)
    [ I1 ]   [ a21  a22 ] [ a21′  -a22′ ] [ I2′ ]

After multiplying the coefficient matrices, we have

    [ V1 ] = [ (a11a11′ + a12a21′)  -(a11a12′ + a12a22′) ] [ V2′ ].   (A.109)
    [ I1 ]   [ (a21a11′ + a22a21′)  -(a21a12′ + a22a22′) ] [ I2′ ]

Note that Eq. A.109 corresponds to writing Eqs. 18.72 and 18.73 in matrix form.
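The composition rule of Eq. A.109 is just a matrix product with sign bookkeeping. A numeric sketch (plain Python; the a-parameter values are arbitrary illustrations, not from the text) confirms that the cascaded matrix agrees with the element-by-element result of Eq. A.109:

```python
# Cascade of two two-ports (Eq. A.109). Parameter values are assumed.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

a11, a12, a21, a22 = 1.5, 2.0, 0.1, 1.2    # first two-port (assumed values)
b11, b12, b21, b22 = 0.8, 3.0, 0.05, 1.1   # second two-port (its a' parameters)

M2 = [[b11, -b12], [b21, -b22]]            # signed a'-matrix of Eq. A.105

# Eq. A.107 flips the sign of the second column of the first matrix ...
M1_flipped = [[a11, a12], [a21, a22]]
# ... and Eqs. A.108-A.109 multiply the two matrices:
overall = matmul(M1_flipped, M2)

expected = [[a11*b11 + a12*b21, -(a11*b12 + a12*b22)],
            [a21*b11 + a22*b21, -(a21*b12 + a22*b22)]]
print(overall)   # the cascaded matrix in the form of Eq. A.109
```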