Appendix A: The Solution of Linear Simultaneous Equations


Circuit analysis frequently involves the solution of linear simultaneous equations. Our purpose here is to review the use of determinants to solve such a set of equations. The theory of determinants (with applications) can be found in most intermediate-level algebra texts. (A particularly good reference for engineering students is E. A. Guillemin's The Mathematics of Circuit Analysis [New York: Wiley, 1949].) In our review here, we will limit our discussion to the mechanics of solving simultaneous equations with determinants.

A.1 Preliminary Steps

The first step in solving a set of simultaneous equations by determinants is to write the equations in a rectangular (square) format. In other words, we arrange the equations in a vertical stack such that each variable occupies the same horizontal position in every equation. For example, in Eqs. A.1, the variables i_1, i_2, and i_3 occupy the first, second, and third position, respectively, on the left-hand side of each equation:

21i_1 -  9i_2 - 12i_3 = -33,
-3i_1 +  6i_2 -  2i_3 =   3,        (A.1)
-8i_1 -  4i_2 + 22i_3 =  50.

Alternatively, one can describe this set of equations by saying that i_1 occupies the first column in the array, i_2 the second column, and i_3 the third column. If one or more variables are missing from a given equation, they can be inserted by simply making their coefficient zero. Thus Eqs. A.2 can be "squared up" as shown by Eqs. A.3:

2v_1 -  v_2        =  4,
       4v_2 + 3v_3 = 16,        (A.2)
7v_1        + 2v_3 =  5;

2v_1 -  v_2 + 0v_3 =  4,
0v_1 + 4v_2 + 3v_3 = 16,        (A.3)
7v_1 + 0v_2 + 2v_3 =  5.

A.2 Cramer's Method

The value of each unknown variable in the set of equations is expressed as the ratio of two determinants. If we let N, with an appropriate subscript, represent the numerator determinant and Δ represent the denominator determinant, then the kth unknown x_k is

x_k = N_k / Δ.        (A.4)

The denominator determinant Δ is the same for every unknown variable and is called the characteristic determinant of the set of equations. The numerator determinant N_k varies with each unknown. Equation A.4 is referred to as Cramer's method for solving simultaneous equations.

A.3 The Characteristic Determinant

Once we have organized the set of simultaneous equations into an ordered array, as illustrated by Eqs. A.1 and A.3, it is a simple matter to form the characteristic determinant. This determinant is the square array made up from the coefficients of the unknown variables. For example, the characteristic determinants of Eqs. A.1 and A.3 are

    | 21  -9  -12 |
Δ = | -3   6   -2 |        (A.5)
    | -8  -4   22 |

and

    | 2  -1   0 |
Δ = | 0   4   3 |        (A.6)
    | 7   0   2 |

respectively.

A.4 The Numerator Determinant

The numerator determinant N_k is formed from the characteristic determinant by replacing the kth column in the characteristic determinant with the column of values appearing on the right-hand side of the equations.
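
Cramer's method maps directly onto a few lines of code. The sketch below (Python with NumPy; the function name cramer and the use of NumPy are our additions, not part of the original text) forms the characteristic determinant and each numerator determinant for the system of Eqs. A.1.

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's method (Eq. A.4): x_k = N_k / delta."""
    delta = np.linalg.det(A)              # characteristic determinant (Eq. A.5)
    x = np.empty(len(b))
    for k in range(len(b)):
        Nk = A.copy()
        Nk[:, k] = b                      # replace the kth column with the right-hand side
        x[k] = np.linalg.det(Nk) / delta  # kth numerator determinant over delta
    return x

# The system of Eqs. A.1
A = np.array([[21.0, -9.0, -12.0],
              [-3.0,  6.0,  -2.0],
              [-8.0, -4.0,  22.0]])
b = np.array([-33.0, 3.0, 50.0])
print(cramer(A, b))   # approximately [1. 2. 3.], i.e., i_1 = 1 A, i_2 = 2 A, i_3 = 3 A
```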

For example, the numerator determinants for evaluating i_1, i_2, and i_3 in Eqs. A.1 are

      | -33  -9  -12 |
N_1 = |   3   6   -2 |        (A.7)
      |  50  -4   22 |

      | 21  -33  -12 |
N_2 = | -3    3   -2 |        (A.8)
      | -8   50   22 |

and

      | 21  -9  -33 |
N_3 = | -3   6    3 |        (A.9)
      | -8  -4   50 |

The numerator determinants for the evaluation of v_1, v_2, and v_3 in Eqs. A.3 are

      |  4  -1   0 |
N_1 = | 16   4   3 |        (A.10)
      |  5   0   2 |

      | 2   4   0 |
N_2 = | 0  16   3 |        (A.11)
      | 7   5   2 |

and

      | 2  -1   4 |
N_3 = | 0   4  16 |        (A.12)
      | 7   0   5 |

A.5 The Evaluation of a Determinant

The value of a determinant is found by expanding it in terms of its minors. The minor of any element in a determinant is the determinant that remains after the row and column occupied by the element have been deleted. For example, the minor of the element 6 in Eq. A.7 is

| -33  -12 |
|  50   22 |,

while the minor of the element 22 in Eq. A.7 is

| -33  -9 |
|   3   6 |.

The cofactor of an element is its minor multiplied by the sign-controlling factor

(-1)^(i + j),

where i and j denote the row and column, respectively, occupied by the element. Thus the cofactor of the element 6 in Eq. A.7 is

           | -33  -12 |
(-1)^(2+2) |  50   22 |,

and the cofactor of the element 22 is

           | -33  -9 |
(-1)^(3+3) |   3   6 |.

The cofactor of an element is also referred to as its signed minor. The sign-controlling factor (-1)^(i + j) will equal +1 or -1 depending on whether i + j is an even or odd integer. Thus the algebraic sign of a cofactor alternates between + and - as we move along a row or column. For a 3 × 3 determinant, the plus and minus signs form the checkerboard pattern illustrated here:

| +  -  + |
| -  +  - |
| +  -  + |

A determinant can be expanded along any row or column. Thus the first step in making an expansion is to select a row i or a column j. Once a row or column has been selected, each element in that row or column is multiplied by its signed minor, or cofactor. The value of the determinant is the sum of these products. As an example, let us evaluate the determinant in Eq. A.5 by expanding it along its first column. Following the rules just explained, we write the expansion as

Δ = 21(1)|  6  -2 | - 3(-1)| -9  -12 | - 8(1)| -9  -12 |        (A.13)
         | -4  22 |        | -4   22 |       |  6   -2 |

The 2 × 2 determinants in Eq. A.13 can also be expanded by minors. The minor of an element in a 2 × 2 determinant is a single element. It follows that the expansion reduces to multiplying the upper-left element by the lower-right element and then subtracting from this product the product
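
The cofactor expansion just described can be written as a short recursive routine. This is a minimal sketch in plain Python, assuming expansion along the first row; it mirrors the sign-controlling factor rather than aiming for efficiency.

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        sign = (-1) ** j          # sign-controlling factor (-1)^(i + j) with i = 0
        total += sign * M[0][j] * det(minor)
    return total

print(det([[21, -9, -12], [-3, 6, -2], [-8, -4, 22]]))   # 1146, as in Eq. A.14
```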

of the lower-left element times the upper-right element. Using this observation, we evaluate Eq. A.13 to get

Δ = 21(132 - 8) + 3(-198 - 48) - 8(18 + 72)
  = 2604 - 738 - 720 = 1146.        (A.14)

Had we elected to expand the determinant along the second row of elements, we would have written

Δ = -3(-1)| -9  -12 | + 6(1)| 21  -12 | - 2(-1)| 21  -9 |
          | -4   22 |       | -8   22 |        | -8  -4 |

  = 3(-198 - 48) + 6(462 - 96) + 2(-84 - 72)
  = -738 + 2196 - 312 = 1146.        (A.15)

The numerical values of the determinants N_1, N_2, and N_3 given by Eqs. A.7, A.8, and A.9 are

N_1 = 1146,        (A.16)
N_2 = 2292,        (A.17)
N_3 = 3438.        (A.18)

It follows from Eqs. A.15 through A.18 that the solutions for i_1, i_2, and i_3 in Eqs. A.1 are

i_1 = N_1 / Δ = 1 A,
i_2 = N_2 / Δ = 2 A,        (A.19)
i_3 = N_3 / Δ = 3 A.

We leave you to verify that the solutions for v_1, v_2, and v_3 in Eqs. A.3 are

v_1 =   49 / (-5) = -9.8 V,
v_2 =  118 / (-5) = -23.6 V,        (A.20)
v_3 = -184 / (-5) = 36.8 V.

A.6 Matrices

A system of simultaneous linear equations can also be solved using matrices. In what follows, we briefly review matrix notation, algebra, and terminology.

A matrix is by definition a rectangular array of elements; thus

    | a_11  a_12  a_13  ...  a_1n |
A = | a_21  a_22  a_23  ...  a_2n |        (A.21)
    | ...   ...   ...   ...  ...  |
    | a_m1  a_m2  a_m3  ...  a_mn |

is a matrix with m rows and n columns. We describe A as being a matrix of order m by n, or m × n, where m equals the number of rows and n the number of columns. We always specify the rows first and the columns second. The elements of the matrix a_11, a_12, a_13, ... can be real numbers, complex numbers, or functions. We denote a matrix with a boldface capital letter.

The array in Eq. A.21 is frequently abbreviated by writing

A = [a_ij]_mn,        (A.22)

where a_ij is the element in the ith row and the jth column.

If m = 1, A is called a row matrix, that is,

A = [a_11  a_12  a_13  ...  a_1n].        (A.23)

(An excellent introductory-level text on matrix applications to circuit analysis is Lawrence P. Huelsman, Circuits, Matrices, and Linear Vector Spaces [New York: McGraw-Hill, 1963].)
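
The verification left to the reader is easy to automate. A small NumPy check (ours, not the book's) for the system of Eqs. A.3:

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [0.0,  4.0, 3.0],
              [7.0,  0.0, 2.0]])
b = np.array([4.0, 16.0, 5.0])

print(np.linalg.det(A))       # approximately -5, the characteristic determinant of Eq. A.6
print(np.linalg.solve(A, b))  # approximately [-9.8, -23.6, 36.8] volts (Eq. A.20)
```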

If n = 1, A is called a column matrix, that is,

    | a_11 |
A = | a_21 |        (A.24)
    | a_31 |
    |  ... |
    | a_m1 |

If m = n, A is called a square matrix. For example, if m = n = 3, the square 3 by 3 matrix is

    | a_11  a_12  a_13 |
A = | a_21  a_22  a_23 |        (A.25)
    | a_31  a_32  a_33 |

Also note that we use brackets [ ] to denote a matrix, whereas we use vertical lines | | to denote a determinant. It is important to know the difference. A matrix is a rectangular array of elements. A determinant is a function of a square array of elements. Thus if a matrix A is square, we can define the determinant of A; for a 2 × 2 matrix, det A = a_11 a_22 - a_12 a_21. In the numerical example given in the original text, det A = 30 - 6 = 24.

A.7 Matrix Algebra

The equality, addition, and subtraction of matrices apply only to matrices of the same order. Two matrices are equal if, and only if, their corresponding elements are equal. In other words, A = B if, and only if, a_ij = b_ij for all i and j. For example, the two matrices in Eqs. A.26 and A.27 are equal because a_11 = b_11, a_12 = b_12, a_21 = b_21, and a_22 = b_22:

A = | 36  -2 |        (A.26)
    |  4   6 |

B = | 36  -2 |        (A.27)
    |  4   6 |

If A and B are of the same order, then

C = A + B        (A.28)

implies

c_ij = a_ij + b_ij.        (A.29)

For example, for the two 2 × 2 matrices A and B given in Eqs. A.30 and A.31, the sum C of Eq. A.32 is formed by adding the corresponding elements of A and B.

The equation

D = A - B        (A.33)

implies

d_ij = a_ij - b_ij.        (A.34)

For the matrices in Eqs. A.30 and A.31, the difference D of Eq. A.35 is found by subtracting each element of B from the corresponding element of A. Matrices of the same order are said to be conformable for addition and subtraction.

Multiplying a matrix by a scalar k is equivalent to multiplying each element by the scalar. Thus A = kB if, and only if, a_ij = k b_ij. It should be noted that k may be real or complex. As an example, multiplying the matrix D in Eq. A.35 by 5 multiplies each of its elements by 5, giving the matrix of Eq. A.36.

Matrix multiplication can be performed only if the number of columns in the first matrix is equal to the number of rows in the second matrix. In other words, the product AB requires the number of columns in A to equal the number of rows in B. The order of the resulting matrix will be the number of rows in A by the number of columns in B. Thus if C = AB, where A is of order m × p and B is of order p × n, then C will be a matrix of order m × n. When the number of columns in A equals the number of rows in B, we say A is conformable to B for multiplication. An element in C is given by the formula

c_ij = Σ (k = 1 to p) a_ik b_kj.        (A.37)

The formula given by Eq. A.37 is easy to use if one remembers that matrix multiplication is a row-by-column operation. Hence to get the ith, jth term in C, each element in the ith row of A is multiplied by the corresponding element in the jth column of B, and the resulting products are summed. The following example illustrates the procedure. We are asked to find the matrix C when

A = | 6  3  2 |        (A.38)
    | 1  4  6 |

and

    | 4   2 |
B = | 0   3 |        (A.39)
    | 1  -2 |

First we note that C will be a 2 × 2 matrix and that each element in C will require summing three products.

To find C_11 we multiply the corresponding elements in row 1 of matrix A with the elements in column 1 of matrix B and then sum the products. We can visualize this multiplication and summing process by extracting the corresponding row and column from each matrix and then lining them up element by element. So to find C_11 we have

Row 1 of A:     6  3  2
Column 1 of B:  4  0  1;

therefore

C_11 = 6 × 4 + 3 × 0 + 2 × 1 = 26.

To find C_12 we visualize

Row 1 of A:     6  3   2
Column 2 of B:  2  3  -2;

thus

C_12 = 6 × 2 + 3 × 3 + 2 × (-2) = 17.

For C_21 we have

Row 2 of A:     1  4  6
Column 1 of B:  4  0  1;

and

C_21 = 1 × 4 + 4 × 0 + 6 × 1 = 10.

Finally, for C_22 we have

Row 2 of A:     1  4   6
Column 2 of B:  2  3  -2;

from which

C_22 = 1 × 2 + 4 × 3 + 6 × (-2) = 2.

It follows that

C = AB = | 26  17 |        (A.40)
         | 10   2 |

In general, matrix multiplication is not commutative; that is, AB ≠ BA. As an example, consider the product BA for the matrices in Eqs. A.38 and A.39. The matrix generated by this multiplication is of order 3 × 3, and each term in the resulting matrix requires adding two products. Therefore if D = BA, we have

D = | 26  20   20 |        (A.41)
    |  3  12   18 |
    |  4  -5  -10 |

Obviously, C ≠ D. We leave you to verify the elements in Eq. A.41.

Matrix multiplication is associative and distributive. Thus

(AB)C = A(BC),        (A.42)
A(B + C) = AB + AC,        (A.43)
(A + B)C = AC + BC.        (A.44)
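
The row-by-column rule of Eq. A.37 is a three-line loop in code. The sketch below (plain Python; the helper name matmul is ours) reproduces C = AB and D = BA for the matrices of Eqs. A.38 and A.39.

```python
def matmul(A, B):
    """Row-by-column product of Eq. A.37: c_ij = sum over k of a_ik * b_kj."""
    m, p, n = len(A), len(B), len(B[0])
    assert len(A[0]) == p, "A must be conformable to B for multiplication"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[6, 3, 2],
     [1, 4, 6]]
B = [[4, 2],
     [0, 3],
     [1, -2]]

print(matmul(A, B))   # [[26, 17], [10, 2]]                        (Eq. A.40)
print(matmul(B, A))   # [[26, 20, 20], [3, 12, 18], [4, -5, -10]]  (Eq. A.41)
```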

In Eqs. A.42, A.43, and A.44, we assume that the matrices are conformable for addition and multiplication.

We have already noted that matrix multiplication is not commutative. There are two other properties of multiplication in scalar algebra that do not carry over to matrix algebra.

First, the matrix product AB = 0 does not imply either A = 0 or B = 0. (Note: A matrix is equal to zero when all its elements are zero.) The text illustrates this with a pair of nonzero 2 × 2 matrices A and B whose product AB works out to the zero matrix; hence the product is zero, but neither A nor B is zero.

Second, the matrix equation AB = AC does not imply B = C. The text illustrates this with a 2 × 2 matrix A and two different matrices B and C for which the products AB and AC are identical, so that AB = AC but B ≠ C.

The transpose of a matrix is formed by interchanging the rows and columns. For example, if

    | 1  2  3 |
A = | 4  5  6 |
    | 7  8  9 |

then

      | 1  4  7 |
A^T = | 2  5  8 |
      | 3  6  9 |

The transpose of the sum of two matrices is equal to the sum of the transposes, that is,

(A + B)^T = A^T + B^T.        (A.45)

The transpose of the product of two matrices is equal to the product of the transposes taken in reverse order. In other words,

[AB]^T = B^T A^T.        (A.46)
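
These properties are easy to confirm numerically. The following NumPy sketch (ours) checks Eq. A.46 and the non-commutativity of multiplication using the matrices of Eqs. A.38 and A.39, and then shows a zero product of two nonzero matrices chosen by us for illustration.

```python
import numpy as np

A = np.array([[6, 3, 2],
              [1, 4, 6]])
B = np.array([[4, 2],
              [0, 3],
              [1, -2]])

# [AB]^T = B^T A^T  (Eq. A.46)
print(np.array_equal((A @ B).T, B.T @ A.T))   # True

# Multiplication is not commutative: AB is 2 x 2 while BA is 3 x 3.
print((A @ B).shape, (B @ A).shape)           # (2, 2) (3, 3)

# A product can be zero even though neither factor is zero
# (P and Q are example matrices of our own choosing).
P = np.array([[1, 1], [1, 1]])
Q = np.array([[1, -1], [-1, 1]])
print(P @ Q)                                  # [[0, 0], [0, 0]]
```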

Equation A.46 can be extended to a product of any number of matrices. For example,

[ABCD]^T = D^T C^T B^T A^T.        (A.47)

If A = A^T, the matrix is said to be symmetric. Only square matrices can be symmetric.

A.8 Identity, Adjoint, and Inverse Matrices

An identity matrix is a square matrix where a_ij = 0 for i ≠ j and a_ij = 1 for i = j. In other words, all the elements in an identity matrix are zero except those along the main diagonal, where they are equal to 1. Thus

| 1  0 |      | 1  0  0 |      | 1  0  0  0 |
| 0  1 |,     | 0  1  0 |,     | 0  1  0  0 |
              | 0  0  1 |      | 0  0  1  0 |
                               | 0  0  0  1 |

are all identity matrices. Note that identity matrices are always square. We will use the symbol U for an identity matrix.

The adjoint of a matrix A of order n × n is defined as

adj A = [Δ_ji]_(n × n),        (A.48)

where Δ_ij is the cofactor of a_ij. (See Section A.5 for the definition of a cofactor.) It follows from Eq. A.48 that one can think of finding the adjoint of a square matrix as a two-step process. First construct a matrix made up of the cofactors of A, and then transpose the matrix of cofactors. As an example we will find the adjoint of the 3 × 3 matrix

    |  1  2  3 |
A = |  3  2  1 |
    | -1  1  5 |

The cofactors of the elements in A are

Δ_11 =  (10 - 1) =   9,
Δ_12 = -(15 + 1) = -16,
Δ_13 =  (3 + 2)  =   5,
Δ_21 = -(10 - 3) =  -7,
Δ_22 =  (5 + 3)  =   8,
Δ_23 = -(1 + 2)  =  -3,
Δ_31 =  (2 - 6)  =  -4,
Δ_32 = -(1 - 9)  =   8,
Δ_33 =  (2 - 6)  =  -4.

The matrix of cofactors is

    |   9  -16   5 |
B = |  -7    8  -3 |
    |  -4    8  -4 |

It follows that the adjoint of A is

              |   9   -7  -4 |
adj A = B^T = | -16    8   8 |
              |   5   -3  -4 |

One can check the arithmetic of finding the adjoint of a matrix by using the theorem

adj A · A = det A · U.        (A.49)

Equation A.49 tells us that the adjoint of A times A equals the determinant of A times the identity matrix. For our example,

det A = 1(9) + 3(-7) - 1(-4) = -8.

If we let C = adj A · A and use the technique illustrated in Section A.7, we find the elements of C to be

c_11 =   9 - 21 +  4 = -8,
c_12 =  18 - 14 -  4 =  0,
c_13 =  27 -  7 - 20 =  0,
c_21 = -16 + 24 -  8 =  0,
c_22 = -32 + 16 +  8 = -8,
c_23 = -48 +  8 + 40 =  0,
c_31 =   5 -  9 +  4 =  0,
c_32 =  10 -  6 -  4 =  0,
c_33 =  15 -  3 - 20 = -8.

Therefore

    | -8   0   0 |        | 1  0  0 |
C = |  0  -8   0 |  =  -8 | 0  1  0 |  =  det A · U.
    |  0   0  -8 |        | 0  0  1 |

A square matrix A has an inverse, denoted as A^-1, if

A^-1 A = A A^-1 = U.        (A.50)
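
The adjoint construction and the check of Eq. A.49 can be scripted directly. This NumPy sketch (the helper name adjoint is ours) rebuilds adj A from cofactors for the 3 × 3 example above and confirms that adj A times A equals det A times the identity.

```python
import numpy as np

def adjoint(A):
    """Adjoint (Eq. A.48): transpose of the matrix of cofactors."""
    n = A.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[1.0, 2.0, 3.0],
              [3.0, 2.0, 1.0],
              [-1.0, 1.0, 5.0]])

adjA = adjoint(A)
print(np.round(adjA))        # [[9 -7 -4], [-16 8 8], [5 -3 -4]]
print(np.round(adjA @ A))    # -8 times the identity, confirming Eq. A.49
```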

Equation A.50 tells us that a matrix either premultiplied or postmultiplied by its inverse generates the identity matrix U. For the inverse matrix to exist, it is necessary that the determinant of A not equal zero. Only square matrices have inverses, and the inverse is also square. A formula for finding the inverse of a matrix is

A^-1 = adj A / det A.        (A.51)

The formula in Eq. A.51 becomes very cumbersome if A is of an order larger than 3 by 3. (You can learn alternative methods for finding the inverse in any introductory text on matrix theory. See, for example, Franz E. Hohn, Elementary Matrix Algebra [New York: Macmillan, 1973].) Today the digital computer eliminates the drudgery of having to find the inverse of a matrix in numerical applications of matrix algebra.

It follows from Eq. A.51 that the inverse of the matrix A in the previous example is

A^-1 = (-1/8) |   9   -7  -4 |   | -1.125   0.875   0.5 |
              | -16    8   8 | = |  2      -1      -1   |
              |   5   -3  -4 |   | -0.625   0.375   0.5 |

You should verify that A^-1 A = A A^-1 = U.

A.9 Partitioned Matrices

It is often convenient in matrix manipulations to partition a given matrix into submatrices. The original algebraic operations are then carried out in terms of the submatrices. In partitioning a matrix, the placement of the partitions is completely arbitrary, with the one restriction that a partition must dissect the entire matrix. In selecting the partitions, it is also necessary to make sure the submatrices are conformable to the mathematical operations in which they are involved.

For example, consider using submatrices to find the product C = AB for the 5 × 5 matrix A and the 5 × 1 column matrix B of the example.
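
Combining Eq. A.51 with the adjoint routine sketched in the previous section gives a pedagogical (not numerically recommended) inverse. The code below is ours and assumes the adjoint helper from the earlier sketch is available.

```python
import numpy as np

def inverse(A):
    """Inverse by Eq. A.51: A^-1 = adj A / det A (requires det A != 0)."""
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is singular; no inverse exists")
    return adjoint(A) / d      # adjoint() is the helper sketched in Section A.8

A = np.array([[1.0, 2.0, 3.0],
              [3.0, 2.0, 1.0],
              [-1.0, 1.0, 5.0]])
Ainv = inverse(A)
print(Ainv)                    # [[-1.125 0.875 0.5], [2 -1 -1], [-0.625 0.375 0.5]]
print(np.round(Ainv @ A))      # the identity matrix U
```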

Assume that we decide to partition B into two submatrices, B_1 and B_2; thus

B = | B_1 |
    | B_2 |

Now since B has been partitioned into a two-row column matrix, A must be partitioned into at least a two-column matrix; otherwise the multiplication cannot be performed. The location of the vertical partitions of the A matrix will depend on the definitions of B_1 and B_2. For example, if B_1 is taken as the first three rows of B and B_2 as the last two rows, then A_1 must contain three columns and A_2 must contain two columns; the partitioning of A and B shown in Eq. A.52 is then acceptable for executing the product AB. If, on the other hand, we partition the B matrix so that B_1 contains its first two rows and B_2 its last three rows, then A_1 must contain two columns and A_2 must contain three columns; in this case the partitioning shown in Eq. A.53 is acceptable in executing the product C = AB.

For purposes of discussion, we will focus on the partitioning given in Eq. A.52 and leave you to verify that the partitioning in Eq. A.53 leads to the same result. From Eq. A.52 we can write

C = [A_1  A_2] | B_1 | = A_1 B_1 + A_2 B_2.        (A.54)
               | B_2 |

It follows from Eqs. A.52 and A.54 that C is found by computing the two 5 × 1 products A_1 B_1 and A_2 B_2 and adding them element by element to obtain the 5 × 1 matrix C.

The A matrix could also be partitioned horizontally once the vertical partitioning is made consistent with the multiplication operation. In this simple problem, the horizontal partitions can be made at the discretion of the analyst. Therefore C could also be evaluated using the partitioning shown in Eq. A.55, in which A is divided both vertically and horizontally into the four submatrices A_11 (2 × 3), A_12 (2 × 2), A_21 (3 × 3), and A_22 (3 × 2), while B is partitioned into B_1 (three rows) and B_2 (two rows) as before.

From Eq. A.55 it follows that

C = | A_11  A_12 | | B_1 | = | C_1 |        (A.56)
    | A_21  A_22 | | B_2 |   | C_2 |

where

C_1 = A_11 B_1 + A_12 B_2,
C_2 = A_21 B_1 + A_22 B_2.

You should verify that carrying out these submatrix products and sums for the matrices of the example yields the same 5 × 1 matrix C as before. We note in passing that the partitioning in Eqs. A.52 and A.55 is conformable with respect to addition.
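
Partitioned multiplication is easy to demonstrate numerically. The sketch below (NumPy; the matrices are stand-ins of our own choosing, since the example's entries are not reproduced here) checks that A_1 B_1 + A_2 B_2 equals the ordinary product AB for the partitioning of Eq. A.52.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(5, 5))   # stand-in 5 x 5 matrix (not the text's example)
B = rng.integers(-3, 4, size=(5, 1))   # stand-in 5 x 1 column matrix

# Vertical partition of A into A1 (first three columns) and A2 (last two columns),
# matching the partition of B into B1 (first three rows) and B2 (last two rows).
A1, A2 = A[:, :3], A[:, 3:]
B1, B2 = B[:3, :], B[3:, :]

C_block = A1 @ B1 + A2 @ B2            # Eq. A.54
print(np.array_equal(C_block, A @ B))  # True: the partitioned product equals the full product
```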

A.10 Applications

The following examples demonstrate some applications of matrix algebra in circuit analysis.

Example A.1

Use the matrix method to solve for the node voltages v_1 and v_2 in the pair of node-voltage equations derived in Chapter 4.

Solution

The first step is to rewrite the equations in matrix notation. Collecting the coefficients of v_1 and v_2 and at the same time shifting the constant terms to the right-hand side of the equations gives us

 1.7v_1 - 0.5v_2 = 10,
-0.5v_1 + 0.6v_2 =  2.        (A.57)

It follows that in matrix notation, Eq. A.57 becomes

|  1.7  -0.5 | | v_1 |   | 10 |
| -0.5   0.6 | | v_2 | = |  2 |,        (A.58)

or

AV = I,        (A.59)

where

A = |  1.7  -0.5 |,   V = | v_1 |,   I = | 10 |.
    | -0.5   0.6 |        | v_2 |        |  2 |

To find the elements of the V matrix, we premultiply both sides of Eq. A.59 by the inverse of A; thus

A^-1 A V = A^-1 I.        (A.60)

Equation A.60 reduces to

U V = A^-1 I,        (A.61)

or

V = A^-1 I.        (A.62)

It follows from Eq. A.62 that the solutions for v_1 and v_2 are obtained by solving for the matrix product A^-1 I. To find the inverse of A, we first find the cofactors of A. Thus

Δ_11 = (-1)^2 (0.6)  = 0.6,
Δ_12 = (-1)^3 (-0.5) = 0.5,
Δ_21 = (-1)^3 (-0.5) = 0.5,
Δ_22 = (-1)^4 (1.7)  = 1.7.        (A.63)

The matrix of cofactors is

B = | 0.6  0.5 |,        (A.64)
    | 0.5  1.7 |

and the adjoint of A is

adj A = B^T = | 0.6  0.5 |.        (A.65)
              | 0.5  1.7 |

The determinant of A is

det A = (1.7)(0.6) - (0.25) = 0.77.        (A.66)

From Eqs. A.65 and A.66, we can write the inverse of the coefficient matrix, that is,

A^-1 = (1/0.77) | 0.6  0.5 |.        (A.67)
                | 0.5  1.7 |

Now the product A^-1 I is found:

A^-1 I = (1/0.77) | 0.6  0.5 | | 10 | = (1/0.77) | 7.0 | = |  9.09 |.        (A.68)
                  | 0.5  1.7 | |  2 |            | 8.4 |   | 10.91 |

It follows directly that

| v_1 |   |  9.09 |
| v_2 | = | 10.91 |,        (A.69)

or v_1 = 9.09 V and v_2 = 10.91 V.
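
A direct numerical check of Example A.1 (a NumPy sketch of ours, not part of the text):

```python
import numpy as np

A = np.array([[1.7, -0.5],
              [-0.5, 0.6]])
I = np.array([10.0, 2.0])

V = np.linalg.inv(A) @ I   # V = A^-1 I, Eq. A.62
print(V)                   # approximately [9.09, 10.91] -> v_1 = 9.09 V, v_2 = 10.91 V
```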

Example A.2

Use the matrix method to find the three mesh currents in the mesh-current circuit analyzed in Chapter 4.

Solution

The mesh-current equations that describe the circuit, and the constraint equation imposed by the current-controlled voltage source, are given in Chapter 4. When the constraint equation is substituted into Eq. 4.34, the following set of equations evolves:

25i_1 -  5i_2 - 20i_3 = 50,
-5i_1 + 10i_2 -  4i_3 =  0,        (A.70)
-5i_1 -  4i_2 +  9i_3 =  0.

In matrix notation, Eqs. A.70 reduce to

AI = V,        (A.71)

where

    | 25  -5  -20 |       | i_1 |       | 50 |
A = | -5  10   -4 |,  I = | i_2 |,  V = |  0 |.
    | -5  -4    9 |       | i_3 |       |  0 |

It follows from Eq. A.71 that the solution for I is

I = A^-1 V.        (A.72)

We find the inverse of A by using the relationship

A^-1 = adj A / det A.        (A.73)

To find the adjoint of A, we first calculate the cofactors of A. Thus

Δ_11 = (-1)^2 (90 - 16)    =  74,
Δ_12 = (-1)^3 (-45 - 20)   =  65,
Δ_13 = (-1)^4 (20 + 50)    =  70,
Δ_21 = (-1)^3 (-45 - 80)   = 125,
Δ_22 = (-1)^4 (225 - 100)  = 125,
Δ_23 = (-1)^5 (-100 - 25)  = 125,
Δ_31 = (-1)^4 (20 + 200)   = 220,
Δ_32 = (-1)^5 (-100 - 100) = 200,
Δ_33 = (-1)^6 (250 - 25)   = 225.

The cofactor matrix is

    |  74   65   70 |
B = | 125  125  125 |,        (A.74)
    | 220  200  225 |

from which we can write the adjoint of A:

adj A = B^T = | 74  125  220 |.        (A.75)
              | 65  125  200 |
              | 70  125  225 |

The determinant of A is

det A = 25(90 - 16) + 5(-45 - 80) - 5(20 + 200) = 125.        (A.76)

It follows from Eq. A.73 that

A^-1 = (1/125) | 74  125  220 |.
               | 65  125  200 |
               | 70  125  225 |

The solution for I is

I = (1/125) | 74  125  220 | | 50 |   | 29.6 |
            | 65  125  200 | |  0 | = | 26.0 |.        (A.77)
            | 70  125  225 | |  0 |   | 28.0 |

The mesh currents follow directly from Eq. A.77. Thus

| i_1 |   | 29.6 |
| i_2 | = | 26.0 |,        (A.78)
| i_3 |   | 28.0 |

or i_1 = 29.6 A, i_2 = 26 A, and i_3 = 28 A.

Example A.3 illustrates the application of the matrix method when the elements of the matrix are complex numbers.
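
The same check for Example A.2 (again a NumPy sketch of ours):

```python
import numpy as np

A = np.array([[25.0, -5.0, -20.0],
              [-5.0, 10.0, -4.0],
              [-5.0, -4.0, 9.0]])
V = np.array([50.0, 0.0, 0.0])

print(np.linalg.det(A))       # approximately 125 (Eq. A.76)
print(np.linalg.solve(A, V))  # [29.6, 26.0, 28.0] -> the mesh currents in amperes
```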

Example A.3

Use the matrix method to find the phasor mesh currents I_1 and I_2 in the two-mesh circuit containing a current-controlled voltage source.

Solution

Summing the voltages around mesh 1 generates the equation

(1 + j2)I_1 + (12 - j16)(I_1 - I_2) = 150∠0°.        (A.79)

Summing the voltages around mesh 2 produces the equation

(12 - j16)(I_2 - I_1) + (1 + j3)I_2 + 39 I_x = 0.        (A.80)

The current controlling the dependent voltage source is

I_x = I_1 - I_2.        (A.81)

After substituting Eq. A.81 into Eq. A.80, the equations are put into a matrix format by first collecting, in each equation, the coefficients of I_1 and I_2; thus

(13 - j14)I_1 - (12 - j16)I_2 = 150∠0°,
(27 + j16)I_1 - (26 + j13)I_2 = 0.        (A.82)

Now, using matrix notation, Eqs. A.82 are written

AI = V,        (A.83)

where

A = | 13 - j14   -(12 - j16) |,   I = | I_1 |,   V = | 150∠0° |.
    | 27 + j16   -(26 + j13) |        | I_2 |        |   0    |

It follows from Eq. A.83 that

I = A^-1 V.        (A.84)

The inverse of the coefficient matrix A is found using Eq. A.73. In this case, the cofactors of A are

Δ_11 = (-1)^2 [-(26 + j13)] = -26 - j13,
Δ_12 = (-1)^3 (27 + j16)    = -27 - j16,
Δ_21 = (-1)^3 [-(12 - j16)] =  12 - j16,
Δ_22 = (-1)^4 (13 - j14)    =  13 - j14.

The cofactor matrix B is

B = | -26 - j13   -27 - j16 |.        (A.85)
    |  12 - j16    13 - j14 |

The adjoint of A is

adj A = B^T = | -26 - j13    12 - j16 |.        (A.86)
              | -27 - j16    13 - j14 |

The determinant of A is

det A = | 13 - j14   -(12 - j16) |
        | 27 + j16   -(26 + j13) |
      = -(13 - j14)(26 + j13) + (12 - j16)(27 + j16) = 60 - j45.        (A.87)

The inverse of the coefficient matrix is

A^-1 = | -26 - j13    12 - j16 |  /  (60 - j45).        (A.88)
       | -27 - j16    13 - j14 |

Equation A.88 can be simplified by multiplying the numerator and denominator by the conjugate 60 + j45, since (60 - j45)(60 + j45) = 5625; the result is

A^-1 = (1/375) | -65 - j130    96 - j28 |.        (A.89)
               | -60 - j145    94 - j17 |

Substituting Eq. A.89 into Eq. A.84 gives us

| I_1 |           | -65 - j130    96 - j28 | | 150∠0° |   | -26 - j52 |
| I_2 | = (1/375) | -60 - j145    94 - j17 | |   0    | = | -24 - j58 |.        (A.90)

It follows from Eq. A.90 that

I_1 = -26 - j52 = 58.14∠-116.57° A,
I_2 = -24 - j58 = 62.77∠-112.48° A.        (A.91)

In the first three examples, the matrix elements have been numbers: real numbers in Examples A.1 and A.2, and complex numbers in Example A.3. It is also possible for the elements to be functions. Example A.4 illustrates the use of matrix algebra in a circuit problem where the elements in the coefficient matrix are functions.
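
NumPy handles the complex arithmetic of Example A.3 directly; the following sketch (ours) reproduces the phasor currents.

```python
import numpy as np

A = np.array([[13 - 14j, -(12 - 16j)],
              [27 + 16j, -(26 + 13j)]])
V = np.array([150 + 0j, 0 + 0j])

I = np.linalg.solve(A, V)
print(I)                                   # approximately [-26-52j, -24-58j]
print(np.abs(I), np.degrees(np.angle(I)))  # magnitudes ~[58.14, 62.77] A, angles ~[-116.57, -112.48] degrees
```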

Example A.4

Use the matrix method to derive expressions for the node voltages V_1 and V_2 in the circuit in Fig. A.1.

Solution

Summing the currents away from nodes 1 and 2 generates the following set of equations:

(V_1 - V_g)/R + V_1 sC + (V_1 - V_2)sC = 0,
V_2/R + (V_2 - V_1)sC + (V_2 - V_g)sC = 0.        (A.92)

Letting G = 1/R and collecting the coefficients of V_1 and V_2 gives us

(G + 2sC)V_1 - sC V_2 = G V_g,
-sC V_1 + (G + 2sC)V_2 = sC V_g.        (A.93)

Writing Eq. A.93 in matrix notation yields

AV = I,        (A.94)

where

A = | G + 2sC    -sC     |,   V = | V_1 |,   I = | G V_g  |.
    |  -sC      G + 2sC  |        | V_2 |        | sC V_g |

It follows from Eq. A.94 that

V = A^-1 I.        (A.95)

As before, we find the inverse of the coefficient matrix by first finding the adjoint of A and the determinant of A. The cofactors of A are

Δ_11 = (-1)^2 [G + 2sC] = G + 2sC,
Δ_12 = (-1)^3 (-sC) = sC,
Δ_21 = (-1)^3 (-sC) = sC,
Δ_22 = (-1)^4 [G + 2sC] = G + 2sC.

The cofactor matrix is

B = | G + 2sC    sC     |,        (A.96)
    |   sC      G + 2sC |

and therefore the adjoint of the coefficient matrix is

adj A = B^T = | G + 2sC    sC     |.        (A.97)
              |   sC      G + 2sC |

The determinant of A is

det A = | G + 2sC    -sC     | = G^2 + 4sCG + 3s^2C^2.        (A.98)
        |  -sC      G + 2sC  |

Figure A.1  The circuit for Example A.4.

The inverse of the coefficient matrix is

A^-1 = | G + 2sC    sC     |  /  (G^2 + 4sCG + 3s^2C^2).        (A.99)
       |   sC      G + 2sC |

It follows from Eq. A.95 that

| V_1 |   | G + 2sC    sC     | | G V_g  |
| V_2 | = |   sC      G + 2sC | | sC V_g |  /  (G^2 + 4sCG + 3s^2C^2).        (A.100)

Carrying out the matrix multiplication called for in Eq. A.100 gives

| V_1 |                                 | (G^2 + 2sCG + s^2C^2)V_g |
| V_2 | = [1/(G^2 + 4sCG + 3s^2C^2)]    | (2sCG + 2s^2C^2)V_g      |.        (A.101)

Now the expressions for V_1 and V_2 can be written directly from Eq. A.101; thus

V_1 = (G^2 + 2sCG + s^2C^2)V_g / (G^2 + 4sCG + 3s^2C^2),        (A.102)

and

V_2 = 2(sCG + s^2C^2)V_g / (G^2 + 4sCG + 3s^2C^2).        (A.103)
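
When the matrix elements are functions, a symbolic algebra package can carry out Eq. A.95 literally. This SymPy sketch is our addition (SymPy is not mentioned in the text) and confirms Eqs. A.102 and A.103 up to algebraic rearrangement.

```python
import sympy as sp

s, C, G, Vg = sp.symbols('s C G V_g')

A = sp.Matrix([[G + 2*s*C, -s*C],
               [-s*C, G + 2*s*C]])
I = sp.Matrix([G*Vg, s*C*Vg])

V = A.inv() * I                      # V = A^-1 I, Eq. A.95
den = G**2 + 4*s*C*G + 3*s**2*C**2   # det A, Eq. A.98

# Both differences simplify to zero, confirming Eqs. A.102 and A.103.
print(sp.simplify(V[0] - (G**2 + 2*s*C*G + s**2*C**2)*Vg/den))   # 0
print(sp.simplify(V[1] - 2*(s*C*G + s**2*C**2)*Vg/den))          # 0
```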

In our final example, we illustrate how matrix algebra can be used to analyze the cascade connection of two two-port circuits.

Example A.5

Show by means of matrix algebra how the input variables V_1 and I_1 can be described as functions of the output variables V'_2 and I'_2 in a cascade connection of two two-port circuits.

Solution

We begin by expressing, in matrix notation, the relationship between the input and output variables of each two-port circuit. Thus

| V_1 |   | a_11  -a_12 | | V_2 |
| I_1 | = | a_21  -a_22 | | I_2 |,        (A.104)

and

| V'_1 |   | a'_11  -a'_12 | | V'_2 |
| I'_1 | = | a'_21  -a'_22 | | I'_2 |.        (A.105)

Now the cascade connection imposes the constraints

V_2 = V'_1   and   I_2 = -I'_1.        (A.106)

These constraint relationships are substituted into Eq. A.104. Thus

| V_1 |   | a_11  -a_12 | |  V'_1 |   | a_11  a_12 | | V'_1 |
| I_1 | = | a_21  -a_22 | | -I'_1 | = | a_21  a_22 | | I'_1 |.        (A.107)

The relationship between the input variables (V_1, I_1) and the output variables (V'_2, I'_2) is obtained by substituting Eq. A.105 into Eq. A.107. The result is

| V_1 |   | a_11  a_12 | | a'_11  -a'_12 | | V'_2 |
| I_1 | = | a_21  a_22 | | a'_21  -a'_22 | | I'_2 |.        (A.108)

After multiplying the coefficient matrices, we have

| V_1 |   | (a_11 a'_11 + a_12 a'_21)   -(a_11 a'_12 + a_12 a'_22) | | V'_2 |
| I_1 | = | (a_21 a'_11 + a_22 a'_21)   -(a_21 a'_12 + a_22 a'_22) | | I'_2 |.        (A.109)

Note that Eq. A.109 corresponds to writing the two-port cascade equations of the text in matrix form.
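
The cascade rule of Eq. A.109 is just a matrix product, so chaining any number of two-ports is mechanical. The sketch below (NumPy; the parameter values are made up for illustration) implements it.

```python
import numpy as np

def cascade(a, a_prime):
    """Overall a-parameters of two cascaded two-ports (Eqs. A.107-A.109).

    Each argument is the 2 x 2 matrix [[a11, -a12], [a21, -a22]] relating
    [V1, I1] to [V2, I2]; the returned matrix relates the cascade's input
    variables to the second two-port's output variables.
    """
    flip = np.diag([1.0, -1.0])     # undo the output-side sign, as in Eq. A.107
    return (a @ flip) @ a_prime

# Hypothetical parameter values, only to exercise the function.
a1 = np.array([[1.2, -3.0],
               [0.1, -1.5]])
a2 = np.array([[0.9, -2.0],
               [0.2, -1.1]])
print(cascade(a1, a2))
```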
