Introduction to Matrices
Tom Davis
tomrdavis@earthlink.net
October 25

1 Definitions

A matrix (plural: matrices) is simply a rectangular array of things. For now, we'll assume the things are numbers, but as you go on in mathematics, you'll find that matrices can be arrays of very general objects. Pretty much all that's required is that you be able to add, subtract, and multiply the things.

Here are some examples of matrices. Notice that it is sometimes useful to have variables as entries, as long as the variables represent the same sorts of things as appear in the other slots. In our examples, we'll always assume that all the slots are filled with numbers. All our examples contain only real numbers, but matrices of complex numbers are very common.

\[
\begin{bmatrix} 1 & 3 & 2 \\ 4 & 5 & 0 \\ 6 & 6 & 8 \end{bmatrix}
\quad
\begin{bmatrix} 1 & 0 & x & 2 \\ 7 & y & 3 & 5 \end{bmatrix}
\quad
\begin{bmatrix} 1 \\ 3 \end{bmatrix}
\quad
\begin{bmatrix} 4 & 2 & 1 \end{bmatrix}
\]

The first example is a square $3\times 3$ matrix; the next is a $2\times 4$ matrix (2 rows and 4 columns — if we talk about a matrix that is $m\times n$, we mean it has $m$ rows and $n$ columns). The final two examples consist of a single column matrix and a single row matrix. These final two examples are often called vectors — the first is called a column vector and the second, a row vector. We'll use only column vectors in this introduction.

Often we are interested in representing a general $m\times n$ matrix with variables in every location, and that is usually done as follows:

\[
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\]

The number in row $i$ and column $j$ is represented by $a_{ij}$, where $1 \le i \le m$ and $1 \le j \le n$. Sometimes, when there is no question about the dimensions of a matrix, the entire matrix can simply be referred to as $A = (a_{ij})$.

1.1 Addition and Subtraction of Matrices

As long as you can add and subtract the things in your matrices, you can add and subtract the matrices themselves. The addition and subtraction occurs in the obvious way — element by element. Here are a couple of examples:

\[
\begin{bmatrix} 1 & 2 & 0 \\ 3 & -1 & 4 \\ 2 & 2 & 1 \end{bmatrix}
+
\begin{bmatrix} 4 & 1 & 2 \\ 0 & 5 & -2 \\ 1 & 3 & 3 \end{bmatrix}
=
\begin{bmatrix} 5 & 3 & 2 \\ 3 & 4 & 2 \\ 3 & 5 & 4 \end{bmatrix}
\qquad
\begin{bmatrix} 1 & 2 & 0 \\ 3 & -1 & 4 \\ 2 & 2 & 1 \end{bmatrix}
-
\begin{bmatrix} 4 & 1 & 2 \\ 0 & 5 & -2 \\ 1 & 3 & 3 \end{bmatrix}
=
\begin{bmatrix} -3 & 1 & -2 \\ 3 & -6 & 6 \\ 1 & -1 & -2 \end{bmatrix}
\]
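Element-by-element addition and subtraction are easy to express in code. Here is a minimal sketch using plain Python lists of rows; the helper names `mat_add` and `mat_sub` are our own, not from the text:

```python
def mat_add(A, B):
    # Both matrices must have the same number of rows and columns.
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 0], [3, -1, 4], [2, 2, 1]]
B = [[4, 1, 2], [0, 5, -2], [1, 3, 3]]
S = mat_add(A, B)   # element-by-element sum
D = mat_sub(A, B)   # element-by-element difference
```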
To find what goes in row $i$ and column $j$ of the sum or difference, just add or subtract the entries in row $i$ and column $j$ of the matrices being added or subtracted. In order to make sense, both of the matrices in the sum or difference must have the same number of rows and columns. It makes no sense, for example, to add a $2\times 4$ matrix to a $3\times 4$ matrix.

1.2 Multiplication of Matrices

When you add or subtract matrices, the two matrices that you add or subtract must have the same number of rows and the same number of columns. In other words, both must have the same shape. For matrix multiplication, all that is required is that the number of columns of the first matrix be the same as the number of rows of the second matrix. In other words, you can multiply an $m\times n$ matrix by an $n\times p$ matrix, with the $m\times n$ matrix on the left and the $n\times p$ matrix on the right.

The example on the left below represents a legal multiplication, since there are three columns in the left multiplicand and three rows in the right one; the example on the right doesn't make sense — the left matrix has three columns, but the right one has only 2 rows. If the matrices on the right were written in the reverse order, with the $2\times 3$ matrix on the left, it would represent a valid matrix multiplication.

\[
\begin{bmatrix} 1 & 3 & 2 \\ 4 & 5 & 0 \\ 6 & 6 & 8 \end{bmatrix}
\begin{bmatrix} 9 & 11 \\ 5 & 15 \\ 4 & 9 \end{bmatrix}
\qquad
\begin{bmatrix} 1 & 3 & 2 \\ 4 & 5 & 0 \\ 6 & 6 & 8 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 2 \\ 5 & 1 & 1 \end{bmatrix}
\]

So now we know what shapes of matrices it is legal to multiply, but how do we do the actual multiplication? Here is the method: if we are multiplying an $m\times n$ matrix by an $n\times p$ matrix, the result will be an $m\times p$ matrix. The element of the product in row $i$ and column $j$ is gotten by multiplying term-wise all the elements in row $i$ of the matrix on the left by all the elements in column $j$ of the matrix on the right and adding them together.
Here is an example:

\[
\begin{bmatrix} 1 & 3 & 2 \\ 4 & 5 & 0 \\ 6 & 6 & 8 \end{bmatrix}
\begin{bmatrix} 9 & 11 \\ 5 & 15 \\ 4 & 9 \end{bmatrix}
=
\begin{bmatrix} 32 & 74 \\ 61 & 119 \\ 116 & 228 \end{bmatrix}
\]

To find what goes in the first row and first column of the product, take the numbers from the first row of the matrix on the left: $(1, 3, 2)$, and multiply them, in order, by the numbers in the first column of the matrix on the right: $(9, 5, 4)$. Add the results: $1\cdot 9 + 3\cdot 5 + 2\cdot 4 = 9 + 15 + 8 = 32$. To get the 228 in the third row and second column of the product, use the numbers in the third row of the left matrix: $(6, 6, 8)$, and the numbers in the second column of the right matrix: $(11, 15, 9)$, to get $6\cdot 11 + 6\cdot 15 + 8\cdot 9 = 66 + 90 + 72 = 228$. Check your understanding by verifying that the other elements in the product matrix are correct.

In general, if we multiply a general $m\times n$ matrix by a general $n\times p$ matrix, we get an $m\times p$ matrix as follows:

\[
\begin{bmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix}
b_{11} & \cdots & b_{1p} \\
\vdots & \ddots & \vdots \\
b_{n1} & \cdots & b_{np}
\end{bmatrix}
=
\begin{bmatrix}
c_{11} & \cdots & c_{1p} \\
\vdots & \ddots & \vdots \\
c_{m1} & \cdots & c_{mp}
\end{bmatrix}
\]
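The row-times-column rule translates directly into code. This is a minimal sketch (the helper name `mat_mul` is our own) that multiplies a $3\times 3$ matrix by a $3\times 2$ matrix:

```python
def mat_mul(A, B):
    # The number of columns of A must equal the number of rows of B.
    n = len(B)
    assert all(len(row) == n for row in A)
    p = len(B[0])
    # Entry (i, j) of the product: multiply row i of A term-wise by
    # column j of B and add the results.
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[1, 3, 2], [4, 5, 0], [6, 6, 8]]
B = [[9, 11], [5, 15], [4, 9]]
C = mat_mul(A, B)   # a 3x3 times a 3x2 gives a 3x2
```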
Then we can write $c_{ij}$ (the number in row $i$, column $j$) as:

\[
c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.
\]

1.3 Square Matrices and Column Vectors

Although everything above has been stated in terms of general rectangular matrices, for the rest of this tutorial we'll consider only two kinds of matrices (but of any dimension): square matrices, where the number of rows is equal to the number of columns, and column matrices, where there is only one column. These column matrices are often called vectors, and there are many applications where they correspond exactly to what you commonly use as sets of coordinates for points in space. In the two-dimensional $x$-$y$ plane, the coordinates $(1, 3)$ represent a point that is one unit to the right of the origin (in the direction of the $x$-axis), and three units above the origin (in the direction of the $y$-axis). That same point can be written as the following column vector:

\[ \begin{bmatrix} 1 \\ 3 \end{bmatrix} \]

If you wish to work in three dimensions, you'll need three coordinates to locate a point relative to the (three-dimensional) origin — an $x$-coordinate, a $y$-coordinate, and a $z$-coordinate. So the point you'd normally write as $(x, y, z)$ can be represented by the column vector:

\[ \begin{bmatrix} x \\ y \\ z \end{bmatrix} \]

Quite often we will work with a combination of square matrices and column matrices, and in that case, if the square matrix has dimensions $n\times n$, the column vectors will have dimension $n\times 1$ ($n$ rows and 1 column).¹

1.4 Properties of Matrix Arithmetic

Matrix arithmetic (matrix addition, subtraction, and multiplication) satisfies many, but not all, of the properties of normal arithmetic that you are used to. All of the properties below can be formally proved, and it's not too difficult, but we will not do so here.

In what follows, we'll assume that different matrices are represented by upper-case letters: $A$, $B$, $C$, and that column vectors are represented by lower-case letters: $v$, $w$. We will further assume that all the matrices are square matrices or column vectors, and that all are the same size, either $n\times n$ or $n\times 1$.
Further, we'll assume that the matrices contain numbers (real or complex). Most of the properties listed below apply equally well to non-square matrices, assuming that the dimensions make the various multiplications and additions/subtractions valid.

Perhaps the first thing to notice is that we can always multiply two $n\times n$ matrices, and we can multiply an $n\times n$ matrix by a column vector, but we cannot multiply a column vector by the matrix, nor a column vector by another. In other words, of the three matrix multiplications below, only the first one makes sense. Be sure you understand why.

\[ Mv \qquad vM \qquad vw \]

¹We could equally well use row vectors to correspond to coordinates, and this convention is used in many places. However, the use of column matrices for vectors is more common.
Finally, an extremely useful matrix is called the identity matrix — a square matrix that is filled with zeroes except for ones in the diagonal elements (those having the same row and column number). Here, for example, is the $4\times 4$ identity matrix:

\[
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

The identity matrix is usually called $I$ for any size square matrix. Usually you can tell the dimensions of the identity matrix from the surrounding context.

- Associative laws: $(A + B) + C = A + (B + C)$, $(AB)C = A(BC)$, $(AB)v = A(Bv)$.
- Commutative laws for addition: $A + B = B + A$, $v + w = w + v$.
- Distributive laws: $A(B + C) = AB + AC$, $(A + B)C = AC + BC$, $A(v + w) = Av + Aw$, $(A + B)v = Av + Bv$.
- The identity matrix: $AI = IA = A$, $Iv = v$.

Probably the most important thing to notice about the laws above is one that's missing — multiplication of matrices is not in general commutative. It is easy to find examples of matrices $A$ and $B$ where $AB \ne BA$. In fact, matrices almost never commute under multiplication. Here's an example of a pair that don't:

\[
\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}
=
\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}
\qquad
\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}
\]

So the order of multiplication is very important; that's why you may have noticed the care that has been taken so far in describing multiplication of matrices in terms of the matrix on the left and the matrix on the right.

The associative laws above are extremely useful, and to take one simple example, consider computer graphics. As we'll see later, operations like rotation, translation, scaling, perspective, and so on can all be represented by a matrix multiplication. Suppose you wish to rotate all the vectors in your drawing and then to translate the results. Suppose $R$ and $T$ are the rotation and translation matrices that do these jobs. If your picture has a million points in it, you can take each of those million points $v$ and rotate them, calculating $Rv$ for each vector $v$. Then the result of that rotation can be translated: $T(Rv)$. So in total, there are two million matrix multiplications to make your picture.
But the associative law tells us we can just multiply $T$ by $R$ once to get the matrix $TR$, and then multiply all million points by $TR$ to get $(TR)v$. So all in all, there are only 1,000,001 matrix multiplications — one to produce $TR$ and a million multiplications of $TR$ by the individual vectors. That's quite a savings of time.

The other thing to notice is that the identity matrix behaves just like 1 under multiplication — if you multiply any number by 1, it is unchanged; if you multiply any matrix by the identity matrix, it is unchanged.
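Both facts — associativity holds, commutativity does not — can be checked numerically. A small self-contained sketch (the helper name `mat_mul` is our own):

```python
def mat_mul(A, B):
    # Row-times-column multiplication of two matrices.
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
v = [[3], [4]]            # a column vector

AB = mat_mul(A, B)        # AB and BA differ: no commutativity
BA = mat_mul(B, A)

# Associativity: applying B then A to v, one matrix at a time,
# gives the same answer as multiplying the matrices first.
one_at_a_time = mat_mul(A, mat_mul(B, v))   # A(Bv)
combined = mat_mul(mat_mul(A, B), v)        # (AB)v
```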
2 Applications of Matrices

This section illustrates a tiny number of applications of matrices to real-world problems. Some details in the solutions have been omitted, but that's because entire books are written on some of the techniques. Our goal is to make it clear how a matrix formulation may simplify the solution.

2.1 Systems of Linear Equations

Let's start by taking a look at a problem that may seem a bit boring, but in terms of practical applications is perhaps the most common use of matrices: the solution of systems of linear equations. Following is a typical problem (although real-world problems may have hundreds of variables). Solve the following system of equations:

\[
\begin{aligned}
2x + 2y + z &= 7 \\
3x + 3y + 2z &= 11 \\
2x + y + z &= 5
\end{aligned}
\]

The key observation is this: the problem above can be converted to matrix notation as follows:

\[
\begin{bmatrix} 2 & 2 & 1 \\ 3 & 3 & 2 \\ 2 & 1 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix} 7 \\ 11 \\ 5 \end{bmatrix}
\tag{1}
\]

The numbers in the square matrix are just the coefficients of $x$, $y$, and $z$ in the system of equations. Check to see that the two forms — the matrix form and the system-of-equations form — represent exactly the same problem.

Ignoring all the difficult details, here is how such systems can be solved. Let $M$ be the square matrix in equation (1) above, so the equation looks like this:

\[
M \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 7 \\ 11 \\ 5 \end{bmatrix}
\tag{2}
\]

Suppose we can somehow find another matrix $B$ such that $BM = I$. If we can, we can multiply both sides of equation (2) by $B$ to obtain:

\[
BM \begin{bmatrix} x \\ y \\ z \end{bmatrix}
= I \begin{bmatrix} x \\ y \\ z \end{bmatrix}
= \begin{bmatrix} x \\ y \\ z \end{bmatrix}
= B \begin{bmatrix} 7 \\ 11 \\ 5 \end{bmatrix}
\]

Without explaining where we got it, the matrix on the left below is just such a matrix $B$. Check that the multiplication below does yield the identity matrix:

\[
\begin{bmatrix} 1 & -1 & 1 \\ 1 & 0 & -1 \\ -3 & 2 & 0 \end{bmatrix}
\begin{bmatrix} 2 & 2 & 1 \\ 3 & 3 & 2 \\ 2 & 1 & 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]

So we just need to multiply that matrix $B$ by the column vector containing the numbers 7, 11, and 5 to get our solution.
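The check that $BM = I$, and the final multiplication of $B$ by the right-hand side $(7, 11, 5)$, are both mechanical; here is a minimal sketch (helper and variable names are our own):

```python
def mat_mul(A, B):
    # Row-times-column multiplication of two matrices.
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

M = [[2, 2, 1], [3, 3, 2], [2, 1, 1]]        # coefficient matrix
Binv = [[1, -1, 1], [1, 0, -1], [-3, 2, 0]]  # the matrix B with B*M = I
b = [[7], [11], [5]]                         # right-hand side

check = mat_mul(Binv, M)   # should be the identity matrix
x = mat_mul(Binv, b)       # the solution column vector
```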
\[
\begin{bmatrix} 1 & -1 & 1 \\ 1 & 0 & -1 \\ -3 & 2 & 0 \end{bmatrix}
\begin{bmatrix} 7 \\ 11 \\ 5 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}
\]

From this last equation we conclude that $x = 1$, $y = 2$, and $z = 1$ is a solution to the original system of equations. You can plug them in to check that they do indeed form a solution.

Although it doesn't happen all that often, sometimes the same system of equations needs to be solved for a variety of column vectors on the right — not just one. In that case, the solution for every one of them can be obtained by a single multiplication by the matrix $B$. The matrix $B$ is usually written as $M^{-1}$, called "$M$-inverse". It is a multiplicative inverse in just the same way that $1/3$ is the inverse of $3$: $3 \cdot (1/3) = 1$, and $1$ is the multiplicative identity, just as $I$ is in matrix multiplication.

Entire books are written that describe methods of finding the inverse of a matrix, so we won't go into that here. Remember that for numbers, zero has no inverse; for matrices, it is much worse — many, many matrices do not have an inverse. Matrices without inverses are called singular. Those with an inverse are called non-singular.

Just as an example, the matrix on the left of the multiplication below can't possibly have an inverse, as we can see from the matrix on the right. No matter what the values of $a$ through $i$ are, it is impossible to get anything but zeroes in certain spots in the diagonal, and we need ones in all the diagonal spots:

\[
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 4 & 5 & 6 \end{bmatrix}
\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}
=
\begin{bmatrix} \cdot & \cdot & \cdot \\ 0 & 0 & 0 \\ \cdot & \cdot & \cdot \end{bmatrix}
\]

If the set of linear equations has no solution, then it will be impossible to invert the associated matrix. For example, the following system of equations cannot possibly have a solution, since $x + y + z$ cannot possibly add to two different numbers (7 and 11) as would be required by the first two equations:

\[
\begin{aligned}
x + y + z &= 7 \\
x + y + z &= 11 \\
y - 2z &= 0
\end{aligned}
\]

So obviously the associated matrix below cannot be inverted:

\[
\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 0 & 1 & -2 \end{bmatrix}
\]

2.2 Computer Graphics

Some computers can only draw straight lines on the screen, but complicated drawings can be made with a long series of line-drawing instructions. For example, the letter "F" could be drawn in its normal orientation at the origin with the following set of instructions:
Draw a line from to. 2. Draw a line from to. 3. Draw a line from to.. Imagine that you have a drawing that s far more complicated than the F aove consisting of thousands of instructions similar to those aove. Let s take a look at the following sorts of prolems: 1. How would you convert the coordinates so that the drawing would e twice as ig? How aout stretched twice as high (ü -direction) and three times as wide (ù -direction)? 2. Could you draw the mirror image through theü -axis? 6
3. How would you shift the drawing 4 units to the right and 5 units down?
4. Could you rotate it $90°$ counter-clockwise about the origin? Could you rotate it by an angle $\theta$ counter-clockwise about the origin?
5. Could you rotate it by an angle $\theta$ about the point $(a, b)$?
6. Ignoring the problem of making the drawing on the screen, what if your drawing were in three dimensions? Could you solve problems similar to those above to find the new (3-dimensional) coordinates after your object has been translated, scaled, rotated, et cetera?

It turns out that the answers to all of the problems above can be achieved by multiplying your vectors by a matrix. Of course, a different matrix will solve each one. Here are the solutions:

Graphics Solution 1: To scale in the $x$-direction by a factor of 2, we need to multiply all the $x$-coordinates by 2. To scale in the $y$-direction, we similarly need to multiply the $y$-coordinates by the same scale factor of 2. The solution to scale any drawing by a factor $s_x$ in the $x$-direction and $s_y$ in the $y$-direction is to multiply all the input vectors by a general scaling matrix as follows:

\[
\begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} s_x x \\ s_y y \end{bmatrix}
\]

To uniformly scale everything to twice as big, let $s_x = s_y = 2$. To scale by a factor of 2 in the $x$-direction and 3 in the $y$-direction, let $s_x = 2$ and $s_y = 3$.

We'll illustrate the general procedure with the drawing instructions for the "F" that appeared earlier. The drawing commands are described in terms of a few points: $(0,0)$, $(0,6)$, $(4,6)$, $(0,4)$, and $(3,4)$. If we write all five of those points as column vectors and multiply all five by the same matrix (either of the two above), we'll get five new sets of coordinates for the points. For example, in the case of the second example, where the scaling is 2 times in $x$ and 3 times in $y$, the five points will be converted by matrix multiplication to: $(0,0)$, $(0,18)$, $(8,18)$, $(0,12)$, and $(6,12)$. If we rewrite the drawing instructions using these transformed points, we get:

1. Draw a line from $(0, 0)$ to $(0, 18)$.
2. Draw a line from $(0, 18)$ to $(8, 18)$.
3. Draw a line from $(0, 12)$ to $(6, 12)$.

Follow the instructions above and see that you draw an appropriately stretched "F". In fact, do the same thing for each of the matrix solutions in this set to verify that the drawing is transformed appropriately. Notice that if $s_x$ or $s_y$ is smaller than 1, the drawing will be shrunk, not expanded.

Graphics Solution 2: A mirror image is just a scaling by $-1$. To mirror through the $y$-axis means that each $x$-coordinate will be replaced with its negative. Here's a matrix multiplication that will do the job:

\[
\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} -x \\ y \end{bmatrix}
\]

Graphics Solution 3: To translate points 4 units to the right and 5 units down, you essentially need to add 4 to every $x$-coordinate and to subtract 5 from every $y$-coordinate. If you try to solve this exactly as in the examples above, you'll find it is impossible.
To convince yourself it's impossible with any $2\times 2$ matrix, consider what will happen to the origin, $(0, 0)$. You want to move it to $(4, -5)$, but look what happens if you multiply it by any $2\times 2$ matrix ($a$, $b$, $c$, and $d$ can be any numbers):

\[
\begin{bmatrix} a & b \\ c & d \end{bmatrix}
\begin{bmatrix} 0 \\ 0 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]

In other words, no matter what $a$, $b$, $c$, and $d$ are, the matrix will map the origin back to the origin, so translation using this scheme is impossible.

But there's a great trick.² For every one of your two-dimensional vectors, add an artificial third component of 1. So the point $(x, y)$ will become $(x, y, 1)$, the origin will become $(0, 0, 1)$, et cetera. The column vectors will now have three rows, so the transformation matrix will need to be $3\times 3$. To translate by $t_x$ in the $x$-direction and $t_y$ in the $y$-direction, multiply the artificially-enlarged vector by a matrix as follows:

\[
\begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} x + t_x \\ y + t_y \\ 1 \end{bmatrix}
\]

The resulting vector is just what you want, and it also has the artificial 1 on the end that you can just throw away. To get the particular solution to the problem proposed above, simply let $t_x = 4$ and $t_y = -5$.

But now you're probably thinking, "That's a neat trick, but what happens to the matrices we had for scaling? What a pain to have to convert to the artificial 3-dimensional form and back if we need to mix scaling and translation." The nice thing is that we can always use the artificially extended form. Just use a slightly different form of the scaling matrix:

\[
\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} s_x x \\ s_y y \\ 1 \end{bmatrix}
\]

In the solutions that follow, we'll always add a 1 as the artificial third component.³

Graphics Solution 4: Convince yourself (by drawing a few examples, if necessary) that to rotate a point counter-clockwise by $90°$ about the origin, you will basically make the original $x$-coordinate into a $y$-coordinate, and vice-versa. But not quite. Anything that had a positive $y$-coordinate will, after rotation by $90°$, have a negative $x$-coordinate, and vice-versa.
In other words, the new $y$-coordinate is the old $x$-coordinate, and the new $x$-coordinate is the negative of the old $y$-coordinate. Convince yourself that the following matrix does the trick (and notice that we've added the 1 as an artificial third component):

\[
\begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} -y \\ x \\ 1 \end{bmatrix}
\]

The general solution for a rotation counter-clockwise by an angle $\theta$ is given by the following matrix multiplication:

\[
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \\ 1 \end{bmatrix}
\]

If you've never seen anything like this before, you might consider trying it for a couple of simple angles, like $\theta = 45°$ or $\theta = 30°$, and put in the drawing coordinates for the letter "F" given earlier to see that it's transformed properly.

²In fact, it's a lot more than a trick — it is really part of projective geometry.

³But in the world of computer graphics or projective geometry, it is often useful to allow values other than 1, in perspective transformations, for example.
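The translation, scaling, and rotation matrices above are easy to exercise in code. This is a minimal sketch (function names are our own) that applies them to points carrying the artificial third component:

```python
import math

def mat_mul(A, B):
    # Row-times-column multiplication of two matrices.
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(M, point):
    x, y = point
    col = mat_mul(M, [[x], [y], [1]])   # artificial third component 1
    return (col[0][0], col[1][0])       # throw the 1 away

# Shift a point 4 right and 5 down.
shifted = apply(translation(4, -5), (1, 3))
# Rotate the top-right corner of the "F", (4, 6), by 90 degrees.
rotated = apply(rotation(math.pi / 2), (4, 6))
# Rotation about the point (2, 1): translate it to the origin, rotate,
# translate back -- combined into one matrix by the associative law.
about = mat_mul(translation(2, 1),
                mat_mul(rotation(math.pi / 2), translation(-2, -1)))
fixed = apply(about, (2, 1))   # the center of rotation should not move
```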
Graphics Solution 5: Here is where the power of matrices really comes through. Rather than solve the problem from scratch as we have above, let's just solve it using the information we already have. Why not translate the point $(a, b)$ to the origin, then do a rotation about the origin, and finally translate the result back to $(a, b)$? Each of those operations can be achieved by a matrix multiplication. Here is the final solution:

\[
\begin{bmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & -a \\ 0 & 1 & -b \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]

Notice carefully the order of the matrix multiplication. The matrix closest to the column vector is the first one that's applied to it — it should move $(a, b)$ to the origin. To do that, we need to translate the $x$-coordinates by $-a$ and the $y$-coordinates by $-b$. The next operation to be done is the rotation by an arbitrary angle $\theta$, using the matrix form from the previous problem. Finally, to translate back to $(a, b)$, we have to translate in the opposite direction from what we did originally, and the matrix on the far left above does just that.

Remember that for any particular value of $\theta$, $\sin\theta$ and $\cos\theta$ are just numbers, so if you knew the exact rotation angle, you could just plug the numbers into the middle $3\times 3$ matrix and multiply together the three matrices on the left. Then to transform any particular point, there would be only one matrix multiplication involved.

To convince yourself that we've got the right answer, why not put a particular (simple) rotation of $90°$ into the matrices and work it out? $\cos 90° = 0$ and $\sin 90° = 1$, so the product of the three matrices on the left is:

\[
\begin{bmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & -a \\ 0 & 1 & -b \\ 0 & 0 & 1 \end{bmatrix}
=
\begin{bmatrix} 0 & -1 & a + b \\ 1 & 0 & b - a \\ 0 & 0 & 1 \end{bmatrix}
\]

Try multiplying all the vectors from the "F" example by the single matrix on the right above and convince yourself that you've succeeded in rotating the "F" by $90°$ counter-clockwise about the point $(a, b)$.

Graphics Solution 6: The answer is yes.
Of course, you'll have to add an artificial fourth dimension — which is always 1 — to your three-dimensional coordinates, but the form of the matrices will be similar. On the left below is the mechanism for scaling by $s_x$, $s_y$, and $s_z$ in the $x$-, $y$-, and $z$-directions; on the right is a multiplication that translates by $t_x$, $t_y$, and $t_z$ in the three directions:

\[
\begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} s_x x \\ s_y y \\ s_z z \\ 1 \end{bmatrix}
\qquad
\begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{bmatrix}
\]

Finally, to rotate by an angle $\theta$ counter-clockwise about the three axes, multiply your vector on the left by the appropriate one of the following three matrices (left, middle, and right correspond to rotation about the $x$-axis, $y$-axis, and $z$-axis):

\[
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad
\begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad
\begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\]

2.3 Gambler's Ruin Problem

Consider the following questions:
1. Suppose you have $20 and need $50 for a bus ride home. Your only chance to get more money is by gambling in a casino. There is only one possible bet you can make at the casino — you must bet $10, and then a fair coin is flipped. If heads results, you win $10 (in other words, you get your original $10 back plus an additional $10), and if it's tails, you lose the $10 bet. The coin has exactly the same probability of coming up heads or tails. What is the chance that you will get the $50 you need? What if you had begun with $10? What if you had begun with $30?

2. Same as above, except this time you start with $40 and need $100 for the bus ticket. But this time, you can bet either $10 or $20 on each flip. What's your best betting strategy?

3. Real casinos don't give you a fair bet. Suppose the problem is the same as above, except that you have to make a $10 bet on red or black on a roulette wheel. A roulette wheel in the United States has 18 red numbers, 18 black numbers, and 2 green numbers. Winning a red or black bet doubles your money if you win (and you lose it all if you lose), but obviously you now have a slightly smaller chance of winning. A red or black bet has a probability of $18/38 = 9/19$ of winning on each spin of the wheel. Answer all the questions above if your bets must be made on a roulette wheel.

All of these problems are known as the "Gambler's Ruin Problem" — a gambler keeps betting until he goes broke, or until he reaches a certain goal, and there are no other possibilities. The problem has many elegant solutions, which we will ignore. Let's just assume we're stupid and lazy, but we have a computer available to simulate the problem. Here is a nice matrix-based approach for stupid lazy people.

At any stage in the process, there are six possible states, depending on your fortune. You have either $0 (and you've lost), or you have $10, $20, $30, or $40 (and you're still playing), or you have $50 (and you've won).
You know that when you begin playing, you are in a certain state (having $20 in the case of the very first problem). After you've played a while, lots of different things could have happened, so depending on how long you've been going, you have various probabilities of being in the various states. Call the state with $0 "state 0", and so on, up to "state 5", which represents having $50. At any particular time, let's let $p_0$ represent the probability of having $0, $p_1$ the probability of having $10, and so on, up to $p_5$ of having won with $50. We can give an entire probabilistic description of your state as a column vector like this:

\[
\begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \end{bmatrix}
\]

Now look at the individual situations. If you're in state 0, or in state 5, you will remain there for sure. If you're in any other state, there's a 50% chance of moving up a state and a 50% chance of moving down a state. Look at the following matrix multiplication:

\[
\begin{bmatrix}
1 & 1/2 & 0 & 0 & 0 & 0 \\
0 & 0 & 1/2 & 0 & 0 & 0 \\
0 & 1/2 & 0 & 1/2 & 0 & 0 \\
0 & 0 & 1/2 & 0 & 1/2 & 0 \\
0 & 0 & 0 & 1/2 & 0 & 0 \\
0 & 0 & 0 & 0 & 1/2 & 1
\end{bmatrix}
\begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \end{bmatrix}
=
\begin{bmatrix}
p_0 + p_1/2 \\
p_2/2 \\
p_1/2 + p_3/2 \\
p_2/2 + p_4/2 \\
p_3/2 \\
p_4/2 + p_5
\end{bmatrix}
\tag{3}
\]

Clearly, multiplication by the matrix above represents the change in probabilities of being in the various states, given an initial probabilistic distribution. The chance of being in state 0 after a coin flip is the chance you were there before, plus half of the chance that you were in state 1. Check that the others make sense as well.
So if the matrix on the left in equation (3) is called $P$, each time you multiply the vector corresponding to your situation by $P$, you'll find the probabilities of being in the various states after one more flip. So after 1000 coin-flips, your state will be represented by $P^{1000}s$, where $s$ is the vector representing your original situation ($p_2 = 1$ and all the other $p_i = 0$). But multiplying $P$ by itself 1000 times is a pain, even with a computer. Here's a nice trick: if you multiply $P$ by itself, you get $P^2$. But now, you can multiply $P^2$ by itself to get $P^4$. Then multiply $P^4$ by itself to get $P^8$, and so on. With just 10 iterations of this technique, we can work out $P^{1024}$, which could even be done by hand, if you were desperate enough. When the computer calculates $P^{1024}$, every entry agrees to ten digits of accuracy with the matrix below; the interior entries are not exactly zero, but they are unimaginably small. For all practical purposes, we have:

\[
P^{1024} =
\begin{bmatrix}
1 & .8 & .6 & .4 & .2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & .2 & .4 & .6 & .8 & 1
\end{bmatrix}
\]

Basically, if you start with $0, $10, ..., $50, you have probabilities of 0, .2, .4, .6, .8, and 1.0 of winning (reaching a goal of $50), and the complementary probabilities of losing. As we said above, this can be proven rigorously, but just multiplying the transition matrix by itself repeatedly strongly implies the result.

To solve the second problem, simply do the same calculation with either 6 states (a $6\times 6$ matrix, if you always bet $20) or 11 states (an $11\times 11$ matrix, if you bet $10 at a time) and find a high power of the matrix. You'll find it doesn't make any difference which strategy you use.

On the final problem, you can use exactly the same technique, but this time the original matrix $P$ will not have entries of $1/2$, but rather of $9/19$ and $10/19$. But after you know what $P$ is, the process of finding a high power of $P$ is exactly the same.
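The repeated-squaring trick takes only a few lines of code. Here is a sketch (function names are our own) that uses exact rational arithmetic and computes $P^{1024}$ applied to the $20 starting state:

```python
from fractions import Fraction

def mat_mul(A, B):
    # Row-times-column multiplication of two matrices.
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

def mat_pow(M, e):
    # M^e by repeated squaring: about log2(e) squarings instead of e.
    n = len(M)
    result = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    while e:
        if e & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        e >>= 1
    return result

h = Fraction(1, 2)
P = [[1, h, 0, 0, 0, 0],
     [0, 0, h, 0, 0, 0],
     [0, h, 0, h, 0, 0],
     [0, 0, h, 0, h, 0],
     [0, 0, 0, h, 0, 0],
     [0, 0, 0, 0, h, 1]]

start = [[0], [0], [1], [0], [0], [0]]   # certain to be in state 2 ($20)
after = mat_mul(mat_pow(P, 1024), start)
win = float(after[5][0])    # probability of having reached $50
lose = float(after[0][0])   # probability of having gone broke
```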
In general, if you have a probability $p$ of winning each bet and a probability $q = 1 - p$ of losing, here's the transition matrix calculation:

\[
\begin{bmatrix}
1 & q & 0 & 0 & 0 & 0 \\
0 & 0 & q & 0 & 0 & 0 \\
0 & p & 0 & q & 0 & 0 \\
0 & 0 & p & 0 & q & 0 \\
0 & 0 & 0 & p & 0 & 0 \\
0 & 0 & 0 & 0 & p & 1
\end{bmatrix}
\begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \end{bmatrix}
=
\begin{bmatrix}
p_0 + q p_1 \\
q p_2 \\
p p_1 + q p_3 \\
p p_2 + q p_4 \\
p p_3 \\
p p_4 + p_5
\end{bmatrix}
\]

2.4 Board Game Design

Imagine that you are designing a board game for little children, and you want to make sure that the game doesn't take too long to finish or the children will get bored. How many moves will it take, on average, for a child to complete the following games:
1. The game has 100 squares, and your marker begins on square 1. You roll a single die which turns up a number 1, 2, 3, 4, 5, or 6, each with probability 1/6. You advance your marker that many squares. When you reach square 100, the game is over. You do not have to hit 100 exactly — for example, if you are on square 98 and throw a 3, the game is over.

2. Same as above, but now you must hit square 100 exactly; if a roll would carry you past it, you stay where you are. Is there any limit to how long this game could take?

3. Same as above, but certain of the squares have special markings:
   - Square 7 says "advance to square 55."
   - Square 99 says "go back to square 11."
   - Square 58 says "go back to square 45."
   - Squares 33 and 72 say "lose a turn."
   - Square 50 says "lose two turns."

In a sense, this is exactly the same problem as gambler's ruin above, but not quite so uniform, at least in the third example. For the first problem, there are 100 states, representing the square you are currently upon. So a column vector 100 items long can represent the probability of being in any of the 100 states, and a transition matrix of size $100\times 100$ can represent the transition probabilities.

Rather than write out the entire $100\times 100$ matrix for the game initially specified, let's write out the matrix for a game that's only 8 squares long. You begin on square 1, and win if you reach square 8. Let $p_1$ be the probability of being on square 1, et cetera. Here's the transition matrix:

\[
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1/6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1/6 & 1/6 & 0 & 0 & 0 & 0 & 0 & 0 \\
1/6 & 1/6 & 1/6 & 0 & 0 & 0 & 0 & 0 \\
1/6 & 1/6 & 1/6 & 1/6 & 0 & 0 & 0 & 0 \\
1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 0 & 0 & 0 \\
1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 0 & 0 \\
0 & 1/6 & 2/6 & 3/6 & 4/6 & 5/6 & 1 & 1
\end{bmatrix}
\begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \\ p_7 \\ p_8 \end{bmatrix}
\]

If you insist on landing exactly on square 8 to win, the transition matrix changes to this:

\[
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1/6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1/6 & 1/6 & 1/6 & 0 & 0 & 0 & 0 & 0 \\
1/6 & 1/6 & 1/6 & 2/6 & 0 & 0 & 0 & 0 \\
1/6 & 1/6 & 1/6 & 1/6 & 3/6 & 0 & 0 & 0 \\
1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 4/6 & 0 & 0 \\
1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 5/6 & 0 \\
0 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1
\end{bmatrix}
\begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \\ p_7 \\ p_8 \end{bmatrix}
\]

In the situation where there are various "go to" squares, there is no chance of landing on them.
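The 8-square transition matrix for the "overshooting wins" version can be built mechanically rather than written out by hand; this is a minimal sketch with exact probabilities (names our own). It computes, as an example, the chance that the game is over within two moves:

```python
from fractions import Fraction

def step(M, v):
    # One move: multiply the state vector by the transition matrix.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

N = 8
# From square s you roll 1..6, each with probability 1/6; any roll
# past square 8 just ends the game on square 8 (the absorbing state).
M = [[Fraction(0)] * N for _ in range(N)]
for s in range(1, N + 1):
    for roll in range(1, 7):
        t = min(s + roll, N)        # square 8 maps to itself
        M[t - 1][s - 1] += Fraction(1, 6)

v = [Fraction(0)] * N
v[0] = Fraction(1)                  # start on square 1 for sure
v = step(M, step(M, v))             # two moves
p_done_in_two = v[N - 1]            # probability of being on square 8
```

Each column of the matrix sums to 1, a quick sanity check that it really is a transition matrix.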
As the problem is originally stated, it is impossible to land on squares 7, 99, and 58, so there appear to be only 97 possible places to stop. But there are really two states for each of squares 33 and 72 — you either just landed there, or you landed there and have waited a turn. There are 3 states for square 50 — just landed there, waited one turn, and waited two turns. Thus, the transition matrix contains 101 states: $100 - 3 + 2 + 2 = 101$.

2.5 Graph Routes

Imagine a situation where there are 7 possible locations: 1, 2, ..., 7. There are one-way streets connecting various pairs. For example, if you can get from location 3 to location 7, there will be a street labeled $(3, 7)$. Here is a complete list of the streets:
$(1,2)$, $(1,7)$, $(2,3)$, $(2,4)$, $(2,5)$, $(3,6)$, $(4,5)$, $(4,6)$, $(5,6)$, $(5,7)$, $(6,1)$, and $(7,1)$.

If you begin on location 1 and take 16 steps, how many different routes are there that put you on each of the locations? (The discussion that follows will be much easier to follow if you draw a picture. Make a circle of 7 dots, labeled 1 through 7, and for each of the pairs above, draw an arrow from the first dot of the pair to the second.)

This time, we can represent the number of paths to each location by a column vector:

\[
\begin{bmatrix} n_1 \\ n_2 \\ n_3 \\ n_4 \\ n_5 \\ n_6 \\ n_7 \end{bmatrix}
\]

where $n_i$ is the number of paths to location $i$. The initial vector, for time 0, has $n_1 = 1$ and $n_2 = n_3 = n_4 = n_5 = n_6 = n_7 = 0$. In other words, after zero steps, there is exactly one path to location 1 and no paths to the other locations. You can see that if you multiply that initial vector by the matrix once, it will show exactly one path to 2 and one path to 7 and no other paths. The transition matrix looks like this:

\[
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0
\end{bmatrix}
\begin{bmatrix} n_1 \\ n_2 \\ n_3 \\ n_4 \\ n_5 \\ n_6 \\ n_7 \end{bmatrix}
\tag{4}
\]

which will generate the count of the number of paths in $k+1$ steps if the input is the number of paths at step $k$. If the matrix on the left of equation (4) is called $M$, then after 16 steps, the counts will be given by $M^{16}$ applied to the initial vector. From a computer, here is the result:

\[
M^{16}
\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
=
\begin{bmatrix} 1397 \\ 564 \\ 513 \\ 513 \\ 715 \\ 799 \\ 959 \end{bmatrix}
\]
Since initially p_1 = 1 and p_2 = p_3 = p_4 = p_5 = p_6 = p_7 = 0, multiplying M^16 by the initial vector simply picks out the first column of M^16: there are 481 routes to location 1, 288 routes to location 2, 188 routes to location 3, and so on.

This is, of course, very difficult to check. Why don't you check the results for three-step routes? The corresponding equation for only 3 steps uses M^3 in place of M^16. It shows that there is no way to get to location 1 in three steps, one way to get to location 2, no ways to get to locations 3 or 4, one way to get to location 5, three ways to get to location 6, and finally, two ways to get to location 7. Each of these counts can be verified by listing the three-step routes by hand.

Obviously, there is nothing special about the set of paths and number of locations in the problem above. For any given setup, you just need to work out the associated transition matrix and you're in business. Even if there are loops (paths that lead from a location to itself), there is no problem. A loop will simply generate an entry of 1 in the diagonal of the transition matrix.
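The claim about loops can be checked on a toy example with just two locations (hypothetical, for illustration only): a street from location 1 to itself puts a 1 on the diagonal of the transition matrix, and route counting with equation (4) proceeds exactly as before.

```python
# A loop (a street from a location to itself) contributes a 1 on the
# diagonal of the transition matrix. Toy example: location 1 has a loop
# and a street to location 2; location 2 has a street back to location 1.
streets = [(1, 1), (1, 2), (2, 1)]
N = 2

M = [[0] * N for _ in range(N)]
for a, b in streets:
    M[b - 1][a - 1] = 1
# The loop shows up as the diagonal entry M[0][0] == 1; M is [[1, 1], [1, 0]].

# Count three-step routes starting from location 1.
p = [1, 0]
for _ in range(3):
    p = [sum(M[i][j] * p[j] for j in range(N)) for i in range(N)]
print(p)  # [3, 2]: three routes of length 3 end at location 1, two end at location 2
```

The diagonal 1 simply means that traveling the loop street counts as one step, so "staying put" for a turn is itself a route segment.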
More information