Markov chains and the number of occurrences of a word in a sequence ( , 11.1,2,4,6)


1 Markov chains and the number of occurrences of a word in a sequence ( , 11.1,2,4,6)
Prof. Tesler, Math 283, Fall 2016

2 Locating overlapping occurrences of a word

Consider a (long) single-stranded nucleotide sequence τ = τ_1 ... τ_N and a (short) word w = w_1 ... w_k, e.g., w = GAGA.

for i = 1 to N-3 {
    if (τ_i τ_{i+1} τ_{i+2} τ_{i+3} == GAGA) { ... }
}

The above scan takes up to 4N comparisons to locate all occurrences of GAGA (kN comparisons for a word w of length k). A finite state automaton (FSA) is a machine that can locate all occurrences while only examining each letter of τ once.
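To make the naive scan concrete, here is a minimal R sketch (my own code, not from the slides; the example sequence is arbitrary):

naive_count <- function(tau, w) {
  # Count overlapping occurrences of w in tau by comparing a length-k
  # window at every position: roughly k*N character comparisons.
  k <- nchar(w); N <- nchar(tau)
  if (N < k) return(0)
  sum(sapply(1:(N - k + 1), function(i) substr(tau, i, i + k - 1) == w))
}
naive_count("CGAGATCGAGAGAT", "GAGA")   # 3 overlapping occurrences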

3 Overlapping occurrences of GAGA

(Figure: automaton M1 with states 0 (∅), G, GA, GAG, GAGA and transition edges labeled by A, C, G, T.)

The states are the prefixes of w: ∅, G, GA, GAG, GAGA (∅ is shown on the figure as 0). For w = w_1 w_2 ... w_k, there are k + 1 states (one for each prefix).
Start in the state ∅.
Scan τ = τ_1 τ_2 ... τ_N one character at a time, left to right.
Transition edges: when examining τ_j, move from the current state to the next state according to which edge τ_j is on. For each node w_1 ... w_r and each letter x = A, C, G, T, determine the longest suffix s (possibly ∅) of w_1 ... w_r x that equals one of the states. Draw an edge out of that node to s with label x.
The number of times we are in the state GAGA is the desired count of the number of occurrences.
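Here is a small R sketch of this construction (my own code and function names, not from the slides): build the transition table by the longest-suffix rule, then scan τ once. It assumes τ uses only letters from the alphabet.

fsa_count <- function(tau, w, alphabet = c("A", "C", "G", "T")) {
  k <- nchar(w)
  prefixes <- sapply(0:k, function(j) substr(w, 1, j))   # states: "", w1, w1w2, ..., w
  delta <- matrix(1L, k + 1, length(alphabet), dimnames = list(NULL, alphabet))
  for (s in 1:(k + 1)) for (a in alphabet) {
    word <- paste0(prefixes[s], a)
    # next state = longest suffix of word that is also a prefix of w
    for (j in min(nchar(word), k):0) {
      if (substr(word, nchar(word) - j + 1, nchar(word)) == prefixes[j + 1]) {
        delta[s, a] <- j + 1L
        break
      }
    }
  }
  state <- 1L; hits <- 0L
  for (ch in strsplit(tau, "")[[1]]) {        # each letter of tau is examined once
    state <- delta[state, ch]
    if (state == k + 1L) hits <- hits + 1L    # reached the state w
  }
  hits
}
fsa_count("CGAGATCGAGAGAT", "GAGA")   # same answer as the naive scan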

4 Overlapping occurrences of GAGA in τ = CGAGATCGAGAT...

(Table: the slide traces the automaton M1 on τ one letter at a time, listing for each time t the current state and the letter τ_t. Each visit to the state GAGA marks one overlapping occurrence of the word.)

5 Non-overlapping occurrences of GAGA

(Figure: automata M1 and M2, each with states 0 (∅), G, GA, GAG, GAGA; in M2 the outgoing edges of GAGA are copies of the outgoing edges of ∅.)

For non-overlapping occurrences of w: replace the outgoing edges from w by copies of the outgoing edges from ∅.
On the previous slide, the transition out of GAGA on reading a G (at time 13 → 14) changes from GAGA → GAG to GAGA → G.

6 Motif {GAGA, GTGA}, overlaps permitted

(Figure: automaton with states ∅, G, GA, GAG, GAGA, GT, GTG, GTGA and transition edges labeled by A, C, G, T.)

States: all prefixes of all words in the motif. If a prefix occurs multiple times, only create one node for it.
Transition edges: they may jump from one word of the motif to another, e.g., GTGA → GAG.
Count the number of times we reach the states for any of the words in the motif (GAGA or GTGA).

7 Markov chains

A Markov chain is similar to a finite state machine, but incorporates probabilities.
Let S be a set of states. We will take S to be a discrete finite set, such as S = {1, 2, ..., s}.
Let t = 1, 2, ... denote the time.
Let X_1, X_2, ... denote a sequence of random variables, with values in S.
The X_t's form a (first order) Markov chain if they obey these rules:
1. The probability of being in a certain state at time t + 1 only depends on the state at time t, not on any earlier states:
   P(X_{t+1} = x_{t+1} | X_1 = x_1, ..., X_t = x_t) = P(X_{t+1} = x_{t+1} | X_t = x_t)
2. The probability of transitioning from state i at time t to state j at time t + 1 only depends on i and j, but not on the time t:
   P(X_{t+1} = j | X_t = i) = p_ij at all times t,
   for some values p_ij, which form an s × s transition matrix.
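As an illustration (not part of the slides), a first-order chain with a given s × s transition matrix can be simulated in R like this; simulate_chain is a made-up helper name.

simulate_chain <- function(P, x1, n_steps) {
  s <- nrow(P)
  x <- integer(n_steps)
  x[1] <- x1
  for (t in 1:(n_steps - 1)) {
    # rule 1: the next state depends only on the current state;
    # rule 2: the transition probabilities P[i, j] do not depend on t
    x[t + 1] <- sample(1:s, size = 1, prob = P[x[t], ])
  }
  x
}
# Example with a 2-state chain:
# simulate_chain(rbind(c(0.9, 0.1), c(0.5, 0.5)), x1 = 1, n_steps = 20)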

8 Transition matrix

The transition matrix, P, of the Markov chain M1 is

                To:  ∅             G     GA    GAG   GAGA
  From 1: ∅       [ p_A+p_C+p_T   p_G   0     0     0    ]
       2: G       [ p_C+p_T       p_G   p_A   0     0    ]
       3: GA      [ p_A+p_C+p_T   0     0     p_G   0    ]
       4: GAG     [ p_C+p_T       p_G   0     0     p_A  ]
       5: GAGA    [ p_A+p_C+p_T   0     0     p_G   0    ]

Its entries are written P_11, P_12, ..., P_55, where P_ij is the probability of a one-step transition from state i to state j.

Notice that the entries in each row sum up to p_A + p_C + p_G + p_T = 1.
A matrix with all entries ≥ 0 and all row sums equal to 1 is called a stochastic matrix. The transition matrix of a Markov chain is always stochastic.
State machines in contexts other than Markov chains still have weight 0 for no edge and nonzero weight for an edge, but the nonzero weights may be defined differently, and the matrices are usually not stochastic.
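A small R sketch (my own helper, not from the slides) that builds this matrix for arbitrary nucleotide probabilities and checks that it is stochastic:

make_P_gaga <- function(p) {
  # p: named probabilities c(A=, C=, G=, T=) summing to 1.
  # States 1..5 are the prefixes (empty), G, GA, GAG, GAGA.
  P <- rbind(
    c(p["A"] + p["C"] + p["T"], p["G"], 0,      0,      0),        # from (empty)
    c(p["C"] + p["T"],          p["G"], p["A"], 0,      0),        # from G
    c(p["A"] + p["C"] + p["T"], 0,      0,      p["G"], 0),        # from GA
    c(p["C"] + p["T"],          p["G"], 0,      0,      p["A"]),   # from GAG
    c(p["A"] + p["C"] + p["T"], 0,      0,      p["G"], 0)         # from GAGA
  )
  stopifnot(all(abs(rowSums(P) - 1) < 1e-12))   # stochastic: each row sums to 1
  unname(P)
}
P <- make_P_gaga(c(A = 1/4, C = 1/4, G = 1/4, T = 1/4))   # the matrix P1 of the next slide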

9 Transition matrices for GAGA

M1 (overlaps allowed):
  P1 = [ 3/4  1/4  0    0    0
         1/2  1/4  1/4  0    0
         3/4  0    0    1/4  0
         1/2  1/4  0    0    1/4
         3/4  0    0    1/4  0   ]

M2 (no overlaps):
  P2 = [ 3/4  1/4  0    0    0
         1/2  1/4  1/4  0    0
         3/4  0    0    1/4  0
         1/2  1/4  0    0    1/4
         3/4  1/4  0    0    0   ]

Edge labels are replaced by probabilities, e.g., the label C,T becomes p_C + p_T.
The matrices are shown for the case that all nucleotides have equal probabilities 1/4.
P2 (no overlaps) is obtained from P1 (overlaps allowed) by replacing the last row with a copy of the first row.

10 Other Markov chain examples

A Markov chain is kth order if the probability of X_t = i depends on the values of X_{t-1}, ..., X_{t-k}. It can be converted to a first order Markov chain by making new states that record more history.

Positional independence: Instead of a null hypothesis that a DNA sequence is generated by repeated rolls of a biased four-sided die, we could use a Markov chain. The simplest is a one-step transition matrix

        p_AA  p_AC  p_AG  p_AT
  P = [ p_CA  p_CC  p_CG  p_CT
        p_GA  p_GC  p_GG  p_GT
        p_TA  p_TC  p_TG  p_TT ]

P could be the same at all positions. In a coding region, it could be different for the first, second, and third positions of codons.

Nucleotide evolution: There are models of random point mutations over the course of evolution using Markov chains with the form P (same as above), in which X_t is the state A, C, G, T of the nucleotide at a given position in a sequence at time (generation) t.
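One way to make this concrete: estimate a one-step nucleotide transition matrix from an observed sequence by counting dinucleotides. This is an illustrative R sketch with an arbitrary example sequence, not code from the slides.

estimate_transition <- function(seq_string, alphabet = c("A", "C", "G", "T")) {
  x <- strsplit(seq_string, "")[[1]]
  counts <- matrix(0, 4, 4, dimnames = list(alphabet, alphabet))
  for (t in 1:(length(x) - 1)) {
    counts[x[t], x[t + 1]] <- counts[x[t], x[t + 1]] + 1   # count each dinucleotide
  }
  sweep(counts, 1, rowSums(counts), "/")   # row-normalize (a letter that never occurs gives NaN)
}
estimate_transition("ACGTGCGCGAATTACGGATTGCA")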

11 Questions about Markov chains

1. What is the probability of being in a particular state after n steps?
2. What is the probability of being in a particular state as n → ∞?
3. What is the reverse Markov chain?
4. If you are in state i, what is the expected number of time steps until the next time you are in state j? What is the variance of this? What is the complete probability distribution?
5. Starting in state i, what is the expected number of visits to state j before reaching state k?

12 Transition probabilities after two steps

(Figure: state i at time t, intermediate states r = 1, ..., 5 at time t + 1, and state j at time t + 2, with edge probabilities P_ir and P_rj.)

To compute the probability of going from state i at time t to state j at time t + 2, consider all the states it could go through at time t + 1:

  P(X_{t+2} = j | X_t = i)
    = Σ_r P(X_{t+1} = r | X_t = i) P(X_{t+2} = j | X_{t+1} = r, X_t = i)
    = Σ_r P(X_{t+1} = r | X_t = i) P(X_{t+2} = j | X_{t+1} = r)
    = Σ_r P_ir P_rj = (P²)_ij

13 Transition probabilities after n steps

For n ≥ 0, the transition matrix from time t to time t + n is P^n:

  P(X_{t+n} = j | X_t = i)
    = Σ_{r_1,...,r_{n-1}} P(X_{t+1} = r_1 | X_t = i) P(X_{t+2} = r_2 | X_{t+1} = r_1) ⋯
    = Σ_{r_1,...,r_{n-1}} P_{i r_1} P_{r_1 r_2} ⋯ P_{r_{n-1} j} = (P^n)_ij

(sum over possible states r_1, ..., r_{n-1} at times t + 1, ..., t + (n − 1))

14 State probability vector

α_i(t) = P(X_t = i) is the probability of being in state i at time t.
Collect these into a column vector α(t) = (α_1(t), ..., α_s(t))ᵀ, or transpose it to get a row vector α(t)ᵀ = (α_1(t), ..., α_s(t)).
The probabilities at time t + n are

  α_j(t + n) = P(X_{t+n} = j given α(t)) = Σ_i P(X_{t+n} = j | X_t = i) P(X_t = i) = Σ_i α_i(t) (P^n)_ij = (α(t)ᵀ P^n)_j

so α(t + n)ᵀ = α(t)ᵀ P^n (row vector times matrix) or, equivalently, (Pᵀ)^n α(t) = α(t + n) (matrix times column vector).
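A tiny R sketch of this propagation (my own code; the 2-state matrix is just an example):

P <- rbind(c(0.9, 0.1),
           c(0.5, 0.5))                      # example transition matrix
alpha_after <- function(alpha0, P, n) {
  alpha <- matrix(alpha0, nrow = 1)          # row vector alpha(t)
  for (i in seq_len(n)) alpha <- alpha %*% P # alpha(t+n) = alpha(t) P^n
  drop(alpha)
}
alpha_after(c(1, 0), P, 10)                  # start in state 1; distribution after 10 steps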

15 Transition probabilities after n steps for GAGA; P = P1

(The slide displays the numerical matrices P^0 = I, P, P², P³, P⁴.)

P^n = P⁴ for n ≥ 5.
Regardless of the starting state, the probabilities of being in states 1, ..., 5 at time t (when t is large enough) are 11/16, 15/64, 15/256, 1/64, 1/256.
Usually P^n just approaches a limit asymptotically as n increases, rather than reaching it. We'll see other examples later (like P2).

16 Matrix powers in Matlab and R

Matlab
>> P = [ [ 3/4, 1/4, 0, 0, 0 ];   % (empty)
         [ 2/4, 1/4, 1/4, 0, 0 ]; % G
         [ 3/4, 0, 0, 1/4, 0 ];   % GA
         [ 2/4, 1/4, 0, 0, 1/4 ]; % GAG
         [ 3/4, 0, 0, 1/4, 0 ] ]  % GAGA
>> P * P     % or P^2

R
> P = rbind(
+   c(3/4, 1/4, 0, 0, 0),    # (empty)
+   c(2/4, 1/4, 1/4, 0, 0),  # G
+   c(3/4, 0, 0, 1/4, 0),    # GA
+   c(2/4, 1/4, 0, 0, 1/4),  # GAG
+   c(3/4, 0, 0, 1/4, 0)     # GAGA
+ )
> P
> P %*% P

Note: R doesn't have a built-in matrix power function. The > and + symbols above are prompts, not something you enter.
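Since base R lacks a matrix power operator (the expm package provides %^%), one simple workaround is repeated squaring; this sketch assumes P is the matrix just entered.

mat_pow <- function(M, n) {
  # M^n for a square matrix M and integer n >= 0, by repeated squaring
  result <- diag(nrow(M))
  while (n > 0) {
    if (n %% 2 == 1) result <- result %*% M
    M <- M %*% M
    n <- n %/% 2
  }
  result
}
mat_pow(P, 10)   # compare with Matlab's P^10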

17 Stationary distribution, a.k.a. steady state distribution

If P is irreducible and aperiodic (these will be defined soon) then P^n approaches a limit with this format as n → ∞:

                  [ ϕ_1  ϕ_2  ⋯  ϕ_s ]
  lim  P^n   =    [ ϕ_1  ϕ_2  ⋯  ϕ_s ]
  n→∞             [  ⋮    ⋮        ⋮  ]
                  [ ϕ_1  ϕ_2  ⋯  ϕ_s ]

In other words, no matter what the starting state, the probability of being in state j after n steps approaches ϕ_j.
The row vector ϕ = (ϕ_1, ..., ϕ_s) is called the stationary distribution of the Markov chain.
It is stationary because these probabilities stay the same from one time to the next; in matrix notation, ϕ P = ϕ, or Pᵀ ϕᵀ = ϕᵀ. So ϕ is a left eigenvector of P with eigenvalue 1.
Since it represents probabilities of being in each state, the components of ϕ add up to 1.
In a Markov chain, all eigenvalues have absolute value ≤ 1. If P is irreducible and aperiodic, exactly one eigenvalue equals 1.
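One numerical route to ϕ in R is the built-in eigenvector routine (a sketch, complementing the linear-system method two slides below):

P <- rbind(c(3/4, 1/4, 0, 0, 0), c(1/2, 1/4, 1/4, 0, 0), c(3/4, 0, 0, 1/4, 0),
           c(1/2, 1/4, 0, 0, 1/4), c(3/4, 0, 0, 1/4, 0))
e <- eigen(t(P))                                # left eigenvectors of P = right eigenvectors of t(P)
v <- Re(e$vectors[, which.max(Re(e$values))])   # eigenvector for the eigenvalue 1
phi <- v / sum(v)                               # normalize so the entries sum to 1
phi                                             # 0.6875 0.2344 0.0586 0.0156 0.0039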

18 Stationary distribution: computing it for example M1

Solve ϕ P1 = ϕ, i.e., (ϕ_1, ..., ϕ_5) P1 = (ϕ_1, ..., ϕ_5):

  ϕ_1 = 3/4 ϕ_1 + 1/2 ϕ_2 + 3/4 ϕ_3 + 1/2 ϕ_4 + 3/4 ϕ_5
  ϕ_2 = 1/4 ϕ_1 + 1/4 ϕ_2 +  0  ϕ_3 + 1/4 ϕ_4 +  0  ϕ_5
  ϕ_3 =  0  ϕ_1 + 1/4 ϕ_2 +  0  ϕ_3 +  0  ϕ_4 +  0  ϕ_5
  ϕ_4 =  0  ϕ_1 +  0  ϕ_2 + 1/4 ϕ_3 +  0  ϕ_4 + 1/4 ϕ_5
  ϕ_5 =  0  ϕ_1 +  0  ϕ_2 +  0  ϕ_3 + 1/4 ϕ_4 +  0  ϕ_5

and the total probability equation ϕ_1 + ϕ_2 + ϕ_3 + ϕ_4 + ϕ_5 = 1.
This is 6 equations in 5 unknowns, so it is overdetermined. Actually, the first 5 equations are underdetermined; they add up to ϕ_1 + ⋯ + ϕ_5 = ϕ_1 + ⋯ + ϕ_5.
Knock out the ϕ_5 = ⋯ equation and solve the rest of them to get
  ϕ = (11/16, 15/64, 15/256, 1/64, 1/256) ≈ (0.6875, 0.2344, 0.0586, 0.0156, 0.0039).

19 Solving equations in Matlab or R
(this method doesn't use the functions for eigenvectors)

Matlab
>> eye(5)                         % identity
>> P' - eye(5)                    % transpose minus identity
>> [P' - eye(5) ; ones(1,5)]
>> sstate = [P' - eye(5); ones(1,5)] \ [0; 0; 0; 0; 0; 1]

R
> diag(1,5)                       # identity
> t(P) - diag(1,5)                # transpose minus identity
> rbind(t(P) - diag(1,5), c(1,1,1,1,1))
> sstate = qr.solve(rbind(t(P) - diag(1,5),
+                         c(1,1,1,1,1)), c(0,0,0,0,0,1))
> sstate

20 Eigenvalues of P

A transition matrix is stochastic: all entries are ≥ 0 and its row sums are all 1, so P·1 = 1, where 1 = (1, 1, ..., 1)ᵀ. Thus, λ = 1 is an eigenvalue of P and 1 is a right eigenvector.
There is also a left eigenvector of P with eigenvalue 1: w P = w, where w is a row vector. Normalize it so its entries add up to 1, to get the stationary distribution ϕ.
All eigenvalues λ of a stochastic matrix have |λ| ≤ 1.
An irreducible aperiodic Markov chain has just one eigenvalue = 1.
The second largest |λ| determines how fast P^n converges (using the spectral decomposition).
See the old handout for examples with complications: periodic Markov chains, complex eigenvalues, etc.

21 Technicalities: reducibility

(Figure: an example Markov chain on states 1, ..., 10.)

A Markov chain is irreducible if every state can be reached from every other state after enough steps.
The above example is reducible since there are states that cannot be reached from each other: after sufficient time, you are either stuck in state 3, the component {4, 5, 6, 7}, or the component {8, 9, 10}.

22 Technicalities: period

State i has period d if the Markov chain can only go from state i back to itself in a number of steps that is a multiple of d, where d is the largest number with that property. If d > 1 then state i is periodic.
A Markov chain is periodic if at least one state is periodic, and is aperiodic if no states are periodic.
All states in a component have the same period.
Component {4, 5, 6, 7} has period 2 and component {8, 9, 10} has period 3, so the Markov chain is periodic.

23 Technicalities: absorbing states

An absorbing state has all of its outgoing edges going to itself; e.g., state 3 above.
An irreducible Markov chain with two or more states cannot have any absorbing states.

24 Technicalities: summary

There are generalizations to infinite numbers of discrete or continuous states, and to continuous time.
We will work with Markov chains that are finite, discrete, irreducible, and aperiodic, unless otherwise stated.
For a finite discrete Markov chain on two or more states:
  irreducible and aperiodic (with no absorbing states)
is equivalent to
  P or a power of P has all entries greater than 0,
and in this case, lim_{n→∞} P^n exists and all its rows are the stationary distribution.
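The criterion in the last statement is easy to test numerically; here is a rough R sketch (my own helper; the power cap uses Wielandt's bound (s−1)² + 1 for primitive matrices):

is_primitive <- function(P) {
  s <- nrow(P)
  Q <- diag(s)
  for (n in 1:((s - 1)^2 + 1)) {
    Q <- Q %*% P
    if (all(Q > 0)) return(TRUE)   # P^n has all entries > 0: irreducible and aperiodic
  }
  FALSE
}
# is_primitive(P) is TRUE for the GAGA chain P1 and its no-overlap variant P2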

25 Reverse Markov Chain

Given a Markov chain that accurately models the forwards progression of time, a reverse version of it can be constructed to make predictions about the past. For example, this is done in models of nucleotide evolution.
The graph of the reverse Markov chain has the same nodes as the forwards chain, and the same edges but with reversed directions and new probabilities.
The transition matrix P of the forwards Markov chain was defined so that P(X_{t+1} = j | X_t = i) = p_ij at all times t.
Assume the forwards machine has run long enough to reach the stationary distribution, P(X_t = i) = ϕ_i.
The reverse Markov chain has transition matrix Q, where

  q_ij = P(X_t = j | X_{t+1} = i) = P(X_{t+1} = i | X_t = j) P(X_t = j) / P(X_{t+1} = i) = p_ji ϕ_j / ϕ_i

26 Reverse Markov Chain of M1

(Figure: the forwards chain M1 and its reverse, on states 0 (∅), G, GA, GAG, GAGA, with transition matrices P1 and Q.)

Stationary distribution of P1 is ϕ = (11/16, 15/64, 15/256, 1/64, 1/256).
Example of one entry: the edge ∅ → GA in the reverse chain has
  q_13 = p_31 ϕ_3 / ϕ_1 = (3/4)(15/256)/(11/16) ≈ 0.0639.
This means that in the steady state of the forwards chain, when ∅ is entered, there is a probability of about 0.064 that the previous state was GA.

27 Matlab and R

Matlab
>> d_sstate = diag(sstate)
>> Q = inv(d_sstate) * P' * d_sstate

R
> d_sstate = diag(sstate)
> Q = solve(d_sstate) %*% t(P) %*% d_sstate

28 Expected time from state i till next time in state j

(Figure: the chain M1 with states ∅, G, GA, GAG, GAGA.)

If M1 is in state ∅, what is the expected number of steps until the next time it is in state GAGA?
More generally, what's the expected number of steps from state i to state j?
Fix the end state j once and for all. Simultaneously solve for the expected number of steps from all start states i.
For i = 1, ..., s, let N_i be a random variable for the number of steps from state i to the next time the chain is in state j.
"Next time" means that if i = j, we count until the next time at state j, with N_j ≥ 1; we don't count it as already being there in 0 steps.
We'll develop systems of equations for E(N_i), Var(N_i), and P_{N_i}(x).

29 Expected time from state i till next time in state j

(Figure: from state 1 at time t, one step leads to state r = 1, 2, 3, 4 with probability P_1r, after which N_r more steps are needed, or directly to state j = 5 with probability P_15 in exactly 1 step; the dotted paths have no occurrences of state j in the middle.)

Fix j = 5. Random variable N_r = number of steps from state r to the next time in state j.
Expected number of steps from state i = 1 to j = 5 (repeat this for all i):

  E(N_1) = P_11 E(N_1 + 1) + P_12 E(N_2 + 1) + P_13 E(N_3 + 1) + P_14 E(N_4 + 1) + P_15 E(1)

Each N_r has the same distribution whether counted from time t or from time t + 1, and we can expand the E(N_r + 1) terms:

  E(N_1) = Σ_r P_1r + Σ_{r: r≠j} P_1r E(N_r) = 1 + Σ_{r: r≠j} P_1r E(N_r)

30 Expected time from state i till next time in state j

Recall we fixed j and defined N_i relative to it. Start in state i. There is a probability P_ir of going one step to state r.
If r = j, we are done in one step: E(N_i | 1st step is i → j) = 1.
If r ≠ j, the expected number of steps after the first step is E(N_r):
  E(N_i | 1st step is i → r) = E(N_r + 1) = E(N_r) + 1
Combine with the probability of each value of r:

  E(N_i) = P_ij · 1 + Σ_{r=1, r≠j}^{s} P_ir E(N_r + 1) = P_ij + Σ_{r=1, r≠j}^{s} P_ir (E(N_r) + 1)
         = Σ_{r=1}^{s} P_ir + Σ_{r=1, r≠j}^{s} P_ir E(N_r) = 1 + Σ_{r=1, r≠j}^{s} P_ir E(N_r)

Doing this for all s states, i = 1, ..., s, gives s equations in the s unknowns E(N_1), ..., E(N_s).

31 Expected times between states in M1: times to state 5

  E(N_1) = 3/4 (E(N_1) + 1) + 1/4 (E(N_2) + 1)                       = 1 + 3/4 E(N_1) + 1/4 E(N_2)
  E(N_2) = 1/2 (E(N_1) + 1) + 1/4 (E(N_2) + 1) + 1/4 (E(N_3) + 1)    = 1 + 1/2 E(N_1) + 1/4 E(N_2) + 1/4 E(N_3)
  E(N_3) = 3/4 (E(N_1) + 1) + 1/4 (E(N_4) + 1)                       = 1 + 3/4 E(N_1) + 1/4 E(N_4)
  E(N_4) = 1/2 (E(N_1) + 1) + 1/4 (E(N_2) + 1) + 1/4 · 1             = 1 + 1/2 E(N_1) + 1/4 E(N_2)
  E(N_5) = 3/4 (E(N_1) + 1) + 1/4 (E(N_4) + 1)                       = 1 + 3/4 E(N_1) + 1/4 E(N_4)

This is 5 equations in 5 unknowns E(N_1), ..., E(N_5). Matrix format:

  [  1/4  -1/4   0     0    0 ] [ E(N_1) ]   [ 1 ]
  [ -1/2   3/4  -1/4   0    0 ] [ E(N_2) ]   [ 1 ]
  [ -3/4   0     1    -1/4  0 ] [ E(N_3) ] = [ 1 ]
  [ -1/2  -1/4   0     1    0 ] [ E(N_4) ]   [ 1 ]
  [ -3/4   0     0    -1/4  1 ] [ E(N_5) ]   [ 1 ]

The coefficient matrix is the identity minus a copy of P in which the j-th column (here j = 5) has been zeroed out; the right side is all 1's.
Solution: E(N_1) = 272, E(N_2) = 268, E(N_3) = 256, E(N_4) = 204, E(N_5) = 256.
Matlab and R: enter the matrix C and vector r, then solve C x = r with
  Matlab: x = C\r  or  x = inv(C)*r
  R: x = solve(C, r)
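The same computation in R (a short sketch; P is the GAGA matrix P1 and j = 5):

P <- rbind(c(3/4, 1/4, 0, 0, 0), c(1/2, 1/4, 1/4, 0, 0), c(3/4, 0, 0, 1/4, 0),
           c(1/2, 1/4, 0, 0, 1/4), c(3/4, 0, 0, 1/4, 0))
expected_steps_to <- function(P, j) {
  Ptilde <- P
  Ptilde[, j] <- 0                        # zero out the j-th column of P
  C <- diag(nrow(P)) - Ptilde             # coefficient matrix
  solve(C, rep(1, nrow(P)))               # E(N_1), ..., E(N_s)
}
expected_steps_to(P, 5)                   # 272 268 256 204 256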

32 Variance and PGF of number of steps between states

We may compute E(g(N_i)) for any function g by setting up recurrences in the same way. For each i = 1, ..., s:

  E(g(N_i)) = P_ij g(1) + Σ_{r≠j} P_ir E(g(N_r + 1)) = (expansion depending on g)

Variance of the N_i's: Var(N_i) = E(N_i²) − (E(N_i))²

  E(N_i²) = P_ij · 1² + Σ_{r=1, r≠j}^{s} P_ir E((N_r + 1)²) = 1 + 2 Σ_{r=1, r≠j}^{s} P_ir E(N_r) + Σ_{r=1, r≠j}^{s} P_ir E(N_r²)

Plug in the previous solution of E(N_1), ..., E(N_s). Then solve the s equations for the s unknowns E(N_1²), ..., E(N_s²).

PGF: P_{N_i}(x) = E(x^{N_i}) = Σ_{n=0}^{∞} P(N_i = n) x^n

  E(x^{N_i}) = P_ij x + Σ_{r=1, r≠j}^{s} P_ir E(x^{N_r + 1}) = P_ij x + Σ_{r=1, r≠j}^{s} P_ir x E(x^{N_r})

Solve the s equations for the s unknowns E(x^{N_1}), ..., E(x^{N_s}).
See the old handout for a worked out example.

33 Powers of matrices (see separate slides)

Sample matrix:
  P = [ 8  -1 ]
      [ 6   3 ]

Diagonalization: P = V D V⁻¹ with
  V = [ 1  2 ]    D = [ 5  0 ]    V⁻¹ = [ -2     1   ]
      [ 3  4 ]        [ 0  6 ]          [ 3/2  -1/2 ]

  P^n = (V D V⁻¹)(V D V⁻¹) ⋯ (V D V⁻¹) = V D^n V⁻¹ = [ 1  2 ] [ 5^n  0  ] [ -2     1   ]
                                                      [ 3  4 ] [ 0   6^n ] [ 3/2  -1/2 ]

When a square (s × s) matrix P has distinct eigenvalues, it can be diagonalized as P = V D V⁻¹, where
  D is a diagonal matrix of the eigenvalues of P (in any order);
  the columns of V are right eigenvectors of P (in the same order as D);
  the rows of V⁻¹ are left eigenvectors of P (in the same order as D).
If any eigenvalues are equal, it may or may not be diagonalizable, but there is a generalization called Jordan Canonical Form, P = V J V⁻¹, giving P^n = V J^n V⁻¹. J has the eigenvalues on the diagonal and 1's and 0's just above it, and is also easy to raise to powers.

34 Matrix powers: spectral decomposition (distinct eigenvalues)

Powers of P:  P^n = (V D V⁻¹)(V D V⁻¹) ⋯ = V D^n V⁻¹

  V D^n V⁻¹ = V [ 5^n  0  ] V⁻¹ = V [ 5^n  0 ] V⁻¹ + V [ 0   0  ] V⁻¹
                [ 0   6^n ]         [ 0    0 ]          [ 0  6^n ]

  V [ 5^n  0 ] V⁻¹ = 5^n [ 1 ] [ -2  1 ] = 5^n [ -2  1 ]    (= λ_1^n r_1 l_1)
    [ 0    0 ]           [ 3 ]                 [ -6  3 ]

  V [ 0   0  ] V⁻¹ = 6^n [ 2 ] [ 3/2  -1/2 ] = 6^n [ 3  -1 ]    (= λ_2^n r_2 l_2)
    [ 0  6^n ]           [ 4 ]                     [ 6  -2 ]

Spectral decomposition of P^n:

  P^n = V D^n V⁻¹ = λ_1^n r_1 l_1 + λ_2^n r_2 l_2 = 5^n [ -2  1 ] + 6^n [ 3  -1 ]
                                                        [ -6  3 ]       [ 6  -2 ]
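A quick numerical check of this decomposition in R (a sketch; eigen scales the eigenvectors differently than above, but the rank-one terms λ^n r l come out the same):

A <- matrix(c(8, 6, -1, 3), nrow = 2)      # the sample matrix P
e <- eigen(A)                              # eigenvalues 6 and 5
V <- e$vectors
Vi <- solve(V)                             # rows of V^{-1} are left eigenvectors
n <- 5
A_n <- e$values[1]^n * V[, 1] %o% Vi[1, ] +
       e$values[2]^n * V[, 2] %o% Vi[2, ]
max(abs(A_n - A %*% A %*% A %*% A %*% A))  # essentially 0 (round-off only)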

35 Jordan Canonical Form

Matrices with two or more equal eigenvalues cannot necessarily be diagonalized. (Matlab and R do not give an error or warning when this happens.)
The Jordan Canonical Form is a generalization that turns into diagonalization when possible, and still works otherwise:

  P = V J V⁻¹     J = [ B_1  0    0   ]     P^n = V J^n V⁻¹     J^n = [ B_1^n  0      0     ]
                      [ 0    B_2  0   ]                               [ 0      B_2^n  0     ]
                      [ 0    0    B_3 ]                               [ 0      0      B_3^n ]

Each Jordan block has one eigenvalue on its diagonal and 1's just above the diagonal; for example, for a 3 × 3 block,

  B_i = [ λ_i  1    0   ]      B_i^n = [ λ_i^n   n λ_i^{n-1}   (n choose 2) λ_i^{n-2} ]
        [ 0    λ_i  1   ]              [ 0       λ_i^n         n λ_i^{n-1}            ]
        [ 0    0    λ_i ]              [ 0       0             λ_i^n                  ]

In applications where repeated eigenvalues are a possibility, it's best to use the Jordan Canonical Form.

36 Jordan Canonical Form for P in Matlab
(R doesn't currently have JCF available either built-in or as an add-on)

>> P = [ [ 3/4, 1/4, 0, 0, 0 ];   % (empty)
         [ 2/4, 1/4, 1/4, 0, 0 ]; % G
         [ 3/4, 0, 0, 1/4, 0 ];   % GA
         [ 2/4, 1/4, 0, 0, 1/4 ]; % GAG
         [ 3/4, 0, 0, 1/4, 0 ] ]  % GAGA
>> [V,J] = jordan(P)
>> Vi = inv(V)
>> V * J * Vi     % reproduces P

(The slide shows the numerical values of V, J, and V⁻¹.)

37 Powers of P using JCF

P = V J V⁻¹ gives P^n = V J^n V⁻¹, and J^n is easy to compute:
>> J
>> J^2
>> J^3
>> J^4
>> J^5
(The slide shows the numerical values of J, J², ..., J⁵.)
For this matrix, J^n = J⁴ when n ≥ 4, so P^n = V J^n V⁻¹ = V J⁴ V⁻¹ = P⁴ for n ≥ 4.

38 Non-overlapping occurrences of GAGA

(Figure: automaton M2 with states 0 (∅), G, GA, GAG, GAGA.)

  P2 = [ 3/4  1/4  0    0    0
         1/2  1/4  1/4  0    0
         3/4  0    0    1/4  0
         1/2  1/4  0    0    1/4
         3/4  1/4  0    0    0   ]

>> [V2,J2] = jordan(P2)
>> V2i = inv(V2)

(The slide shows the numerical values of V2, J2, and V2⁻¹; some entries are complex, e.g., the eigenvalues 0 ± i/4 appear in J2.)

39 Non-overlapping occurrences of GAGA: JCF

J2 is block diagonal with blocks [ 0 1 ; 0 0 ] (a 2 × 2 Jordan block for eigenvalue 0), [ 1 ], [ i/4 ], and [ -i/4 ]. Since the 2 × 2 block squares to zero,

  (J2)^n = diag(0, 0, 1, (i/4)^n, (-i/4)^n)   for n ≥ 2.

One eigenvalue = 1. It's the third one listed, so the stationary distribution is the third row of (V2)⁻¹, normalized:
>> V2i(3,:) / sum(V2i(3,:))

Two eigenvalues = 0. The interpretation of one of them is that the first and last rows of P2 are equal, so (1, 0, 0, 0, -1) is a left eigenvector of P2 with eigenvalue 0.
Two complex eigenvalues, 0 ± i/4. Since P2 is real, all complex eigenvalues must come in conjugate pairs. The eigenvectors also come in conjugate pairs (the last 2 columns of V2; the last 2 rows of (V2)⁻¹).

40 Spectral decomposition with JCF and complex eigenvalues

For n ≥ 2, (J2)^n = diag(0, 0, 1, (i/4)^n, (-i/4)^n), so

  (P2)^n = (V2)(J2)^n(V2)⁻¹
         = [ r_1 r_2 ] [ 0 1 ]^n [ l_1 ]  +  r_3 (1)^n l_3  +  r_4 (i/4)^n l_4  +  r_5 (-i/4)^n l_5
                       [ 0 0 ]   [ l_2 ]

where r_1, ..., r_5 are the columns of V2 and l_1, ..., l_5 are the rows of (V2)⁻¹.
The first term vanishes when n ≥ 2, so when n ≥ 2 the format is

  (P2)^n = 1^n S3 + (i/4)^n S4 + (-i/4)^n S5 = S3 + (i/4)^n S4 + (-i/4)^n S5

41 Spectral decomposition with JCF and complex eigenvalues

For n ≥ 2, (P2)^n = S3 + (i/4)^n S4 + (-i/4)^n S5, where
>> S3 = V2(:,3) * V2i(3,:)
>> S4 = V2(:,4) * V2i(4,:)
>> S5 = V2(:,5) * V2i(5,:)
(The slide shows the numerical values of S3, S4, and S5; S4 and S5 have complex entries.)

S3 corresponds to the stationary distribution (each of its rows equals ϕ).
S4 and S5 are complex conjugates, so (i/4)^n S4 + (-i/4)^n S5 is a sum of two complex conjugates; thus, it is real-valued, even though complex numbers are involved in the computation.
