CHAPTER III - MARKOV CHAINS

JOSEPH G. CONLON

1. General Theory of Markov Chains

We have already discussed the standard random walk on the integers $\mathbb{Z}$. A Markov chain can be viewed as a generalization of this. We shall only consider in this chapter Markov chains on a countable (finite or infinite) state space $F$. In the finite case we can identify $F$ with the set of positive integers $F = \{1, 2, \dots, m\}$ for some $m = |F|$, and in the infinite case with all positive integers $F = \{n \in \mathbb{Z} : n \ge 1\}$. The chain is determined by a set of transition probabilities $p_t(i,j)$, $i,j \in F$, $t = 0, 1, 2, \dots$, where $t$ denotes discrete time and $p_t(i,j)$ is the probability of moving from state $i$ to state $j$ at time $t$. Thus

(1.1)  $p_t(i,j) \ge 0$, $i,j \in F$;  $\sum_{j \in F} p_t(i,j) = 1$, $i \in F$.

To define a probability space associated with these transition probabilities we need to specify a distribution function $\pi : F \to \mathbb{R}$ for the initial state of the chain. Thus $\pi(\cdot)$ satisfies

(1.2)  $\pi(j) \ge 0$, $j \in F$;  $\sum_{j \in F} \pi(j) = 1$.

The probability space $\Omega$ is given by $\Omega = F^\infty$, the infinite product of the state space $F$, and the $\sigma$-field $\mathcal{F}$ is the Borel field generated by finite dimensional rectangles as usual. Random variables $X_n : \Omega \to F$, $n = 0, 1, 2, \dots$, are defined by $X_n(\omega) = \omega_n$, $n = 0, 1, \dots$, where $\omega = (\omega_0, \omega_1, \dots) \in \Omega$. The probability measure $P$ depends on $\pi(\cdot)$, so we shall denote it by $P_\pi$. The measure $P_\pi$ is determined once we know the p.d.f. of every set of variables $(X_0, \dots, X_m)$, $m \ge 0$. This is given by the formula

(1.3)  $P(X_0 = j_0, X_1 = j_1, \dots, X_m = j_m) = \pi(j_0) \prod_{t=0}^{m-1} p_t(j_t, j_{t+1})$.

Note that the p.d.f. of $X_0$ is $\pi(\cdot)$, and for any subset $A \subset F$,

(1.4)  $P(X_{t+1} \in A \mid X_t, X_{t-1}, \dots, X_0) = P(X_{t+1} \in A \mid X_t)$ with probability 1.

The so-called Markov property or no-memory property (1.4) characterizes a Markov chain, and can serve as an alternative starting point for the theory of Markov chains. Note the similarity of (1.4) to the definition of a martingale.

One big advantage of studying Markov chains is that a technique is available to compute many expectation values: the so-called backward Kolmogorov equation. Thus let $f : F \to \mathbb{R}$ be a function and suppose we want to evaluate the expectation

(1.5)  $E[\, f(X_T) \mid X_0 = i\,]$, $i \in F$.

Then (1.5) is given by $u(i,0)$, where $u(i,t)$, $i \in F$, $t = 0, \dots, T$, is the solution to the terminal value problem

(1.6)  $u(i,t) = \sum_{j \in F} p_t(i,j)\, u(j, t+1)$, $i \in F$, $t < T$;  $u(i,T) = f(i)$, $i \in F$.

Note that equation (1.6) is to be solved backwards in time, from time $T$ to time $0$.
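The recursion (1.6) is straightforward to carry out numerically. The following sketch (an illustration added to these notes, using an arbitrary three-state chain with transition probabilities taken independent of $t$ for simplicity) evaluates the expectation (1.5) by iterating (1.6) backwards from $t = T$, and checks the answer by Monte Carlo.

```python
import numpy as np

# Backward Kolmogorov equation (1.6) for a small chain. The matrix P
# and the payoff f are arbitrary illustrative choices.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])   # p(i, j); each row sums to 1
f = np.array([1.0, 0.0, 2.0])     # terminal data f(i)
T = 10                            # terminal time

# u(., T) = f, and u(i, t) = sum_j p(i, j) u(j, t+1) for t < T.
u = f.copy()
for t in range(T - 1, -1, -1):
    u = P @ u
print(u)                          # u[i] = E[ f(X_T) | X_0 = i ]

# Monte Carlo sanity check from X_0 = 0.
rng = np.random.default_rng(0)
vals = []
for _ in range(20000):
    x = 0
    for _ in range(T):
        x = rng.choice(3, p=P[x])
    vals.append(f[x])
print(np.mean(vals))              # close to u[0]
```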

We turn now to Markov chains where the transition probabilities are independent of time, whence $p_t(i,j) = p(i,j)$, $t = 0, 1, \dots$, $i,j \in F$. An important property of these Markov chains is the strong Markov property. Observe from (1.4) that

(1.7)  $P(X_{t+1} \in A \mid X_t, X_{t-1}, \dots, X_0) = P(X_{t+1} \in A \mid X_t) = P(X_1 \in A \mid X_0)$ with probability 1.

The strong Markov property is a random version of (1.7). Thus let $\tau : \Omega \to \mathbb{Z}$ be a nonnegative measurable function which has the property

(1.8)  $\{\omega \in \Omega : \tau(\omega) = T\} \in \mathcal{F}(X_0, X_1, \dots, X_T)$, $T = 0, 1, \dots$

Such a function is called a stopping time.

Proposition 1.1 (Strong Markov Property). Consider the time independent Markov chain $(X_0, X_1, \dots)$ with initial distribution $\pi(\cdot)$ and let $\tau(\cdot)$ be a stopping time for this chain. Then the sequence of variables $(X_\tau, X_{\tau+1}, X_{\tau+2}, \dots)$ has the same distribution as the sequence $(X_0, X_1, \dots)$ with initial distribution $\pi_\tau(\cdot)$, where $\pi_\tau(\cdot)$ is the p.d.f. of $X_\tau$ under $\pi(\cdot)$.

Proof. We illustrate the proof in the simplest case, which is then easy to generalize. Thus

(1.9)  $P(X_\tau = j_0, X_{\tau+1} = j_1, X_{\tau+2} = j_2) = \sum_{m=0}^{\infty} P(\tau = m, X_m = j_0, X_{m+1} = j_1, X_{m+2} = j_2)$.

Now from (1.8) it follows that

(1.10)  $P(\tau = m, X_m = j_0, X_{m+1} = j_1, X_{m+2} = j_2) = P(\tau = m, X_m = j_0)\, p(j_0, j_1)\, p(j_1, j_2)$.

Since we have

(1.11)  $\sum_{m=0}^{\infty} P(\tau = m, X_m = j_0) = \pi_\tau(j_0)$,

we conclude

(1.12)  $P(X_\tau = j_0, X_{\tau+1} = j_1, X_{\tau+2} = j_2) = \pi_\tau(j_0)\, p(j_0, j_1)\, p(j_1, j_2)$,

whence the result holds.

Definition 1. An initial distribution $\pi(\cdot)$ for the time independent Markov chain is said to be stationary if it satisfies the equation

(1.13)  $\sum_{i \in F} \pi(i)\, p(i,j) = \pi(j)$, $j \in F$.
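For a finite state space, equation (1.13) says that $\pi$ is a left eigenvector of the transition matrix with eigenvalue 1 (a point of view developed in Section 2 below). As a numerical illustration, not part of the original notes, a stationary distribution can be computed as follows; the transition matrix is an arbitrary example.

```python
import numpy as np

# Stationary equation (1.13) as a left-eigenvector problem: pi P = pi.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

vals, vecs = np.linalg.eig(P.T)          # right eigenvectors of P^T
k = np.argmin(np.abs(vals - 1.0))        # eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                       # normalize as in (1.2)

print(pi)        # stationary distribution
print(pi @ P)    # equals pi, i.e. (1.13) holds
```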

Evidently (1.13) implies that if $X_0$ has distribution $\pi(\cdot)$ then $X_1$ also has distribution $\pi(\cdot)$. One can easily see further that the entire sequence $(X_0, X_1, \dots)$ is stationary in the sense of Definition 4 of Chapter II (page 18). An obvious question to ask then is: does a Markov chain have a stationary distribution, and if so is it unique? We can give a satisfactory answer to the uniqueness question by introducing the notion of indecomposability of the chain. We say the chain is decomposable if there exist disjoint subsets $A_0, A_1$ of $F$ such that

(1.14)  $\sum_{j \in A_k} p(i,j) = 1$, $i \in A_k$, $k = 0, 1$.

Thus (1.14) states that the chain allows no communication between the subsets $A_0$ and $A_1$ of $F$. Hence we may reduce the original chain to independent chains on the reduced state spaces $A_0$, $A_1$. A chain which is not decomposable is called indecomposable.

Proposition 1.2. Suppose the Markov chain is indecomposable and a stationary distribution $\pi(\cdot)$ for it exists. Then $\pi(\cdot)$ is unique and the associated stationary sequence $(X_0, X_1, \dots)$ is ergodic.

Proof. We first show that for stationary $\pi(\cdot)$ the corresponding stationary sequence $(X_0, X_1, \dots)$ is ergodic. Thus let $T$ be the shift operator on $\Omega$, $P = P_\pi$ be the probability measure on $\Omega$, and $C \in \mathcal{F}$ be an invariant set. We define a function $\varphi : F \to \mathbb{R}$ by $\varphi(j) = P(C \mid X_0 = j)$, $j \in F$. Evidently $\varphi(j)$ is only uniquely defined if $\pi(j) > 0$; otherwise we can take $\varphi(j)$ to be anything we want. Note now

(1.15)  $\varphi(X_0) = P(C \mid X_0)$ with probability 1,

and

(1.16)  $P(C \mid X_0) = E[\, P(C \mid X_1, X_0) \mid X_0\,]$ with probability 1.

By the invariance of $C$ we have $C \in \mathcal{F}(X_1, X_2, \dots)$, whence it follows from the Markov property (1.4) that

(1.17)  $P(C \mid X_1, X_0) = P(C \mid X_1) = \varphi(X_1)$ with probability 1.

Evidently (1.16), (1.17) imply

(1.18)  $\sum_{j \in F} p(i,j)\, \varphi(j) = \varphi(i)$, $i \in F$ with $\pi(i) > 0$.

Note from (1.13) that if $\pi(i) > 0$ and $\pi(j) = 0$ then $p(i,j) = 0$, whence the values of $\varphi(j)$ for $\pi(j) = 0$ do not enter on the LHS of (1.18).

Observe next that for any set $C \in \mathcal{F}(X_0, X_1, \dots)$ one has

(1.19)  $\lim_{n \to \infty} P(C \mid X_n, X_{n-1}, \dots, X_0) = \chi_C$ with probability 1.

This intuitively obvious result follows from the martingale convergence theorem. In fact, define a sequence of random variables $Z_n$, $n = 0, 1, 2, \dots$, by $Z_n = P(C \mid X_n, X_{n-1}, \dots, X_0)$. Then from the definition of conditional expectation one has

(1.20)  $E[\, Z_n \mid X_{n-1}, \dots, X_0\,] = Z_{n-1}$, $n = 1, 2, \dots$, with probability 1,

which implies

(1.21)  $E[\, Z_n \mid Z_{n-1}, \dots, Z_0\,] = Z_{n-1}$, $n = 1, 2, \dots$, with probability 1.

Hence $\lim_{n \to \infty} Z_n$ exists with probability 1, and it is easy to see that the limit must be $\chi_C$. Observe now that for the invariant set $C$, one can see as in (1.17) that $P(C \mid X_n, X_{n-1}, \dots, X_0) = \varphi(X_n)$, whence we have

(1.22)  $\lim_{n \to \infty} \varphi(X_n) = \chi_C$ with probability 1.

Since we may assume $0 \le \varphi(j) \le 1$, $j \in F$, and we have from (1.22)

(1.23)  $E[\, \varphi(X_0)\{1 - \varphi(X_0)\}\,] = \lim_{n \to \infty} E[\, \varphi(X_n)\{1 - \varphi(X_n)\}\,] = 0$,

we conclude that $\varphi(j)$ can only take the values 0 or 1 for $j \in F$ with $\pi(j) > 0$. Now let us define sets $A_1, A_0$ by

(1.24)  $A_1 = \{i \in F : \pi(i) > 0,\ \varphi(i) = 1\}$,  $A_0 = \{i \in F : \pi(i) > 0,\ \varphi(i) = 0\}$.

It follows from (1.18) that

(1.25)  $\sum_{j \in A_1} p(i,j) = 1$, $i \in A_1$.

Note that in (1.25) we are again using the fact that if $\pi(i) > 0$ and $\pi(j) = 0$ then $p(i,j) = 0$. Replacing the function $\varphi(\cdot)$ by $1 - \varphi(\cdot)$ in (1.18), we see as in (1.25) that also

(1.26)  $\sum_{j \in A_0} p(i,j) = 1$, $i \in A_0$.

Evidently (1.25), (1.26) and the indecomposability assumption imply that either $A_0$ or $A_1$ is empty, so let us assume $A_0$ is empty. It follows then from (1.15) that

(1.27)  $P(C) = E[\, \varphi(X_0)\,] = 1$.

We similarly see that $P(C) = 0$ if $A_1$ is empty. We have proved the ergodicity of the stationary sequence.

We can prove the uniqueness of $\pi(\cdot)$ by using Proposition 4.3 of Chapter II. Thus let $\pi_1(\cdot)$ and $\pi_2(\cdot)$ be two invariant distributions with corresponding probability measures $P_1$, $P_2$. From Proposition 4.3 of Chapter II it follows that $P_1$ and $P_2$ are mutually singular, so there exists $E \in \mathcal{F}$ such that $P_1(E) = 1$, $P_2(E) = 0$. Now let $C \in \mathcal{F}$ be the set

(1.28)  $C = \bigcup_{n=0}^{\infty} T^{-n} E$,

whence it follows that $T^{-1} C \subset C$. Evidently $P_1(C) = 1$, and $P_2(C) = 0$ by the measure preserving property of $P_2$. We consider now the distribution $\pi(\cdot)$ defined by $\pi(\cdot) = [\pi_1(\cdot) + \pi_2(\cdot)]/2$, which evidently also satisfies (1.13). The corresponding probability measure is $P = [P_1 + P_2]/2$. Hence $T$ is measure preserving and ergodic under $P$. Since $T^{-1} C \subset C$, it follows from the measure preserving property of $T$ that

(1.29)  $P([C \setminus T^{-1} C] \cup [T^{-1} C \setminus C]) = 0$.

Lemma 3.2 of Chapter II then implies that $P(C) = 0$ or $P(C) = 1$. However we easily see that $P(C) = [P_1(C) + P_2(C)]/2 = 1/2$, which is a contradiction.
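The indecomposability hypothesis in Proposition 1.2 cannot be dropped. The following sketch (an illustration added to these notes, with an arbitrary block transition matrix) exhibits a decomposable chain for which uniqueness of the stationary distribution fails.

```python
import numpy as np

# A decomposable chain in the sense of (1.14): the states {0, 1} and
# {2, 3} do not communicate (arbitrary illustrative numbers).
P = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.4, 0.6, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.2, 0.8]])

pi1 = np.array([4/7, 3/7, 0.0, 0.0])   # stationary, supported on {0, 1}
pi2 = np.array([0.0, 0.0, 2/7, 5/7])   # stationary, supported on {2, 3}

print(pi1 @ P)   # = pi1
print(pi2 @ P)   # = pi2
# Every mixture is stationary as well, so uniqueness fails:
a = 0.25
print((a * pi1 + (1 - a) * pi2) @ P)
```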

Definition 2. A time independent Markov chain is said to be irreducible if all states communicate. That is, for any $i, j \in F$,

(1.30)  $P(X_n = j \mid X_0 = i) > 0$ for some $n \ge 1$ which can depend on $(i,j)$.

Note that irreducibility implies indecomposability.

Lemma 1.1. Suppose a Markov chain is time independent and irreducible, and a stationary distribution $\pi(\cdot)$ for it exists. Then $\pi(j) > 0$ for all $j \in F$.

Proof. We use equation (1.13) and choose $i \in F$ such that $\pi(i) > 0$. Evidently if (1.30) holds for $n = 1$ then $\pi(j) > 0$. Observe now that (1.13) implies

(1.31)  $\sum_{i_1, i_2 \in F} \pi(i_1)\, p(i_1, i_2)\, p(i_2, j) = \pi(j)$, $j \in F$.

Since

(1.32)  $P(X_2 = j \mid X_0 = i) = \sum_{i_2 \in F} p(i, i_2)\, p(i_2, j)$,

we conclude that if (1.30) holds for $n = 2$ then $\pi(j) > 0$. Evidently we can generalize this argument to any $n \ge 1$.

We can apply Proposition 5.2 of Chapter II to obtain a relation between recurrence for the Markov chain and the invariant measure.

Proposition 1.3. Suppose a time independent Markov chain is irreducible and has stationary distribution $\pi(\cdot)$. Then for all $i \in F$ one has $P(X_n = i \text{ infinitely often} \mid X_0 = i) = 1$. Furthermore, if $\tau_i$ is the first recurrence time to $i$, one has $E[\, \tau_i \mid X_0 = i\,] = 1/\pi(i)$.

Proof. The formula for the recurrence time is immediate from Proposition 5.2 since the stationary sequence is ergodic. This result evidently also implies

(1.33)  $P(X_n = i \text{ for some finite } n \mid X_0 = i) = 1$.

Recurrence, i.e. return to $i$ infinitely often, follows immediately from (1.33).
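The recurrence-time formula $E[\, \tau_i \mid X_0 = i\,] = 1/\pi(i)$ of Proposition 1.3 lends itself to a quick simulation check. The sketch below (an illustration added to these notes; the chain is an arbitrary irreducible example) estimates the mean first return time to a state by Monte Carlo and compares it with $1/\pi(i)$.

```python
import numpy as np

# Monte Carlo check of E[ tau_i | X_0 = i ] = 1/pi(i).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()                    # stationary distribution

rng = np.random.default_rng(1)
i = 0
times = []
for _ in range(5000):
    x, t = i, 0
    while True:                       # run until first return to i
        x = rng.choice(3, p=P[x])
        t += 1
        if x == i:
            break
    times.append(t)

print(np.mean(times), 1 / pi[i])      # the two numbers should be close
```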

2. Markov Chains with Finite State Space

We consider the case when the Markov chain is time independent and the state space is finite, $|F| = m$, in which case the transition probabilities $p(i,j)$, $i,j \in F$, form an $m \times m$ transition matrix $T$ with non-negative entries which satisfies

(2.1)  $T\mathbf{1} = \mathbf{1}$,  $\mathbf{1} = [1, 1, \dots, 1]$,

where $\mathbf{1}$ is regarded as a column vector. Equation (1.13) can be written in matrix form as

(2.2)  $\pi T = \pi$, or alternatively $T^* \pi = \pi$,

where $T^*$ is the adjoint of $T$ (and $\pi$ is regarded as a row vector in the first equation, a column vector in the second). Now (2.1) implies that $T$ has eigenvalue 1, and hence $T^*$ has eigenvalue 1. Thus a non-trivial solution to (2.2) exists.

Proposition 2.1. There exists a non-trivial solution $\pi$ to (2.2) with all nonnegative entries. If the chain is indecomposable (see (1.14)), it is unique.

Proof. Let $\pi$ be a non-trivial solution to (2.2). Evidently (2.2) and the triangle inequality imply

(2.3)  $\sum_{i \in F} |\pi(i)|\, p(i,j) \ge |\pi(j)|$, $j \in F$.

If strict inequality holds in (2.3) for some $j \in F$, then on summing (2.3) over $j \in F$ we conclude

(2.4)  $\sum_{i \in F} |\pi(i)| > \sum_{j \in F} |\pi(j)|$,

which is an impossibility. We have shown therefore that the vector with entries $|\pi(i)|$, $i \in F$, is an eigenvector of $T^*$ with eigenvalue 1.

Assume now that the chain is indecomposable and $\pi_1(\cdot)$ and $\pi_2(\cdot)$ are two distinct invariant measures. Setting $\pi(\cdot) = \pi_1(\cdot) - \pi_2(\cdot)$ and defining sets $A_0, A_1$ by

(2.5)  $A_0 = \{i \in F : \pi(i) > 0\}$,  $A_1 = \{i \in F : \pi(i) < 0\}$,

we see from the fact that $\sum_{i \in F} \pi(i) = 0$ that both $A_0$ and $A_1$ are non-empty. Using the fact that $\pi(\cdot)$ is a solution to (2.2), we conclude that equality holds in (2.3) for all $j \in F$. It follows that if $j \notin A_0 \cup A_1$, then $p(i,j) = 0$ for all $i \in A_0 \cup A_1$. Further, if $j \in A_1$ then $p(i,j) = 0$ for $i \in A_0$. We conclude

(2.6)  $\sum_{j \in A_0} p(i,j) = 1$, $i \in A_0$.

Since we get a similar result for $A_1$, the chain is decomposable, a contradiction.

We have already seen in Proposition 1.2 that the stationary sequence $(X_0, X_1, \dots)$ associated with the Markov chain is ergodic if the chain is indecomposable. Next we wish to show that under certain conditions on the transition matrix $T$ the sequence is strong mixing. In order to do this we first prove a classical theorem in the theory of positive matrices.

Theorem 2.1 (Perron-Frobenius Theorem). Suppose $T = (t_{i,j})$ is an $n \times n$ matrix with entries $t_{i,j}$ satisfying $t_{i,j} \ge 0$ and such that for some positive integer $k$ the entries of $T^k$ are all strictly positive. Then there exists an eigenvalue $r$ of $T$ such that:
(a) $r$ is real and strictly positive.
(b) $r$ is a simple root of the characteristic polynomial of $T$.
(c) If $\lambda \in \mathbb{C}$ is a root of the characteristic polynomial of $T$ different from $r$, then $|\lambda| < r$.
(d) The eigenvector of $T$ for $r$ can be chosen so that all its entries are strictly positive.
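Before giving the proof, it may be helpful to see the theorem numerically. Power iteration, sketched below on an arbitrary nonnegative matrix whose square is strictly positive (an illustration added to these notes, not part of the original), converges to the Perron root $r$ and its positive eigenvector precisely because of (c): every other eigenvalue is strictly smaller in modulus.

```python
import numpy as np

# Power iteration on a nonnegative matrix T whose square is strictly
# positive (so k = 2 works in Theorem 2.1); the entries are arbitrary.
T = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

x = np.ones(3)
for _ in range(200):
    y = T @ x
    r = np.linalg.norm(y)
    x = y / r                          # iterate stays strictly positive

print(r)                               # the Perron root r, parts (a), (c)
print(x)                               # strictly positive eigenvector, part (d)
print(np.max(np.abs(T @ x - r * x)))   # residual, approximately 0
```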

Proof. We first construct an eigenvector for $T$ with all positive entries. To do this, let $S$ be the set

(2.7)  $S = \{x = (x_1, \dots, x_n) \in \mathbb{R}^n : x_i > 0,\ i = 1, \dots, n,\ \|x\| = 1\}$,

so $S$ is the part of the unit sphere which lies strictly in the positive quadrant (if $n = 2$). Define now a function $r : S \to \mathbb{R}$ by

(2.8)  $r(x) = \min_{1 \le i \le n} \frac{(Tx)_i}{x_i}$.

It is clear that $r(\cdot)$ is a non-negative continuous function on $S$ and is bounded above as

(2.9)  $r(x) \le \max_{1 \le i \le n} \sum_{j=1}^{n} t_{i,j}$.

It follows from (2.9) that

(2.10)  $r = \sup_{x \in S} r(x) < \infty$.

Hence for all $x \in S$,

(2.11)  $(Tx)_i \le r x_i$ for some $i = i(x)$, $1 \le i \le n$, depending on $x \in S$.

Let $x_N \in S$, $N = 1, 2, \dots$, be a sequence such that $\lim_{N \to \infty} r(x_N) = r$. Then some subsequence of $x_N$ converges to a point $x^* \in \bar{S}$, the closure of $S$ in $\mathbb{R}^n$, so $\|x^*\| = 1$. Furthermore, since (2.8) implies $(Tx)_i \ge r(x)\, x_i$ for all $i$, passing to the limit shows that

(2.12)  $z = [T - r]x^*$

is a vector with all nonnegative entries. Since the matrix $T^k$ has all strictly positive entries, it follows that the vector $w_k = T^k z$ has all strictly positive entries if $z \ne 0$. In that case (2.12) implies

(2.13)  $r\big(T^k x^* / \|T^k x^*\|\big) > r$,

a contradiction to the definition of $r$, from which we conclude $z = 0$. This also implies that $T^k x^*$ is an eigenvector of $T$ with eigenvalue $r$ which has all strictly positive entries. We have proven (a) and (d).

Next we turn to (c). Let $\lambda \in \mathbb{C}$ be an eigenvalue of $T$, in which case there exists a non-zero vector $x \in \mathbb{C}^n$ such that $Tx = \lambda x$. Letting $\bar{x} \in \mathbb{R}^n$ be the vector $\bar{x} = (|x_1|, \dots, |x_n|)$, we see that

(2.14)  $(T\bar{x})_i \ge |\lambda|\, \bar{x}_i$, $i = 1, \dots, n$,

whence it follows from (2.11) that $|\lambda| \le r$. Supposing now $|\lambda| = r$, we can argue, in the same way we established that $x^*$ is an eigenvector of $T$ with eigenvalue $r$, that $\bar{x}$ is also an eigenvector of $T$ with eigenvalue $r$. Thus we have

(2.15)  $T^k x = \lambda^k x$,  $T^k \bar{x} = r^k \bar{x}$.

This implies

(2.16)  $|(T^k x)_i| = (T^k \bar{x})_i$, $1 \le i \le n$.

We conclude from (2.16), using the fact that all entries of $T^k$ are strictly positive, that

(2.17)  $x = e^{i\theta} \bar{x}$ for some $\theta \in [0, 2\pi)$.

Thus $|\lambda| = r$ implies $\lambda = r$, which completes the proof of (c).

Note that the argument of the previous paragraph rules out the possibility of having two independent eigenvectors with eigenvalue $r$. In fact, if two such existed we could construct an eigenvector $x = (x_1, \dots, x_n)$ whose entries do not all have identical phase $e^{i\theta}$ as in (2.17). This yields a contradiction. To complete the proof of (b) we need to exclude the possibility that $T$ has any generalized eigenvectors with eigenvalue $r$. Let us suppose a generalized eigenvector exists, whence there exists $y \in \mathbb{C}^n$ such that

(2.18)  $[T - r]y = x$,

where $x$ is the unique eigenvector of $T$ with eigenvalue $r$ and all positive entries. From what we have already proved, there also exists a unique eigenvector $x^*$ of $T^*$ with eigenvalue $r$ and all positive entries. Applying $x^*$ to the left side of equation (2.18) we obtain

(2.19)  $x^* \cdot [T - r]y = x^* \cdot x > 0$.

Since $[T^* - r]x^* = 0$, the left side of (2.19) vanishes, and we have a contradiction.

Corollary 2.1. Suppose a time independent Markov chain on a finite state space $F$ is irreducible. Then it has a unique invariant distribution $\pi(\cdot)$, and $\pi(j) > 0$ for all $j \in F$.

Proof. For $N = 1, 2, \dots$, let $K_N$ be the matrix

(2.20)  $K_N = \frac{1}{N} \sum_{n=0}^{N-1} T^n$.

Irreducibility implies that for large enough $N$ the matrix $K_N$ has all strictly positive entries. Since $\pi K_N = \pi$, it follows from Theorem 2.1 that $\pi(\cdot)$ is unique and all entries of $\pi(\cdot)$ are strictly positive.

Definition 3. An $n \times n$ matrix $T$ has a spectral gap if there is exactly one simple root $\lambda \in \mathbb{C}$ of the characteristic polynomial of $T$ which satisfies $|\lambda| = \sigma(T) = \max\{|\lambda| : \lambda \in \mathbb{C} \text{ an eigenvalue of } T\}$.

Note that the Perron-Frobenius theorem implies that a matrix $T$ with nonnegative entries, such that $T^k$ has strictly positive entries for some $k \ge 1$, has a spectral gap.

Proposition 2.2. Suppose the transition matrix $T$ for a finite state Markov chain has a spectral gap. Then the associated stationary sequence $(X_0, X_1, \dots)$ is strong mixing.

Proof. From (3.26) of Chapter II we need to show that for any bounded Borel measurable functions $f, g : \mathbb{R}^{m+1} \to \mathbb{R}$ one has

(2.21)  $\lim_{n \to \infty} E[\, f(X_0, \dots, X_m)\, g(X_n, X_{n+1}, \dots, X_{n+m})\,] = E[\, f(X_0, X_1, \dots, X_m)\,]\, E[\, g(X_0, \dots, X_m)\,]$.

Observe now that for $n > m$ the LHS of (2.21) can be written as

(2.22)  $E\Big[\, f(X_0, \dots, X_m) \sum_{z \in F} T^{\,n-m}_{X_m, z}\, E[\, g(X_0, X_1, \dots, X_m) \mid X_0 = z\,] \Big]$,

where $T^{\,n-m}_{y,z}$, $y,z \in F$, denotes the entries of the matrix $T^{n-m}$. Hence (2.21) will follow from (2.22) if we can show

(2.23)  $\lim_{k \to \infty} T^k_{y,z} = \pi(z)$, $y,z \in F$.

Using the notation of (2.1), (2.2), consider the matrix $A = T - \mathbf{1}\pi$, where we are taking $\pi(\cdot)$ to be a row vector and $\mathbf{1}$ a column vector. By the spectral gap condition all eigenvalues of $A$ have absolute value strictly less than 1, whence $\lim_{k \to \infty} A^k = 0$. Since $A^k = T^k - \mathbf{1}\pi$, the limit (2.23) holds.
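The limit (2.23) is easy to observe numerically. The sketch below (an illustration added to these notes, with an arbitrary transition matrix) shows the rows of $T^k$ collapsing onto $\pi$, and equivalently $A^k = T^k - \mathbf{1}\pi \to 0$.

```python
import numpy as np

# Convergence (2.23): the rows of P^k approach pi when there is a
# spectral gap.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

print(np.linalg.matrix_power(P, 50))   # every row is approximately pi
print(pi)

# Equivalently, A = P - 1 pi satisfies A^k = P^k - 1 pi -> 0.
A = P - np.outer(np.ones(3), pi)
print(np.max(np.abs(np.linalg.matrix_power(A, 50))))   # approximately 0
```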

We have used the Perron-Frobenius theorem to establish some properties of time independent Markov chains, assuming certain conditions on the transition matrix. Another condition on the transition matrix which can help us understand the corresponding Markov chain is so-called reversibility. This condition states in effect that the transition matrix is self-adjoint with respect to some positive definite diagonal quadratic form. In that case much more is known about the transition matrix than in the general case; in particular, all its eigenvalues are real and the matrix is diagonalizable. To see what this says about the entries $(t_{i,j})$ of the $n \times n$ matrix $T$, suppose the quadratic form is given by

(2.24)  $[\xi, \zeta]_\mu = \sum_{i=1}^{n} \mu(i)\, \xi_i\, \zeta_i$,  $\xi, \zeta \in \mathbb{R}^n$,

where $\mu \in \mathbb{R}^n$ has all positive entries. Then the self-adjointness condition

(2.25)  $[\xi, T\zeta]_\mu = [T\xi, \zeta]_\mu$

is equivalent to

(2.26)  $\mu(i)\, t_{i,j} = \mu(j)\, t_{j,i}$,  $1 \le i,j \le n$.

From (2.26) we see that if we normalize the vector $\mu(\cdot)$ so that the sum of its elements is 1, then $\mu$ is the invariant measure for the Markov chain. Note that for a reversible indecomposable Markov chain, if $T$ does not have eigenvalue $-1$ there is a spectral gap, and hence the chain is strong mixing.

3. Examples of Markov Chains

We first note a trivial case of a time independent Markov chain, namely a sequence of i.i.d. random variables $(X_0, X_1, \dots)$. Let $p(i)$, $i \in F$, be the probability density function of $X_0$. Then the transition probabilities are $p(i,j) = p(j)$, $i,j \in F$. If $F$ is finite then all the rows of the matrix $T$ are identical, so $T$ has rank 1. In particular, the p.d.f. of $X_0$ is the invariant measure $\pi$, and the matrix $A = T - \mathbf{1}\pi$ is the zero matrix. Hence the sequence is strong mixing by Proposition 2.2.

Another example of a time independent Markov chain is the sum $S_n = X_1 + \cdots + X_n$, $n = 0, 1, 2, \dots$, of i.i.d. variables. In that case $p(i,j) = p(j - i)$, where $p(\cdot)$ is the p.d.f. of $X_1$. Thus sums of i.i.d. variables give rise to chains which are spatially translation invariant. It is therefore often useful to use the Fourier transform to analyse them; we essentially did this in our proof of the CLT in Chapter I. In the case of the Bernoulli variables $X_i = \pm 1$, each with probability $1/2$, for which $S_n$ is the standard random walk on $\mathbb{Z}$, we have $p(i,j) = p(j,i) = 1/2$ if and only if $|i - j| = 1$. This chain is therefore reversible, and it is also easy to see that it is irreducible. We have proved in Chapter I that it is recurrent, but no invariant measure exists (compare Proposition 1.3). To see this, note that if $\pi(\cdot)$ is an invariant measure then trivially $\tau\pi(\cdot)$ is also an invariant measure, where $\tau\pi(i) = \pi(i+1)$, $i \in \mathbb{Z}$. The uniqueness theorem, Proposition 1.2, implies then that $\tau\pi(\cdot) = \pi(\cdot)$, in which case $\pi : \mathbb{Z} \to \mathbb{R}$ is a constant function, contradicting the normalization condition for $\pi(\cdot)$.
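To illustrate the last point numerically (a sketch added to these notes, not part of the original): the simple random walk returns to $0$ with probability 1, but since no invariant probability measure exists, the formula of Proposition 1.3 read formally gives an infinite mean return time. In the simulation below the return fraction creeps toward 1, while the (censored) sample mean of the return time keeps growing with the observation horizon instead of stabilizing.

```python
import numpy as np

# Simple random walk on Z: recurrent, but with no invariant probability
# measure; formally 1/pi(0) is infinite, so the mean return time to 0
# diverges.
rng = np.random.default_rng(2)

def return_time(max_steps, rng):
    """First return time to 0, or max_steps if the walk has not returned."""
    path = np.cumsum(rng.choice((-1, 1), size=max_steps))
    hits = np.nonzero(path == 0)[0]
    return hits[0] + 1 if hits.size else max_steps

for horizon in (10**2, 10**3, 10**4):
    times = np.array([return_time(horizon, rng) for _ in range(2000)])
    # fraction returned before the horizon, and censored mean return time
    print(horizon, np.mean(times < horizon), times.mean())
```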

University of Michigan, Department of Mathematics, Ann Arbor, MI
