CHAPTER IV - BROWNIAN MOTION


JOSEPH G. CONLON

1. Construction of Brownian Motion

There are two ways in which the idea of a Markov chain on a discrete state space can be generalized: (1) the discrete time variable of the chain, i.e. $n = 0, 1, 2, \dots$, can be made continuous; (2) the state space can be made into a continuum. There are many such examples, but there is one which stands out - Brownian motion - and we will study it here. Thus we have a probability space $(\Omega, \mathcal{F}, P)$ and a continuous set $X_t$, $t \ge 0$, of real variables which have the following properties:

(a) $X_t$ is a Gaussian variable with mean $0$ and variance $t$.

(b) For any sequence $0 = t_0 < t_1 < \cdots < t_m$, the variables $X_{t_j} - X_{t_{j-1}}$, $j = 1, \dots, m$, are independent Gaussian with mean $0$ and variance $t_j - t_{j-1}$, $j = 1, \dots, m$.

Evidently the properties (a), (b) determine the cdf of any finite set of variables $X_{t_1}, \dots, X_{t_m}$. Now let $Q$ be the non-negative rational numbers. Then the Kolmogorov construction allows us to create a probability space $(\Omega, \mathcal{F}, P)$ and variables $X_t$, $t \in Q$, which have the properties (a), (b). The main issue in constructing Brownian motion is to extend this set of variables to a continuous set of variables $X_t$, $t \ge 0$.
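Properties (a) and (b) give an immediate recipe for sampling the variables $X_{t_1}, \dots, X_{t_m}$ on any finite grid: take cumulative sums of independent Gaussian increments. A minimal sketch in Python with NumPy (the grid, seed and sample count are illustrative choices, not part of the notes) checks property (a) empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bm(times, n_paths, rng):
    """Sample X at increasing times 0 < t_1 < ... < t_m using property (b):
    the increments X_{t_j} - X_{t_{j-1}} are independent N(0, t_j - t_{j-1})."""
    times = np.asarray(times, dtype=float)
    dt = np.diff(times, prepend=0.0)                  # t_j - t_{j-1}, with t_0 = 0
    dX = rng.standard_normal((n_paths, times.size)) * np.sqrt(dt)
    return np.cumsum(dX, axis=1)                      # X_{t_j} = sum of increments

# Property (a): X_t is Gaussian with mean 0 and variance t.
t_grid = np.linspace(0.1, 1.0, 10)
X = sample_bm(t_grid, n_paths=200_000, rng=rng)
print(np.max(np.abs(X.var(axis=0) - t_grid)))         # ~ 0: sample variance tracks t
print(np.max(np.abs(X.mean(axis=0))))                 # ~ 0: sample mean
```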

The key to doing this is the continuity result:

Proposition 1.1. Let $(\Omega, \mathcal{F}, P)$ be a probability space with variables $X_t$, $t \in Q$, satisfying (a) and (b). Then for any $a > 0$ the function $t \to X_t(\omega)$, $t \in Q \cap [0, a]$, is uniformly continuous with probability 1.

Proof. We take $a = 1$ and use the notation $X_t = X(t)$. For $n = 0, 1, 2, \dots$, we associate to a dyadic interval $I_{n,k} = \{x \in [0,1] : (k-1)/2^n < x \le k/2^n\}$ a random variable $Y_{n,k}$ by

(1.1) $Y_{n,k} = \sup\{\, |X(t) - X((k-1)/2^n)| : t \in I_{n,k} \cap Q \,\}$.

Observe that the variables $Y_{n,k}$, $k = 1, \dots, 2^n$, are i.i.d., whence it follows that for any $\delta > 0$,

(1.2) $P\big( \max_{1 \le k \le 2^n} Y_{n,k} > \delta \big) \le \sum_{k=1}^{2^n} P( Y_{n,k} > \delta ) = 2^n P( Y_{n,1} > \delta )$.

The key point now is to use the maximal function inequality

(1.3) $P( Y_{n,1} > \delta ) \le \frac{1}{\delta^p} E[\, |X(1/2^n)|^p \,]$.

This follows from Proposition 2.1 of Chapter II since any sequence $X(t_j)$, $j = 1, 2, \dots$, with $t_1 < t_2 < \cdots$, is a martingale. One can also prove it using the reflection principle introduced in Chapter I. Since $X(t)$ is Gaussian with mean $0$ and variance $t$ we have that

(1.4) $E[\, X(t)^4 \,] = 3t^2$.

Hence (1.2), (1.3) imply that

(1.5) $P\big( \max_{1 \le k \le 2^n} Y_{n,k} > \delta \big) \le \frac{3}{\delta^4\, 2^n}$.

Evidently (1.5) implies that

(1.6) $\sum_{n=1}^\infty P\big( \max_{1 \le k \le 2^n} Y_{n,k} > \delta \big) < \infty$.

Thus by the Borel-Cantelli lemma one has

(1.7) $\limsup_{n \to \infty} \big[ \max_{1 \le k \le 2^n} Y_{n,k} \big] \le \delta$ with probability 1.

The inequality (1.7) implies uniform continuity of the function $t \to X_t(\omega)$, $t \in Q \cap [0,1]$, with probability 1.

Corollary 1.1. There exists a probability space $(\Omega, \mathcal{F}, P)$ and a continuous set $X_t$, $t \ge 0$, of real variables with the properties (a), (b). In addition the function $t \to X_t$, $t \ge 0$, is continuous on the interval $[0, \infty)$ with probability 1.

Proof. We define the variable $X_t$ for non-rational $t$ by $X_t(\omega) = \lim\{X_s(\omega) : s \in Q, \ s \to t\}$. From Proposition 1.1 this limit exists with probability 1 for all $t \ge 0$ and the resulting function $t \to X_t(\omega)$ is continuous. It is easy to see that (a) and (b) hold now for all variables $X_t$, $t \ge 0$, since $\lim_{s \to t} X_s = X_t$ with probability 1 implies that $\lim_{s \to t} E[f(X_s)] = E[f(X_t)]$ for all bounded continuous functions $f : \mathbf{R} \to \mathbf{R}$ by the dominated convergence theorem.

We have constructed Brownian motion and in the process have seen that its paths $t \to X_t$, $t \ge 0$, are continuous with probability 1. The next result shows that they are however very irregular.

Proposition 1.2 (Dvoretzky, Erdős, Kakutani). Brownian paths $t \to X_t$, $t \ge 0$, are nowhere differentiable with probability 1.

Proof. Similarly to Proposition 1.1 let us define random variables $Y_{n,k}$ for $k = 1, \dots, 2^n - 2$, by

(1.8) $Y_{n,k} = \max\{\, |X(k/2^n) - X((k-1)/2^n)|, \ |X((k+1)/2^n) - X(k/2^n)|, \ |X((k+2)/2^n) - X((k+1)/2^n)| \,\}$.

Then we have by independence of the 3 variables in the definition of $Y_{n,k}$ that

(1.9) $P( Y_{n,k} \le a/2^n ) = \big[ P( |X(1/2^n)| \le a/2^n ) \big]^3 \le \left( \frac{2a/2^n}{\sqrt{2\pi/2^n}} \right)^3$.

The inequality (1.9) implies that

(1.10) $P\big( \min_k Y_{n,k} \le a/2^n \big) \le \sum_{k=1}^{2^n - 2} P( Y_{n,k} \le a/2^n ) \le \frac{2^{3/2}\, a^3}{\pi^{3/2}\, 2^{n/2}}$.

Thus by the Borel-Cantelli lemma we have that

(1.11) with probability 1 there exists $N(\omega)$ such that for $n \ge N(\omega)$, $Y_{n,k}(\omega) > a/2^n$ for all $k = 1, \dots, 2^n - 2$.
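Before completing the proof, the mechanism behind (1.11) is easy to watch numerically: sampling $X$ on the grid $k/2^n$ and forming $Y_{n,k}$ of (1.8), the ratio $\min_k Y_{n,k} / 2^{-n}$ grows without bound as $n$ increases, consistent with (1.11) holding eventually for every $a$. A small illustrative sketch (the values of $n$ and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Increments of X on the grid k/2^n are i.i.d. N(0, 2^{-n}); Y_{n,k} of (1.8)
# is the largest of three consecutive absolute increments.
for n in [6, 10, 14, 18]:
    inc = np.abs(rng.standard_normal(2 ** n)) * np.sqrt(2.0 ** -n)
    Y = np.maximum(inc[:-2], np.maximum(inc[1:-1], inc[2:]))   # Y_{n,k}, k = 1..2^n-2
    print(n, Y.min() * 2 ** n)                                 # grows with n, cf. (1.11)
```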

Returning to the proof, fix $\beta > 0$ and suppose that a function $f : [0,1] \to \mathbf{R}$ has a derivative at the point $s \in [0,1]$ with $|f'(s)| < \beta$. Then there exists $\delta > 0$ such that $|f(t) - f(s)| \le 2\beta|t - s|$ if $|t - s| < \delta$. Assuming now that $s \in I_{n,k+1}$ and $\delta > 2/2^n$, we have that

(1.12) $|f((k+1)/2^n) - f(k/2^n)| \le 4\beta/2^n$, $\quad |f((k+2)/2^n) - f((k+1)/2^n)| \le 6\beta/2^n$, $\quad |f(k/2^n) - f((k-1)/2^n)| \le 6\beta/2^n$.

Choosing $a > 6\beta$ in (1.11) we conclude that with probability 1 any point $s$ of differentiability of the function $t \to X_t$, $0 \le t \le 1$, has derivative which exceeds $\beta$ in absolute value. The result follows by letting $\beta \to \infty$.

It is not difficult to see that Brownian motion $X(t)$, $t \ge 0$, satisfies the Markov property, i.e. for any Borel set $A \subset \mathbf{R}$,

(1.13) $P(X(t) \in A \mid X(t_1) = x_1, \dots, X(t_m) = x_m) = P(X(t) \in A \mid X(t_m) = x_m)$, $\quad t_1 < t_2 < \cdots < t_m < t$.

For this reason Brownian motion is a Markov process. We can seek to establish the strong Markov property for Brownian motion. First for $t \ge 0$ let $\mathcal{F}_t$ be the $\sigma$-field generated by the variables $X(s)$, $s \le t$. Note that by the continuity property of Brownian motion $\mathcal{F}_t$ is actually generated by a countable set of variables $X(s)$, $s \le t$. In fact any countable dense set in $[0, t]$ will suffice. A stopping time $\tau : \Omega \to [0, \infty)$ for Brownian motion is a measurable function such that $\{\tau \le t\} \in \mathcal{F}_t$ for all $t \ge 0$.

Lemma 1.1. Let $X_t(\omega)$, $t \ge 0$, $\omega \in \Omega$, be Brownian motion with probability space $(\Omega, \mathcal{F}, P)$ and consider the function from $[0, \infty) \times \Omega \to \mathbf{R}$ defined by $X(t, \omega) = X_t(\omega)$. The function $X(\cdot, \cdot)$ is measurable with respect to the product $\sigma$-algebra $\mathcal{B} \times \mathcal{F}$, where $\mathcal{B}$ is the $\sigma$-field generated by the open sets of $[0, \infty)$.

Proof. Consider the function $F_m : [0,1] \times \mathbf{R}^m \to \mathbf{R}$ defined by linear interpolation, so

(1.14) $F_m(t, x_1, \dots, x_m) = [mt - k]\, x_{k+1} + [k + 1 - mt]\, x_k$, $\quad \frac{k}{m} \le t \le \frac{k+1}{m}$, $\ k = 0, \dots, m-1$,

where we have set $x_0 = 0$. Evidently $F_m$ is a continuous function and hence Borel measurable on $[0,1] \times \mathbf{R}^m$. For $n = 1, 2, \dots$ define $X_n : [0,1] \times \Omega \to \mathbf{R}$ by

(1.15) $X_n(t, \omega) = F_m(t, X(1/2^n, \omega), X(2/2^n, \omega), \dots, X(1, \omega))$, where $m = 2^n$.

The function $X_n : [0,1] \times \Omega \to \mathbf{R}$ is then measurable with respect to $\mathcal{B}_1 \times \mathcal{F}$, where $\mathcal{B}_1$ is the Borel field generated by open sets of the interval $[0,1]$. The measurability of $X : [0,1] \times \Omega \to \mathbf{R}$ then follows from the fact that

(1.16) $\lim_{n \to \infty} X_n(t, \omega) = X(t, \omega)$ for all $t \in [0,1]$ with probability 1, $\omega \in \Omega$.

Corollary 1.2. Let $\tau : \Omega \to [0, \infty)$ be a stopping time for Brownian motion. Then the function $\omega \to X(\tau(\omega), \omega)$ from $\Omega$ to $\mathbf{R}$ is measurable.

Proof. Since the mapping $\omega \to (\tau(\omega), \omega)$ from $\Omega$ to $[0, \infty) \times \Omega$ is obviously measurable, the result follows from Lemma 1.1.
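The interpolant (1.14)-(1.15) is just the piecewise linear function through the points $(k/2^n, X(k/2^n))$, which in NumPy is a one-line np.interp call. A minimal sketch (grid size and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 8
m = 2 ** n                                     # grid 0, 1/2^n, ..., 1 as in (1.15)
grid = np.arange(m + 1) / m
X_grid = np.concatenate(([0.0], np.cumsum(rng.standard_normal(m) * np.sqrt(1.0 / m))))

def X_n(t):
    """The interpolant of (1.14)-(1.15): piecewise linear through (k/2^n, X(k/2^n))."""
    return np.interp(t, grid, X_grid)

print(X_n(np.array([0.3, 0.5, 0.997])))        # jointly measurable approximation of X
```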

Proposition 1.3 (Strong Markov Property). Suppose $X(t)$, $t \ge 0$, is Brownian motion and $\tau : \Omega \to [0, \infty)$ a stopping time. Then the process $Y(t) = X(t + \tau) - X(\tau)$, $t \ge 0$, is also a copy of Brownian motion.

Rather than proving Proposition 1.3 we shall concentrate on a particular case. The same method can be applied to prove the proposition. The case we have in mind is the Brownian motion version of the reflection principle, which we first encountered for the standard random walk on $\mathbf{Z}$ in Lemma 5.1 of Chapter I.

Proposition 1.4 (Reflection Principle). Let $X_t$, $t \ge 0$, be Brownian motion and $M_t = \sup_{s \le t} X_s$ be the maximal function. Then for $a > 0$ there is the identity $P(M_t \ge a) = 2P(X_t \ge a)$.

Proof. We define a stopping time $\tau : \Omega \to [0, \infty)$ by

(1.17) $\tau = \inf\{t : X(t) > a\}$.

We first show that $\tau < \infty$ with probability 1. We can see this by using the method of Problem 6 on Homework I. Thus observe that for $n = 1, 2, \dots$,

(1.18) $X(n) = \xi_1 + \xi_2 + \cdots + \xi_n$,

where the $\xi_j$ are i.i.d. standard normal. Hence for any $\alpha > 0$,

(1.19) $P\Big( \limsup_{n \to \infty} \frac{X(n)}{n^\alpha} < 1 \Big) = 0 \ \text{or} \ 1$,

since the event on the LHS of (1.19) is a tail event. Assuming the probability is 1, it follows that if $H(\cdot)$ is the Heaviside function then

(1.20) $\lim_{n \to \infty} E[\, H(1 - X(n)/n^\alpha) \,] = 1$.

Since $X(n)$ is Gaussian with mean $0$ and variance $n$ we have that

(1.21) $E[\, H(1 - X(n)/n^\alpha) \,] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{n^{\alpha - 1/2}} e^{-z^2/2}\, dz$.

Evidently if $\alpha < 1/2$ then the RHS of (1.21) converges to $1/2$, in contradiction to (1.20). We conclude that the probability in (1.19) is $0$, whence $\tau < \infty$ with probability 1.

Next observe that

(1.22) $\{\omega \in \Omega : \tau(\omega) \le t\} = \{\omega \in \Omega : M_t(\omega) \ge a\} \in \mathcal{F}_t$,

so $\tau$ is a stopping time, and $X(\tau(\omega), \omega) = a$, $\omega \in \Omega$. We have now from (1.17) that

(1.23) $P(M_t \ge a) = P( \tau \le t, \ X(t) - X(\tau) \ge 0 ) + P( \tau \le t, \ X(t) - X(\tau) < 0 )$.

Since it is clear that

(1.24) $P( \tau \le t, \ X(t) - X(\tau) \ge 0 ) = P(X(t) \ge a)$,

we just need to establish that

(1.25) $P( \tau \le t, \ X(t) - X(\tau) < 0 ) = P( \tau \le t, \ X(t) - X(\tau) \ge 0 )$.

To see this let us take $t = 1$ wlog and note from (1.2) that

(1.26) $P( \tau \le 1, \ X(1) - X(\tau) < 0 ) = \sum_{r=1}^{2^n} P\big( (r-1)/2^n < \tau \le r/2^n, \ X(1) - X(\tau) < 0 \big)$
$\le 2^n P( Y_{n,1} > \delta ) + \sum_{r=1}^{2^n} P\big( (r-1)/2^n < \tau \le r/2^n, \ X(1) - X(\tau) < 0, \ Y_{n,r} \le \delta \big)$
$\le 2^n P( Y_{n,1} > \delta ) + \sum_{r=1}^{2^n} P\big( (r-1)/2^n < \tau \le r/2^n, \ X(1) - X(r/2^n) < 2\delta \big)$,

where we have used the fact that

(1.27) $X(1) - X(r/2^n) \le X(1) - X(\tau) + |X(\tau) - X((r-1)/2^n)| + |X(r/2^n) - X((r-1)/2^n)| \le X(1) - X(\tau) + 2Y_{n,r}$.

Since

(1.28) $\{\omega \in \Omega : (r-1)/2^n < \tau(\omega) \le r/2^n\} \in \mathcal{F}_{r/2^n}$,

we have from the standard reflection principle for fixed time that

(1.29) $P\big( (r-1)/2^n < \tau \le r/2^n, \ X(1) - X(r/2^n) < 2\delta \big) = P\big( (r-1)/2^n < \tau \le r/2^n, \ X(1) - X(r/2^n) > -2\delta \big)$.

We conclude then from (1.26)-(1.29) that

(1.30) $P( \tau \le 1, \ X(1) - X(\tau) < 0 ) \le 2^n P( Y_{n,1} > \delta ) + \sum_{r=1}^{2^n} P\big( (r-1)/2^n < \tau \le r/2^n, \ X(1) - X(r/2^n) > -2\delta \big)$.

We can now do an exactly parallel argument as in the previous paragraph to bound the RHS of (1.30) from above in terms of $P( \tau \le 1, \ X(1) - X(\tau) > 0 )$ and a small error. Thus we have that

(1.31) $P\big( (r-1)/2^n < \tau \le r/2^n, \ X(1) - X(r/2^n) > -2\delta \big) \le P\big( (r-1)/2^n < \tau \le r/2^n, \ X(1) - X(\tau) > -4\delta, \ Y_{n,r} \le \delta \big) + P( Y_{n,1} > \delta )$.

Evidently (1.30), (1.31) imply that

(1.32) $P( \tau \le 1, \ X(1) - X(\tau) < 0 ) \le P( \tau \le 1, \ X(1) - X(\tau) > -4\delta ) + 2^{n+1} P( Y_{n,1} > \delta )$.

Letting $n \to \infty$ in (1.32) and using (1.3) with $p = 4$ we conclude that

(1.33) $P( \tau \le 1, \ X(1) - X(\tau) < 0 ) \le P( \tau \le 1, \ X(1) - X(\tau) > -4\delta )$

for every $\delta > 0$. Hence by letting $\delta \to 0$ in (1.33) we see that the LHS of (1.25) does not exceed the RHS. A symmetry argument then implies the identity (1.25).
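The identity of Proposition 1.4 is easy to test by Monte Carlo, replacing $M_t$ by the maximum over a fine time grid (which slightly underestimates the true supremum). A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

t, a = 1.0, 0.8
n_steps, n_paths = 1000, 20_000
X = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(t / n_steps), axis=1)

M = X.max(axis=1)                    # grid approximation of M_t = sup_{s <= t} X_s
print((M >= a).mean())               # P(M_t >= a), slightly low on a finite grid
print(2 * (X[:, -1] >= a).mean())    # 2 P(X_t >= a): should nearly match
```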

Next we show how expectations for Brownian motion can be obtained by solving differential equations. First we see that the analogue of the backward Kolmogorov equation for the countable state space Markov chain is the heat equation. Thus let $T > 0$ and $u(x, t)$ be defined by

(1.34) $u(x, t) = E[\, f(X_T) \mid X_t = x \,] = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi(T - t)}} \exp\Big[ -\frac{(x - y)^2}{2(T - t)} \Big] f(y)\, dy$.

Then (1.34) is stating that the variable $X_T$ conditioned on $X_t = x$ is Gaussian with mean $x$ and variance $T - t$. One easily sees that $u(x, t)$ is the solution to the terminal value problem,

(1.35) $\frac{\partial u}{\partial t} + \frac{1}{2} \frac{\partial^2 u}{\partial x^2} = 0$, $\ x \in \mathbf{R}$, $\ t < T$; $\quad \lim_{t \to T} u(x, t) = f(x)$, $\ x \in \mathbf{R}$.
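Formula (1.34) can be checked numerically in two ways at once: sampling $X_T$ given $X_t = x$ from $N(x, T - t)$, and integrating the heat kernel against $f$. For the illustrative choice $f(y) = \cos y$ the terminal value problem (1.35) even has the closed-form solution $u(x, t) = e^{-(T-t)/2} \cos x$, which the sketch below (test function and parameters are not from the notes) compares against:

```python
import numpy as np

rng = np.random.default_rng(4)

f = lambda y: np.cos(y)             # illustrative bounded test function
T, t, x = 1.0, 0.25, 0.3

# Monte Carlo: X_T given X_t = x is N(x, T - t), by properties (a), (b).
mc = f(x + np.sqrt(T - t) * rng.standard_normal(1_000_000)).mean()

# Quadrature of the heat-kernel integral in (1.34).
y = np.linspace(x - 10.0, x + 10.0, 20_001)
kernel = np.exp(-(x - y) ** 2 / (2 * (T - t))) / np.sqrt(2 * np.pi * (T - t))
quad = np.trapz(kernel * f(y), y)

print(mc, quad, np.exp(-(T - t) / 2) * np.cos(x))   # all three agree
```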

Expectations for functions of stopping times can sometimes be computed as solutions to time independent differential equations. We illustrate how this happens with an example:

Proposition 1.5. Suppose $a < b$ and $\lambda > 0$. Let $u_\lambda : (a, b) \to \mathbf{R}$ be the function

(1.36) $u_\lambda(x) = E[\, e^{-\lambda \tau} \mid X(0) = x \,]$,

where $\tau$ is the first exit time for Brownian motion $X(t)$, $t \ge 0$, started at $x \in (a, b)$ from the interval $[a, b]$, whence $X(\tau) = a$ or $X(\tau) = b$. Then $u_\lambda(\cdot)$ is the unique solution to the differential equation

(1.37) $\frac{1}{2} \frac{d^2 u_\lambda(x)}{dx^2} = \lambda u_\lambda(x)$, $\quad a < x < b$,

with boundary conditions

(1.38) $u_\lambda(a) = u_\lambda(b) = 1$.

Proof. Defining $u_\lambda(x)$ for $a < x < b$ by (1.36), we first prove that

(1.39) $\lim_{x \to a} u_\lambda(x) = 1$, $\quad \lim_{x \to b} u_\lambda(x) = 1$.

This follows from the inequality

(1.40) $1 - u_\lambda(x) \le 1 - e^{-\lambda t} P(M_t > b - x)$, $\quad a < x < b$.

From Proposition 1.4 we have that

(1.41) $P(M_t > a) = 2P( Z > a/\sqrt{t} )$, $\ a > 0$,

where $Z$ is standard normal. Thus the limit of the RHS of (1.40) as $x \to b$ is $1 - e^{-\lambda t}$. Now on letting $t \to 0$ we obtain (1.39).

The key to proving that the equation (1.37) holds is to observe that

(1.42) $E[\, e^{-\lambda \tau} H(\tau - t) \mid X(0) = x \,] = e^{-\lambda t} E[\, u_\lambda(X(t))\, H(\tau - t) \mid X(0) = x \,]$,

where $H(\cdot)$ is the Heaviside function. This follows since the function $H(\tau - t)$ is $\mathcal{F}_t$ measurable. Thus

(1.43) $E[\, e^{-\lambda \tau} H(\tau - t) \mid \mathcal{F}_t \,] = e^{-\lambda t} H(\tau - t)\, E[\, e^{-\lambda(\tau - t)} \mid X(t) \,] = e^{-\lambda t} H(\tau - t)\, u_\lambda(X(t))$.

Hence we have that

(1.44) $u_\lambda(x) = e^{-\lambda t} \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi t}} \exp\Big[ -\frac{(x - y)^2}{2t} \Big] u_\lambda(y)\, dy + \mathrm{Error}(t)$.

In (1.44) we have extended the function $u_\lambda(\cdot)$ by 1 outside of the interval $(a, b)$. The error term is bounded by

(1.45) $|\mathrm{Error}(t)| \le 2P( \tau \le t \mid X(0) = x ) \le 4P\big( Z > \min\{x - a, \ b - x\}/\sqrt{t} \big)$,

where $Z$ is standard normal. We obtain the equation (1.37) by dividing (1.44) by $t$ and letting $t \to 0$. It is easy to see that $\lim_{t \to 0} \mathrm{Error}(t)/t = 0$, whence we have that

(1.46) $\lim_{t \to 0} \big\{ u_\lambda(x) - e^{-\lambda t} E[\, u_\lambda(X(t)) \mid X(0) = x \,] \big\}/t = 0$.

The limit in (1.46) yields the equation (1.37) if we assume $u_\lambda(y)$ is $C^2$ for $y$ in a neighborhood of $x$, by expanding $u_\lambda(X(t))$ in a Taylor series about $x$ to second order. Of course the definition (1.36) of $u_\lambda(\cdot)$ allows us to conclude only that it is a continuous function from the arguments we have been using. There is a gap here in our argument, which can really only be filled by going further into the theory of differential equations. We therefore omit it and leave the proof at this point.

It is easy to see that the solution to the boundary value problem (1.37), (1.38) is given by

(1.47) $u_\lambda(x) = \frac{\cosh \sqrt{2\lambda}\,(x - c)}{\cosh \sqrt{2\lambda}\, R}$, $\quad c = \frac{a + b}{2}$, $\ R = \frac{b - a}{2}$,

so $c$ is the center and $R$ is the radius of the interval $(a, b)$. Suppose now we consider the semi-infinite interval $(-\infty, b)$ with $b > 0$. In that case it is easy to see that

(1.48) $u_\lambda(x) = \exp[\, -\sqrt{2\lambda}\,(b - x) \,]$, $\quad x < b$.

Assuming the exit time $\tau$ from the interval $(-\infty, b)$, conditioned on $X(0) = x$, has a density $\rho_x(s)$, $s > 0$, with respect to Lebesgue measure, then (1.48) yields the formula

(1.49) $\int_0^\infty e^{-\lambda s} \rho_x(s)\, ds = \exp[\, -\sqrt{2\lambda}\,(b - x) \,]$.

This is simply an explicit formula for the Laplace transform of the function $\rho_x(s)$, $s > 0$. We compare this with the formula given by the reflection principle, Proposition 1.4. Thus

(1.50) $P( \tau \le t \mid X(0) = 0 ) = 2P( X(t) > b \mid X(0) = 0 )$, i.e. $\quad \int_0^t \rho_0(s)\, ds = \frac{2}{\sqrt{2\pi}} \int_{b/\sqrt{t}}^\infty e^{-z^2/2}\, dz$.

Differentiating the second equation in (1.50) with respect to $t$, we see that

(1.51) $\rho_0(t) = \frac{b}{\sqrt{2\pi}\, t^{3/2}} \exp\Big[ -\frac{b^2}{2t} \Big]$.

On comparing (1.49) and (1.51) we conclude on setting $b = 1/\sqrt{2}$ that

(1.52) the Laplace transform of $\frac{1}{2\sqrt{\pi}\, t^{3/2}} \exp\Big[ -\frac{1}{4t} \Big]$ is $\exp[-\sqrt{\lambda}\,]$.

One can of course verify (1.52) directly, but it is of some interest that the identity is a consequence of a symmetry property of Brownian motion, i.e. the reflection principle.
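The identity (1.47) can also be checked by simulation: discretize the Brownian path with a small time step, record the first time it leaves $(a, b)$, and average $e^{-\lambda \tau}$. The time discretization biases $\tau$ slightly (the walk can overshoot the boundary between grid points), so only approximate agreement is expected. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

a, b, x, lam = -1.0, 2.0, 0.5, 1.5
dt, n_paths, max_steps = 1e-3, 20_000, 100_000

pos = np.full(n_paths, x)
tau = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
for _ in range(max_steps):
    idx = np.flatnonzero(alive)             # paths still inside (a, b)
    if idx.size == 0:
        break
    pos[idx] += np.sqrt(dt) * rng.standard_normal(idx.size)
    tau[idx] += dt
    alive[idx] = (a < pos[idx]) & (pos[idx] < b)

mc = np.mean(np.exp(-lam * tau))
c, R = (a + b) / 2, (b - a) / 2
exact = np.cosh(np.sqrt(2 * lam) * (x - c)) / np.cosh(np.sqrt(2 * lam) * R)
print(mc, exact)                            # close, up to discretization/sampling error
```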

2. Gaussian Processes

We have so far in the study of Brownian motion noted that it is a Markov process. We shall see in this section that it is also a Gaussian process. To see what this means we need to review some basic facts about Gaussian random variables. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $Y : \Omega \to \mathbf{R}^n$ be a random variable with mean $\langle Y(\cdot) \rangle = 0$. We shall think of $Y(\omega) \in \mathbf{R}^n$ as a column vector. Then $Y$ is Gaussian if its pdf is determined by its covariance matrix $\Gamma$, where $\Gamma$ is a positive definite symmetric $n \times n$ matrix defined by

(2.1) $v \cdot \Gamma v = \langle\, [v \cdot Y(\cdot)]^2 \,\rangle$, $\quad v \in \mathbf{R}^n$.

The characteristic function $\chi_Y(\cdot)$ for $Y$ is then given by the formula

(2.2) $\chi_Y(\sigma) = E[\, e^{i\sigma \cdot Y(\cdot)} \,] = \exp\Big[ -\frac{1}{2} \sigma \cdot \Gamma \sigma \Big]$, $\quad \sigma \in \mathbf{R}^n$.

The probability measure associated with (2.2) is given by

(2.3) $\exp\Big[ -\frac{1}{2} \phi \cdot \Gamma^{-1} \phi \Big] \prod_{j=1}^n d\phi(j)\, /\, \text{normalization}$.

The normalization in (2.3) can be computed explicitly as

(2.4) $\text{normalization} = (2\pi)^{n/2} (\det \Gamma)^{1/2}$,

but we wish to de-emphasize this since the normalization in (2.3) is determined by the fact that the measure (2.3) is a probability measure.

Since the Gaussian variable $Y$ is determined by the symmetric positive definite covariance matrix $\Gamma$, its structure can be understood from the structure of the matrix $\Gamma$, in particular the fact that $\Gamma$ has a basis of orthogonal eigenvectors, all with positive eigenvalues. Suppose the eigenvectors are an orthonormal set $v_1, \dots, v_n \in \mathbf{R}^n$, i.e. orthogonal and with Euclidean norm $\|v_j\| = 1$, $j = 1, \dots, n$. Then the $n \times n$ matrix $O = [v_1, \dots, v_n]$ is orthogonal so that

(2.5) $OO^T = O^T O = \text{identity}$.

If the eigenvalues of the $v_j$ are $\lambda_j$, $j = 1, \dots, n$, then we can form the diagonal $n \times n$ matrix

(2.6) $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$,

and one has the identity

(2.7) $\Gamma = O \Lambda O^T$.

Observe now that since the $v_j$, $j = 1, \dots, n$, are an orthonormal basis for $\mathbf{R}^n$,

(2.8) $Y(\omega) = \sum_{j=1}^n (Y(\omega), v_j)\, v_j = \sum_{j=1}^n \sqrt{\lambda_j}\, \xi_j(\omega)\, v_j$, $\quad \omega \in \Omega$,

where $(\cdot\,, \cdot)$ denotes the Euclidean inner product on $\mathbf{R}^n$. Thus (2.8) defines the set of random variables $\xi_j$, $j = 1, \dots, n$. It is easy to see that the $\xi_j$, $j = 1, \dots, n$, are i.i.d. standard normal.
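Before verifying this claim, note that the decomposition (2.5)-(2.8) is exactly how one samples a Gaussian vector with a prescribed covariance in practice: diagonalize $\Gamma = O \Lambda O^T$ and set $Y = \sum_j \sqrt{\lambda_j}\, \xi_j v_j$. A sketch (the matrix $\Gamma$ here is an arbitrary illustrative choice) that also recovers $\xi_j = (Y, v_j)/\sqrt{\lambda_j}$ numerically:

```python
import numpy as np

rng = np.random.default_rng(6)

n = 4
A = rng.standard_normal((n, n))
Gamma = A @ A.T + n * np.eye(n)          # an arbitrary positive definite covariance

lam, O = np.linalg.eigh(Gamma)           # Gamma = O diag(lam) O^T, as in (2.6)-(2.7)

# Sample Y via (2.8): Y = sum_j sqrt(lam_j) xi_j v_j with xi_j i.i.d. N(0,1).
n_samples = 200_000
xi = rng.standard_normal((n_samples, n))
Y = (xi * np.sqrt(lam)) @ O.T

print(np.max(np.abs(np.cov(Y.T) - Gamma)))             # ~ 0: covariance is Gamma

# Conversely xi_j = (Y, v_j)/sqrt(lam_j) has identity covariance.
xi_back = (Y @ O) / np.sqrt(lam)
print(np.max(np.abs(np.cov(xi_back.T) - np.eye(n))))   # ~ 0: i.i.d. standard normal
```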

That the $\xi_j$ are i.i.d. standard normal follows from (2.2). Thus let $\xi(\omega) = (\xi_1(\omega), \dots, \xi_n(\omega)) \in \mathbf{R}^n$ and observe from (2.8) that for any $\sigma \in \mathbf{R}^n$,

(2.9) $\sigma \cdot \xi(\omega) = \sum_{j=1}^n \frac{\sigma_j}{\sqrt{\lambda_j}}\, (Y(\omega), v_j) = (Y(\omega), w)$,

where

(2.10) $w = \sum_{j=1}^n \frac{\sigma_j}{\sqrt{\lambda_j}}\, v_j$.

Noting now that

(2.11) $\Gamma w = \sum_{j=1}^n \sigma_j \sqrt{\lambda_j}\, v_j$,

we conclude that

(2.12) $\chi_\xi(\sigma) = E[\, e^{i\sigma \cdot \xi(\cdot)} \,] = \exp\Big[ -\frac{1}{2} w \cdot \Gamma w \Big] = \exp\Big[ -\frac{1}{2} |\sigma|^2 \Big]$, $\quad \sigma \in \mathbf{R}^n$,

whence the $\xi_j$, $j = 1, \dots, n$, are i.i.d. standard normal.

We wish now to generalize the above considerations to Gaussian processes, by which we mean a continuous set of random variables $Y(t)$, $t \in \mathbf{R}$, such that any finite set of them $(Y(t_1), \dots, Y(t_n))$ have joint distribution which is Gaussian. Its covariance matrix is now a function $\Gamma(t, s)$ defined by

(2.13) $\Gamma(t, s) = \langle Y(t) Y(s) \rangle$, $\quad s, t \in \mathbf{R}$.

We have already seen that Brownian motion is a Gaussian process, and its covariance is given by

(2.14) $\Gamma(t, s) = \langle X(t) X(s) \rangle = \min[s, t]$.

We would like to obtain an infinite dimensional version of (2.8) and write Brownian motion as a sum of i.i.d. standard normal variables. Thus we wish to write for some suitable functions $a_j(\cdot)$, $j = 0, 1, \dots$,

(2.15) $X(t) = \sum_{j=0}^\infty a_j(t)\, \xi_j$,

where the $\xi_j$, $j = 0, 1, 2, \dots$, are i.i.d. standard normal. Let us try to find such a representation at least for $t$ in a finite interval, say the interval $[0, \pi]$. If we were to follow the method above for Gaussian variables $Y \in \mathbf{R}^n$, we would look to find the eigenfunctions of the self-adjoint operator $\Gamma$ defined by

(2.16) $\Gamma f(t) = \int_0^\pi \Gamma(t, s) f(s)\, ds$, $\quad 0 \le t \le \pi$.

It is known from the theory of integral equations that the operator (2.16) is compact on $L^2([0, \pi])$, and this implies that there is an orthonormal basis for $L^2([0, \pi])$ consisting of eigenfunctions of $\Gamma$. Denoting the orthonormal basis of eigenfunctions

by $v_j(t)$, $0 \le t \le \pi$, $j = 0, 1, 2, \dots$, with corresponding non-negative eigenvalues $\lambda_j$, $j = 0, 1, 2, \dots$, we expect following (2.8) the identity

(2.17) $X(t, \omega) = \sum_{j=0}^\infty \sqrt{\lambda_j}\, \xi_j(\omega)\, v_j(t)$, $\quad \omega \in \Omega$, $\ 0 \le t \le \pi$.

We cannot however find the eigenfunctions in (2.17) explicitly. To get an explicit representation we need to use the fact that the derivative of Brownian motion is the white noise Gaussian process $W(t)$, $t \in \mathbf{R}$. This process is defined as the distributional derivative of Brownian motion, so

(2.18) $\int f(s) W(s)\, ds = -\int f'(s) X(s)\, ds$

for all $C^\infty$ functions $f : \mathbf{R} \to \mathbf{R}$ with compact support. Formally $W(t) = dX(t)/dt$, or in other words white noise paths are the infinitesimal time increments of Brownian motion. We have already seen that Brownian paths are differentiable nowhere with probability 1, which is why we must define white noise as a distribution.

The covariance $\Gamma$ of (2.13) for white noise is $\Gamma(t, s) = \delta(t - s)$, where $\delta(\cdot)$ is the Dirac delta function. Thus the corresponding integral operator (2.16) is simply the identity. To see this observe from (2.13) that if $X(\cdot)$ denotes Brownian motion then

(2.19) $\int\!\!\int g(t)\, \Gamma(t, s)\, f(s)\, dt\, ds = \big\langle\, [X(s_2) - X(s_1)][X(t_2) - X(t_1)] \,\big\rangle = \int g(t) f(t)\, dt$

if $g(\cdot)$ is the characteristic function of the interval $[s_1, s_2]$ and $f(\cdot)$ is the characteristic function of the interval $[t_1, t_2]$. Now any orthonormal basis of $L^2([0, \pi])$ is an orthonormal basis of eigenfunctions for the identity operator $\Gamma$ on $L^2([0, \pi])$ with corresponding eigenvalues 1. Choosing the basis consisting of the functions

(2.20) $v_0(t) = \frac{1}{\sqrt{\pi}}$, $\quad v_j(t) = \sqrt{\frac{2}{\pi}} \cos jt$, $\ j = 1, 2, \dots$, $\quad 0 \le t \le \pi$,

we conclude from (2.17) that

(2.21) $W(t) = \frac{\xi_0}{\sqrt{\pi}} + \sqrt{\frac{2}{\pi}} \sum_{j=1}^\infty \xi_j \cos jt$, $\quad 0 \le t \le \pi$,

where the $\xi_j$, $j = 0, 1, \dots$, are i.i.d. standard normal. We have not rigorously proven (2.21) of course, and in fact we have been rather vague about what we mean by the white noise process $W(\cdot)$. Let us continue however and formally integrate equation (2.21) from $0$ to $t$, observing now that Brownian motion is the integral of white noise. Thus we obtain the representation

(2.22) $X(t) = \frac{t\, \xi_0}{\sqrt{\pi}} + \sqrt{\frac{2}{\pi}} \sum_{j=1}^\infty \xi_j\, \frac{\sin jt}{j}$, $\quad 0 \le t \le \pi$,

for Brownian motion. The formula (2.22) is known as the Paley-Wiener representation for Brownian motion.
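Truncating the series (2.22) at finitely many terms gives a practical way to generate approximate Brownian paths on $[0, \pi]$ from i.i.d. standard normals. A sketch (truncation level and sample sizes are illustrative) that checks the sample variance against (2.14):

```python
import numpy as np

rng = np.random.default_rng(7)

n_paths, n_terms = 5000, 2000
t = np.linspace(0, np.pi, 257)
j = np.arange(1, n_terms + 1)

xi = rng.standard_normal((n_paths, n_terms + 1))
basis = np.sin(np.outer(t, j)) / j               # sin(jt)/j, shape (len(t), n_terms)

# Truncation of (2.22): X(t) = t xi_0/sqrt(pi) + sqrt(2/pi) sum_j xi_j sin(jt)/j.
paths = (np.outer(xi[:, 0], t) / np.sqrt(np.pi)
         + np.sqrt(2 / np.pi) * xi[:, 1:] @ basis.T)

# Sanity check against (2.14): Var X(t) = min(t, t) = t.
print(np.max(np.abs(paths.var(axis=0) - t)))     # small, up to truncation/sampling error
```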

One might ask what is the advantage of (2.22) over the representation

(2.23) $X(t) = \sqrt{\Delta t}\, [\xi_0 + \xi_1 + \cdots + \xi_m]$, $\quad t = (m + 1)\Delta t$, $\ m = 0, 1, \dots$,

where the $\xi_j$, $j = 0, 1, \dots$, are i.i.d. standard normal. One advantage is that the partial sums in (2.22) converge in $L^2(\Omega)$ uniformly in $t$ since

(2.24) $\Big\langle \Big( \sum_{j=N+1}^\infty \xi_j\, \frac{\sin jt}{j} \Big)^2 \Big\rangle = \sum_{j=N+1}^\infty \Big( \frac{\sin jt}{j} \Big)^2 \le \frac{C}{N}$

for some constant $C$. Hence representations like (2.22) can be efficient ways of generating Brownian motion starting with i.i.d. standard normal variables.

We have so far defined two Gaussian processes - Brownian motion and its derivative the white noise process. We shall define other Gaussian processes by considering the infinite dimensional version of the formula (2.3) for the probability density. Consider Brownian motion $\phi(t)$, $t \ge 0$, for which we know that if $0 < t_1 < t_2 < \cdots < t_n$, the joint pdf for the variables $\phi(t_j)$, $j = 1, \dots, n$, is given by

(2.25) $\exp\Big[ -\sum_{j=1}^n \frac{\{\phi(t_j) - \phi(t_{j-1})\}^2}{2(t_j - t_{j-1})} \Big] \prod_{j=1}^n d\phi(t_j)\, /\, \text{normalization}$,

where $t_0 = 0$. We can rewrite the sum in (2.25) as

(2.26) $\sum_{j=1}^n \frac{\{\phi(t_j) - \phi(t_{j-1})\}^2}{2(t_j - t_{j-1})} = \frac{1}{2} \int_0^{t_n} \Big[ \frac{d\phi(t)}{dt} \Big]^2 dt$,

where $[t, \phi(t)]$, $0 \le t \le t_n$, is the graph obtained by linear interpolation of the points $[0, 0]$ and $[t_j, \phi(t_j)]$, $j = 1, \dots, n$. Thus we might be tempted to write the Brownian motion measure as

(2.27) $\exp\Big[ -\frac{1}{2} \int_0^\infty \Big[ \frac{d\phi(t)}{dt} \Big]^2 dt \Big] \prod_t d\phi(t)\, /\, \text{normalization}$.

Comparing (2.27) with (2.3) we expect that if $\Gamma$ is the covariance operator for Brownian motion given by (2.14), then

(2.28) $[\phi(\cdot), \Gamma^{-1} \phi(\cdot)] = \int_0^\infty \Big[ \frac{d\phi(t)}{dt} \Big]^2 dt$,

where

(2.29) $[f(\cdot), g(\cdot)] = \int_{t > 0} f(t) g(t)\, dt$,

for functions $f, g \in L^2([0, \infty))$. We conclude that for Brownian motion

(2.30) $\Gamma^{-1} g(t) = -\frac{d^2 g(t)}{dt^2}$, $\quad 0 \le t < \infty$.

We can verify this intuition directly by observing from (2.14) that

(2.31) $\Gamma f(t) = \int_0^t s f(s)\, ds + t \int_t^\infty f(s)\, ds$.

Differentiating (2.31) twice we see that

(2.32) $\frac{d^2}{dt^2} \Gamma f(t) = -f(t)$, $\quad 0 \le t < \infty$,

and hence $\Gamma^{-1}$ is given by (2.30).
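The relation (2.30)-(2.32) between the covariance operator and $-d^2/dt^2$ can be seen concretely on a grid: the matrix $h \min(t_i, t_j)$ discretizing (2.16) is the inverse of the second-difference operator with a Dirichlet condition at $t = 0$. (On a truncated interval a one-sided, Neumann-type difference at the right endpoint stands in for the decay implicitly assumed on $[0, \infty)$; this boundary choice is a discretization detail, not from the notes.) A sketch:

```python
import numpy as np

N = 200
h = 1.0 / N
t = h * np.arange(1, N + 1)

# Discretization of the covariance operator (2.16) with kernel (2.14):
# (Gamma f)(t_i) ~ h * sum_j min(t_i, t_j) f(t_j).
Gamma = h * np.minimum.outer(t, t)

# Discrete -d^2/dt^2: Dirichlet at t = 0 (ghost value f(0) = 0) and a
# one-sided difference at the right endpoint of the truncated interval.
L = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h ** 2
L[-1, -1] = 1 / h ** 2

print(np.max(np.abs(Gamma @ L - np.eye(N))))   # ~ 0, matching (2.30)-(2.32)
```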

It is clear now how we can naturally generalize Brownian motion by considering Gaussian processes for which $\Gamma^{-1}$ is a second order positive definite differential operator. Differential operators come together with boundary conditions, so we need to be a little careful here. Thus $\Gamma^{-1}$ for Brownian motion is actually defined on functions $f(t)$, $t \ge 0$, with the Dirichlet boundary condition $f(0) = 0$. Thus

(2.33) $[f(\cdot), \Gamma^{-1} g(\cdot)] = -\int_0^\infty f(t) g''(t)\, dt = \int_0^\infty f'(t) g'(t)\, dt$,

where we have used the boundary condition $f(0) = 0$ to integrate by parts. It follows from the second integral formula on the RHS of (2.33) that $\Gamma^{-1}$ is positive definite.

Consider now $\Gamma^{-1}$ defined on functions $f : [0, 1] \to \mathbf{R}$ with Dirichlet boundary conditions $f(0) = f(1) = 0$ by

(2.34) $[f(\cdot), \Gamma^{-1} g(\cdot)] = -\int_0^1 f(t) g''(t)\, dt = \int_0^1 f'(t) g'(t)\, dt$,

whence $\Gamma^{-1}$ is positive definite. The Gaussian process associated with (2.34) is called the Brownian bridge. One can easily see from (2.34) that

(2.35) $\Gamma(t, s) = s(1 - t)$ if $s < t$, $\quad \Gamma(t, s) = t(1 - s)$ if $s > t$.

A realization of the Brownian bridge process $\alpha(t)$, $0 \le t \le 1$, can be given in terms of Brownian motion $X(t)$, $t \ge 0$, by the formula $\alpha(t) = X(t) - tX(1)$. To see this all we need to do is to verify from (2.14) that the covariance of $\alpha(t)$, $0 \le t \le 1$, is given by (2.35). Note that $\alpha(t)$, $0 \le t \le 1$, unlike Brownian motion, does not have independent increments.

Next consider $\Gamma^{-1}$ defined on functions $f : (-\infty, \infty) \to \mathbf{R}$ by

(2.36) $[f(\cdot), \Gamma^{-1} g(\cdot)] = \int_{-\infty}^\infty f(t)\{ -g''(t) + g(t) \}\, dt = \int_{-\infty}^\infty \{ f'(t) g'(t) + f(t) g(t) \}\, dt$,

whence $\Gamma^{-1}$ is positive definite. It seems that we are not imposing any boundary conditions on the functions $f(\cdot)$, $g(\cdot)$, but in the integration by parts we are implicitly assuming Dirichlet boundary conditions at $t = \pm\infty$, i.e. $\lim_{t \to \pm\infty} f(t) = 0$. The Gaussian process $Y(t)$, $t \in \mathbf{R}$, associated with (2.36) is called the Ornstein-Uhlenbeck process. One sees from (2.36) that

(2.37) $\Gamma(t, s) = \frac{1}{2} e^{-|t - s|}$.

The process $Y(t)$, $t \in \mathbf{R}$, is Markovian and can be represented by Brownian motion $X(s)$, $s \ge 0$, as

(2.38) $Y(t) = Y(0)\, e^{-t} + X(t) - \int_0^t e^{s - t} X(s)\, ds$, $\quad t > 0$.

In (2.38) the variable $Y(0)$ is independent of the Brownian motion. Taking $Y(0)$ to be Gaussian with mean $0$ and variance $1/2$, we see from (2.14) that $\langle Y(t) Y(s) \rangle$ is given by the RHS of (2.37) for $s, t \ge 0$. In particular $Y(t)$ is Gaussian with mean $0$ and variance $1/2$ for all $t$. Hence the Markov process $Y(t)$ has an invariant measure which is the Gaussian variable with mean zero and variance $1/2$.
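Both covariances (2.35) and (2.37) are easy to confirm by simulation: the bridge directly from $\alpha(t) = X(t) - tX(1)$, and the Ornstein-Uhlenbeck process from an Euler scheme for the dynamics $dY = -Y\, dt + dX$ implicit in (2.38), started in the invariant law $N(0, 1/2)$. A sketch; step sizes, sample counts and the test pair $(s, t)$ are illustrative, and agreement is up to discretization and sampling error:

```python
import numpy as np

rng = np.random.default_rng(8)

n_paths, n_steps = 50_000, 200
dt = 1.0 / n_steps
tt = dt * np.arange(1, n_steps + 1)
X = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)

# Brownian bridge alpha(t) = X(t) - t X(1); check (2.35) at s = 0.25, t = 0.75.
alpha = X - np.outer(X[:, -1], tt)
i, j = 49, 149                                       # tt[i] = 0.25, tt[j] = 0.75
print(np.mean(alpha[:, i] * alpha[:, j]), tt[i] * (1 - tt[j]))   # ~ s(1 - t)

# Ornstein-Uhlenbeck: Euler scheme for dY = -Y dt + dX, Y(0) ~ N(0, 1/2).
Y = np.empty((n_paths, n_steps + 1))
Y[:, 0] = np.sqrt(0.5) * rng.standard_normal(n_paths)
for k in range(n_steps):
    Y[:, k + 1] = Y[:, k] * (1 - dt) + np.sqrt(dt) * rng.standard_normal(n_paths)
print(np.mean(Y[:, i + 1] * Y[:, j + 1]), 0.5 * np.exp(-(tt[j] - tt[i])))  # ~ (2.37)
```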

University of Michigan, Department of Mathematics, Ann Arbor, MI
E-mail address: conlon@umich.edu