# Stochastic Calculus: Backward Equation and Duality

Jonathan Goodman, Courant Institute, Fall.

## Backward equation and generator

Suppose $X_t$ is a diffusion process described by the SDE

$$ dX_t = a(X_t)\, dt + b(X_t)\, dW_t. \tag{1} $$

Consider this to be a multi-component process with $d$ components. This means that the SDE may be written in component form as

$$ dX_{j,t} = a_j(X_t)\, dt + \sum_{k=1}^{m} b_{jk}(X_t)\, dW_{k,t}, \qquad \text{for } j = 1, \ldots, d. \tag{2} $$

The $W_{k,t}$ are independent standard Brownian motions. The coefficient $b_{jk}$ is the influence of Brownian motion $W_k$ on component $X_j$ of the process. These form the entries of the noise matrix $b$ in the vector form (1). The number $m$ is the number of sources of noise. We have seen that a non-diagonal $b$ can be used to model correlations in the components $dX_{j,t}$. In fact, a model might start with an idea of single-component infinitesimal variance and cross-component correlation:

$$ \sigma_{j,t}^2\, dt = \mathrm{var}(dX_{j,t}), \qquad \rho_{jk,t} = \mathrm{corr}(dX_{j,t}, dX_{k,t}). $$

The noise coefficient matrix $b$ is then constructed to match these modeling specifications.

## A modeling example with correlations

It often happens that $m = d$, so $b$ is a square matrix. If $b$ is non-singular, we say the diffusion is non-degenerate. Degenerate diffusion models with $m < d$ are also common. For example, suppose $r_t$ is the short rate, which is the interest rate paid for loans that are repaid very soon (the next day, often) and have no risk of not being repaid. A simple model of the fluctuations in the short rate might be

$$ dr_t = \gamma(\bar r - r_t)\, dt + \sigma\, dW_t. \tag{3} $$
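As a numerical illustration (not part of the notes), the short-rate model (3) can be simulated with a forward Euler scheme. The parameter values below ($\gamma$, $\bar r$, $\sigma$, the time step) are illustrative choices, not values from the text:

```python
import numpy as np

# Euler scheme for the short-rate model (3):
#   dr_t = gamma*(rbar - r_t) dt + sigma dW_t.
# All parameter values here are illustrative.
def simulate_short_rate(r0, gamma, rbar, sigma, dt, n_steps, rng):
    r = np.empty(n_steps + 1)
    r[0] = r0
    for n in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()   # Brownian increment
        r[n + 1] = r[n] + gamma * (rbar - r[n]) * dt + sigma * dW
    return r

rng = np.random.default_rng(0)
path = simulate_short_rate(r0=0.10, gamma=2.0, rbar=0.03, sigma=0.01,
                           dt=0.01, n_steps=5000, rng=rng)
# The path is pulled from r0 = 10% toward rbar = 3% at rate gamma.
print(path[-1])
```

Note that nothing in the scheme prevents $r_t$ from dipping below zero, which is exactly the criticism of the model discussed below.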

This is an equilibrium model (because there is a steady-state probability distribution) built on the idea that there is a natural short rate $\bar r$ that the fluctuating rate returns to at rate $\gamma$, while being pushed away from $\bar r$ by a noise term $\sigma\, dW_t$ with constant infinitesimal variance. People used to criticize this model because it is possible to have $r_t < 0$. In the meantime, several European countries have at some time experienced periods of a negative short rate.

A money market account is an amount of money whose value changes only because of interest at the short rate. This is modeled by

$$ dM_t = r_t M_t\, dt. \tag{4} $$

Together, the components $r_t$ and $M_t$ form a two-component degenerate diffusion process that may be written in matrix/vector form as

$$ d\begin{pmatrix} r_t \\ M_t \end{pmatrix} = \begin{pmatrix} \gamma(\bar r - r_t) \\ r_t M_t \end{pmatrix} dt + \begin{pmatrix} \sigma \\ 0 \end{pmatrix} dW_t. $$

The matrix $b$ is $2 \times 1$, with a single source of noise for a two-component model.

This model could be enriched by adding a risky asset $S_t$ that satisfies

$$ dS_t = \mu S_t\, dt + \sigma S_t\, dW_t. \tag{5} $$

Of course, the $\sigma$ in the $r_t$ equation (short rate volatility) is different from the $\sigma$ in the risky asset equation (stock volatility). Moreover, we need to model correlations between interest rate and stock price changes. A natural way to do this is to adapt the one-factor market model that Markowitz used to model prices of different stocks. In that model, there is a market factor, which will be called $dZ_{0,t}$, and there are idiosyncratic factors for each stochastic process, which will be called $dZ_{r,t}$ and $dZ_{S,t}$. These three factors are taken to be independent. Correlations between $dS$ and $dr$ are created by weighting the market factor differently. A market factor is a source of noise that affects all the financial instruments in a market in some way. Here, there is just one market factor (a one-factor model), $dZ_{0,t}$. The individual instrument prices have weightings $\beta$ to the market, so that the $\sigma\, dW_t$ in the $r$ equation has a contribution $\beta_r\, dZ_{0,t}$, and the $\sigma\, dW_t$ in the $S$ equation has a contribution $\beta_S\, dZ_{0,t}$. The $\beta$ factors are different, but the noise $dZ_{0,t}$ is the same. This term is responsible for correlations between $dr$ and $dS$. The instruments also have independent idiosyncratic factors, with coefficients $\sigma_r$ and $\sigma_S$. These are sources of uncertainty that affect one instrument and not the other. Idiosyncratic factors are responsible for the less-than-100% correlation between the noisy instrument values. The SDEs that incorporate these modeling ideas are

$$ dr_t = \gamma(\bar r - r_t)\, dt + \beta_r\, dZ_{0,t} + \sigma_r\, dZ_{r,t}, \tag{6} $$

and

$$ dS_t = \mu S_t\, dt + S_t \left( \beta_S\, dZ_{0,t} + \sigma_S\, dZ_{S,t} \right). \tag{7} $$
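The two-component degenerate pair $(r_t, M_t)$ of equations (3)-(4) can be stepped jointly: one Brownian increment drives both components through the $2 \times 1$ noise matrix. A minimal sketch, with illustrative parameters not taken from the notes:

```python
import numpy as np

# Joint Euler step for the degenerate pair (r_t, M_t):
# one Brownian motion, two components. Parameters are illustrative.
def simulate_rate_and_account(r0, M0, gamma, rbar, sigma, dt, n_steps, rng):
    r, M = r0, M0
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        r, M = (r + gamma * (rbar - r) * dt + sigma * dW,
                M + r * M * dt)   # dM_t = r_t M_t dt: the noise-free row
    return r, M

rng = np.random.default_rng(1)
r_T, M_T = simulate_rate_and_account(r0=0.03, M0=1.0, gamma=2.0,
                                     rbar=0.03, sigma=0.01,
                                     dt=0.01, n_steps=1000, rng=rng)
# Over T = 10 at a rate fluctuating around 3%, the account grows by
# roughly a factor of e^{0.3}.
print(r_T, M_T)
```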

Using the independence of the three noise terms $dZ_{0,t}$, $dZ_{r,t}$, and $dZ_{S,t}$, we find

$$ \mathrm{var}(dr_t) = \left( \beta_r^2 + \sigma_r^2 \right) dt \tag{8} $$

$$ \mathrm{var}(dS_t) = S_t^2 \left( \beta_S^2 + \sigma_S^2 \right) dt \tag{9} $$

$$ \mathrm{corr}(dr_t, dS_t) = \frac{\beta_r \beta_S}{\sqrt{\beta_r^2 + \sigma_r^2}\, \sqrt{\beta_S^2 + \sigma_S^2}} \tag{10} $$

The full model may be written as one matrix/vector SDE:

$$ d\begin{pmatrix} r_t \\ M_t \\ S_t \end{pmatrix} = \begin{pmatrix} \gamma(\bar r - r_t) \\ r_t M_t \\ \mu S_t \end{pmatrix} dt + \begin{pmatrix} \beta_r & \sigma_r & 0 \\ 0 & 0 & 0 \\ S_t \beta_S & 0 & S_t \sigma_S \end{pmatrix} \begin{pmatrix} dZ_{0,t} \\ dZ_{r,t} \\ dZ_{S,t} \end{pmatrix}. \tag{11} $$

The first row of this matrix equation is equivalent to the $r$ equation (6). The second row is the noise-free $M$ equation (4). The third row is the risky asset equation (7). This is in the form of the general SDE (1) with the identifications

$$ X_t = \begin{pmatrix} r_t \\ M_t \\ S_t \end{pmatrix}, \qquad a(X_t) = a(r_t, M_t, S_t) = \begin{pmatrix} \gamma(\bar r - r_t) \\ r_t M_t \\ \mu S_t \end{pmatrix}, \qquad dW_t = \begin{pmatrix} dZ_{0,t} \\ dZ_{r,t} \\ dZ_{S,t} \end{pmatrix}, $$

$$ b(r_t, M_t, S_t) = \begin{pmatrix} \beta_r & \sigma_r & 0 \\ 0 & 0 & 0 \\ S_t \beta_S & 0 & S_t \sigma_S \end{pmatrix}. \tag{12} $$

The $d = 3$ component diffusion process (11) seems to have $m = 3$ sources of noise, $Z_0$, $Z_r$, and $Z_S$. But this is misleading, because the noise matrix (12) does not have full rank. The process is a degenerate diffusion. In fact, the same model can be re-written using just two independent noise sources.

## Infinitesimal generator

The stochastic process $X_t$ has the infinitesimal generator (usually just generator)

$$ Lg(x) = \frac{1}{2} \sum_{j=1}^{d} \sum_{k=1}^{d} \mu_{jk}(x)\, \partial_{x_j} \partial_{x_k} g + \sum_{j=1}^{d} a_j(x)\, \partial_{x_j} g. \tag{13} $$

Let $f(x, t)$ be a value function of the form

$$ f(x, t) = \mathrm{E}\left[\, V(X_T) \mid X_t = x \,\right]. $$

Then $f$ satisfies the backward equation

$$ \partial_t f + Lf = 0. \tag{14} $$
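The correlation formula (10) is easy to confirm by Monte Carlo: sample the three independent factor increments, form the noise parts of $dr_t$ and $dS_t$, and compare the sample correlation with the formula. The factor loadings below are illustrative, not from the notes:

```python
import numpy as np

# Monte Carlo check of the correlation formula (10) for the
# one-factor model (6)-(7). Loadings are illustrative choices.
beta_r, sigma_r = 0.010, 0.005
beta_S, sigma_S = 0.20, 0.10

rng = np.random.default_rng(2)
n = 200_000
dZ0, dZr, dZS = rng.standard_normal((3, n))   # independent factors
dr_noise = beta_r * dZ0 + sigma_r * dZr       # noise part of dr_t
dS_noise = beta_S * dZ0 + sigma_S * dZS       # noise part of dS_t / S_t

empirical = np.corrcoef(dr_noise, dS_noise)[0, 1]
predicted = (beta_r * beta_S /
             (np.hypot(beta_r, sigma_r) * np.hypot(beta_S, sigma_S)))
print(empirical, predicted)   # both close to 0.80
```

Note that the correlation of the increments does not depend on $dt$, so the time step can be dropped from the sampling altogether.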

This has been stated before, but here is a derivation. Before the derivation, some explanation of the notation (13) used to define the generator. The numbers $\mu_{jk}$ form a $d \times d$ matrix $\mu(x)$. This is related to the noise matrix $b$ by

$$ \mu(x) = b(x)\, b^T(x). \tag{15} $$

If you want to work in component form, the entries of $\mu$ are given by

$$ \mu_{jk} = \sum_{l=1}^{m} b_{jl}\, b_{kl}. \tag{16} $$

The formulas (15) and (16) are just covariance formulas. The double sum in the generator formula (13) may be written in matrix form in several ways. These use notations for the matrix of second partial derivatives of $g$. This is the Hessian matrix of $g$, which may be denoted $H$, with entries $H_{jk} = \partial_{x_j} \partial_{x_k} g$. The Hessian matrix is also written as $H = D^2 g$. The double sum in (13) is the trace of the matrix product of $\mu$ and $D^2 g$:

$$ \sum_{j=1}^{d} \sum_{k=1}^{d} \mu_{jk}(x)\, \partial_{x_j} \partial_{x_k} g = \mathrm{Tr}(\mu D^2 g). \tag{17} $$

If $A$ is a square matrix, then the trace is the sum of the diagonal entries, $\mathrm{Tr}(A) = \sum_j A_{jj}$. It may seem surprising that this seemingly arbitrary sum is so useful. One explanation is the fact that the trace commutes even if the matrices do not. If $A$ is $d \times n$ and $B$ is $n \times d$, then $AB$ is $d \times d$ and $BA$ is $n \times n$. If $n \neq d$, then $AB \neq BA$, just because they are different sizes. Even in that case, the trace of the $n \times n$ matrix is equal to the trace of the $d \times d$ matrix:

$$ \mathrm{Tr}(AB) = \mathrm{Tr}(BA). $$

In particular, if $A$ is diagonalizable with $R A R^{-1} = \Lambda$ (here, $\Lambda$ is the diagonal matrix with the eigenvalues on the diagonal), then

$$ \mathrm{Tr}(\Lambda) = \mathrm{Tr}(R A R^{-1}) = \mathrm{Tr}\left[ R \left( A R^{-1} \right) \right] = \mathrm{Tr}\left[ \left( A R^{-1} \right) R \right] = \mathrm{Tr}\left[ A \left( R^{-1} R \right) \right] = \mathrm{Tr}(A). $$
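The covariance formula (15) and the rank deficiency claimed for the noise matrix (12) can both be checked numerically. The sketch below evaluates (12) at an illustrative state (the loadings and $S_t = 100$ are example values, not from the notes) and forms $\mu = b\, b^T$:

```python
import numpy as np

# mu = b b^T from (15), for the noise matrix (12) at an illustrative
# state. beta_r, sigma_r, beta_S, sigma_S, and S are example values.
beta_r, sigma_r = 0.010, 0.005
beta_S, sigma_S = 0.20, 0.10
S = 100.0

b = np.array([[beta_r,     sigma_r, 0.0],
              [0.0,        0.0,     0.0],        # money-market row: no noise
              [S * beta_S, 0.0,     S * sigma_S]])
mu = b @ b.T

# mu is symmetric positive semidefinite but singular: the diffusion
# is degenerate, with rank 2 rather than full rank 3.
print(np.linalg.matrix_rank(mu))               # prints 2
print(np.linalg.eigvalsh(mu).min() >= -1e-9)   # prints True (PSD)
```

The rank of $\mu$ equals the rank of $b$, which is how one sees that the model really has only two independent noise sources.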

This shows that the trace of $A$ is the trace of $\Lambda$, which is the sum of the eigenvalues of $A$.

If $A$ and $B$ are compatible for multiplication, then $(AB)_{jl} = \sum_k A_{jk} B_{kl}$. Thus,

$$ \mathrm{Tr}(AB) = \sum_j \left( \sum_k A_{jk} B_{kj} \right). $$

If $B$ is symmetric, we can replace $B_{kj}$ with $B_{jk}$. This justifies the trace formula (17).

Another notation for the double sum in (17) is $\mu :: D^2 g$. You get the inner product of two vectors by multiplying corresponding components and adding the results. You can think of doing the same thing with two matrices. This is written

$$ A::B = \sum_{jk} A_{jk} B_{jk}. $$

These are related by

$$ A::B = \mathrm{Tr}(AB^T), $$

which is the same as $\mathrm{Tr}(AB)$ if $B$ is symmetric. Thus, the generator may be written in matrix/vector form as

$$ Lg(x) = \frac{1}{2}\, \mu(x)::D^2 g + a(x) \cdot \nabla g(x). $$

Here is a derivation of the generator formula (13) from the general definition of the generator,

$$ Lg(x) = \lim_{t \to 0} \frac{\mathrm{E}\left[\, g(X_t) \mid X_0 = x \,\right] - g(x)}{t}. $$

For this, we use the Euler approximation

$$ X_{j,\Delta t} = x_j + a_j(x)\, \Delta t + \sqrt{\Delta t} \sum_{k=1}^{m} b_{jk}(x)\, \xi_k, \qquad \left[\, \xi_k \sim \mathcal{N}(0, 1), \text{ i.i.d.} \,\right]. $$

For the upcoming calculations, we write this in the form

$$ X_{j,\Delta t} = x_j + \Delta X_j, \qquad \Delta X_j = a_j(x)\, \Delta t + \sqrt{\Delta t} \sum_{k=1}^{m} b_{jk}(x)\, \xi_k. \tag{18} $$

We expand $g(X_{\Delta t})$ in a Taylor series around $x$. We simplify by leaving out the argument $x$, writing $g$ for $g(x)$, $\partial_j g$ for $\partial_j g(x)$, etc. The result is

$$ g(x + \Delta X) = g + \sum_{j=1}^{d} \partial_j g\, \Delta X_j + \frac{1}{2} \sum_{j=1}^{d} \sum_{k=1}^{d} \partial_{x_j} \partial_{x_k} g\, \Delta X_j \Delta X_k + O(|\Delta X|^3). $$
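The two trace facts used above, $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$ for non-square factors and $A::B = \mathrm{Tr}(AB^T)$, are quick to verify numerically (the matrices below are arbitrary random examples):

```python
import numpy as np

# Check Tr(AB) = Tr(BA) for non-square A, B, and the double-dot
# identity A::B = sum_{jk} A_jk B_jk = Tr(A B^T).
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2))   # d x n
B = rng.standard_normal((2, 4))   # n x d

t1 = np.trace(A @ B)              # trace of a 4 x 4 matrix
t2 = np.trace(B @ A)              # trace of a 2 x 2 matrix
print(np.isclose(t1, t2))         # True: AB and BA differ, traces agree

C = rng.standard_normal((3, 3))
D = rng.standard_normal((3, 3))
double_dot = np.sum(C * D)        # entrywise product, then sum
print(np.isclose(double_dot, np.trace(C @ D.T)))  # True
```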

The calculation is done to second order because $\Delta X$ is on the order of $\sqrt{\Delta t}$, so the $\Delta X^2$ terms (the quadratic terms in the sum) are on the order of $\Delta t$. We do not calculate the $\Delta X^3$ terms because they are of order $\Delta t^{3/2}$ and vanish in the limit $\Delta t \to 0$ even when divided by $\Delta t$.

We now find the expectation values needed to evaluate the generator. We start with the simpler terms:

$$ \mathrm{E}[\, \Delta X_j \,] = a_j(x)\, \Delta t + O(\Delta t^{3/2}). $$

This is because the terms $\xi_k$ have mean value zero. Thus

$$ \mathrm{E}[\, \partial_{x_j} g\, \Delta X_j \,] = \partial_{x_j} g\, a_j(x)\, \Delta t + O(\Delta t^{3/2}). $$

Next, we look at the quadratic terms. For these, use the fact that $\Delta X_j \Delta X_k$ has a contribution of order $\Delta t$ that involves products $\xi_i \xi_l$, but that all other terms in $\Delta X_j \Delta X_k$ have powers $\Delta t^{3/2}$ or $\Delta t^2$, which do not contribute as $\Delta t \to 0$ after dividing by $\Delta t$. Therefore, we calculate keeping only terms of order $\Delta t$:

$$ \mathrm{E}[\, \Delta X_j \Delta X_k \,] = \Delta t \sum_{i=1}^{m} \sum_{l=1}^{m} b_{ji}\, b_{kl}\, \mathrm{E}[\xi_i \xi_l] + O(\Delta t^{3/2}). $$

Now, use the fact that $\xi_i$ and $\xi_l$ are independent and mean zero if $i \neq l$, while $\mathrm{E}[\xi_l^2] = 1$. This leads to (using the relation $\mu = b b^T$)

$$ \mathrm{E}[\, \Delta X_j \Delta X_k \,] = \Delta t \sum_{l=1}^{m} b_{jl}\, b_{kl} + O(\Delta t^{3/2}) = \Delta t\, \mu_{jk} + O(\Delta t^{3/2}). $$

This evaluates the hard part of the expectation:

$$ \mathrm{E}\left[\, \frac{1}{2} \sum_{jk} \partial_{x_j} \partial_{x_k} g\, \Delta X_j \Delta X_k \,\right] = \frac{1}{2} \sum_{jk} \partial_{x_j} \partial_{x_k} g\, \mu_{jk}\, \Delta t + O(\Delta t^{3/2}). $$

This is the derivation of the generator formula (13) from the definition of the generator. It is one of the fundamental calculations of the subject of stochastic calculus.

## Backward equation, tower property

The backward equation can be derived in generator form, but here is a slightly different presentation that emphasizes the tower property. The tower property is that the conditional expectation of the conditional expectation is the conditional expectation. Specifically, suppose $(U, V, W)$ is a three-component random variable. Let $F(u, v, w)$ be some function of these variables and consider the conditional expectations

$$ G(u, v) = \mathrm{E}[\, F(u, v, W) \mid U = u, V = v \,], $$

$$ H(u) = \mathrm{E}[\, F(u, V, W) \mid U = u \,]. $$
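The generator calculation above can be tested in one dimension: for small $\Delta t$, the finite difference $(\mathrm{E}[g(X_{\Delta t})] - g(x))/\Delta t$ should approach $Lg(x)$. The choices $a(x) = -x$, $b(x) = 1$, $g(x) = x^2$ below are illustrative, giving $Lg(x) = \tfrac{1}{2} b^2 g'' + a g' = 1 - 2x^2$:

```python
import numpy as np

# Monte Carlo check of the generator formula (13) in one dimension,
# with the illustrative choices a(x) = -x, b(x) = 1, g(x) = x^2,
# so that Lg(x) = 1 - 2x^2.
x, dt, n = 1.0, 1e-3, 2_000_000
rng = np.random.default_rng(4)
xi = rng.standard_normal(n)

X = x + (-x) * dt + np.sqrt(dt) * xi        # one Euler step (18)
finite_diff = (np.mean(X**2) - x**2) / dt   # (E[g(X_dt)] - g(x)) / dt
Lg = 1.0 - 2.0 * x**2                       # exact generator value: -1
print(finite_diff, Lg)
```

The finite difference carries both an $O(\Delta t)$ bias and Monte Carlo noise, so it matches $Lg$ only to a tolerance that shrinks with smaller $\Delta t$ and more samples.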

Then $H(u)$ is the conditional expectation of $F$, conditioned on $U = u$. It is also the conditional expectation of $G$, which is the conditional expectation of $F$ conditioned on $U = u$ and $V = v$. In formulas,

$$ H(u) = \mathrm{E}[\, G(u, V) \mid U = u \,]. $$

We apply this generic tower property to the three variables $X_t$, $X_{t+s}$, and $X_T$, and to the function $F = V(X_T)$. We assume $t < t + s < T$. We are interested in the conditional expectation of a function that depends on one of the three variables,

$$ f(x, t) = \mathrm{E}\left[\, V(X_T) \mid X_t = x \,\right]. $$
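The tower property $H(u) = \mathrm{E}[G(u, V) \mid U = u]$ can be verified exactly in a small discrete example (a toy of my own, not from the notes): take $U$, $V$, $W$ to be three independent fair coins in $\{0, 1\}$ and a simple function $F$:

```python
import itertools

# Discrete check of the tower property with three independent fair
# coins U, V, W in {0, 1} and the illustrative F(u, v, w) = u + v*w.
def F(u, v, w):
    return u + v * w

def G(u, v):      # G(u, v) = E[F(u, v, W) | U = u, V = v]
    return sum(F(u, v, w) for w in (0, 1)) / 2

def H(u):         # H(u) = E[F(u, V, W) | U = u]
    return sum(F(u, v, w)
               for v, w in itertools.product((0, 1), repeat=2)) / 4

for u in (0, 1):
    tower = sum(G(u, v) for v in (0, 1)) / 2   # E[G(u, V) | U = u]
    print(u, H(u), tower, abs(H(u) - tower) < 1e-12)
```

Averaging $G$ over $V$ reproduces $H$ exactly, which is the identity the derivation of the backward equation rests on.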


### Factoring Polynomials and Solving Quadratic Equations

Factoring Polynomials and Solving Quadratic Equations Math Tutorial Lab Special Topic Factoring Factoring Binomials Remember that a binomial is just a polynomial with two terms. Some examples include 2x+3

### Examination paper for TMA4115 Matematikk 3

Department of Mathematical Sciences Examination paper for TMA45 Matematikk 3 Academic contact during examination: Antoine Julien a, Alexander Schmeding b, Gereon Quick c Phone: a 73 59 77 82, b 40 53 99

### Cointegration Pairs Trading Strategy On Derivatives 1

Cointegration Pairs Trading Strategy On Derivatives Cointegration Pairs Trading Strategy On Derivatives 1 By Ngai Hang CHAN Co-Authors: Dr. P.K. LEE and Ms. Lai Fun PUN Department of Statistics The Chinese

### Solution to Homework 2

Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

### The Behavior of Bonds and Interest Rates. An Impossible Bond Pricing Model. 780 w Interest Rate Models

780 w Interest Rate Models The Behavior of Bonds and Interest Rates Before discussing how a bond market-maker would delta-hedge, we first need to specify how bonds behave. Suppose we try to model a zero-coupon

### CBE 6333, R. Levicky 1. Tensor Notation.

CBE 6333, R. Levicky 1 Tensor Notation. Engineers and scientists find it useful to have a general terminology to indicate how many directions are associated with a physical quantity such as temperature

### CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

### 1 IEOR 4700: Introduction to stochastic integration

Copyright c 7 by Karl Sigman 1 IEOR 47: Introduction to stochastic integration 1.1 Riemann-Stieltjes integration Recall from calculus how the Riemann integral b a h(t)dt is defined for a continuous function

### A characterization of trace zero symmetric nonnegative 5x5 matrices

A characterization of trace zero symmetric nonnegative 5x5 matrices Oren Spector June 1, 009 Abstract The problem of determining necessary and sufficient conditions for a set of real numbers to be the