Dirichlet forms methods for error calculus and sensitivity analysis




Dirichlet forms methods for error calculus and sensitivity analysis. Nicolas BOULEAU, Osaka University, November 2004. These lectures propose tools for studying the sensitivity of models to scalar or functional parameters. A language based on Dirichlet forms is developed in order to manage the propagation of the variances and the biases of errors through mathematical models. Examples will be given in physics, numerical analysis and finance. In the first two lectures, the intuitive calculus of Gauss for the propagation of errors will be connected with the rigorous mathematical framework of error structures. Then the main features of error structures will be studied, up to infinite products, in order to construct error structures on functional spaces and on the Monte Carlo space. The third lecture will be devoted to error calculus on the Wiener space, with the classical Ornstein-Uhlenbeck structure or generalized Mehler-type structures, with applications to SDEs and finance. The fourth lecture will tackle the question of identifying an error structure: the central role of the Fisher information will be exposed, together with other methods based on asymptotic Hopf-type theorems or on the Donsker invariance theorem. Reference: N. Bouleau, Error Calculus for Finance and Physics, De Gruyter, 2003.

First lecture: Propagation of errors: from Gauss to Dirichlet forms
A) Propagation of errors
- 1. Historical outlook (or landscape): the propagation calculus of Gauss; its coherence property, non-coherence of other formulae
- 2. The propagation of errors: by non-linear maps (variances and biases); bias by a quadratic map; intuitive error calculus
- 3. Examples of propagation calculations: Gaussian variables; triangle; oscillograph
- 4. Examples of dynamical systems sensitivity analysis: Feigenbaum transition; piecewise linear system; Lorenz attractor
B) Error structures
- Languages with or without an extension tool: the example of probability theory
- Adding an extension tool to the language of Gauss: the idea
- Definition of error structures
- Examples
- Comparison of approaches

Second lecture: Error structures and sensitivity analysis
C) Properties of error structures
- Recalling the definition
- Lipschitzian calculus
- Images
- Densities and D_loc
- Finite and infinite products
- The gradient and the sharp
- Integration by parts formulae
D) Application in simulation: error calculus on the Monte Carlo space
- Structures with or without border terms
- Sensitivity analysis of a Markov chain
E) Application in numerical analysis
- Sensitivity of an ODE to a functional coefficient
- Comments on finite element methods

Third lecture: New tools for finance
F) Error structures on the Wiener space
G) Error structures on the Poisson space
H) Sensitivity analysis of an SDE, application to finance
I) A non-classical approach to finance

Fourth lecture: Links with statistics and empirical data
J) Identifying an error structure
- The link with the Fisher information and its stability
- Natural error structures for some dynamical systems, extension of the "arbitrary functions method"
K) Convergence in Dirichlet law and transition from finite to infinite dimension
- Central limit theorem
- Convergence of an erroneous random walk to the erroneous Brownian motion: extension of the Donsker theorem and applications

First lecture: Propagation of errors: from Gauss to Dirichlet forms
A) Propagation of errors
- 1/ Historical outlook (or landscape): the propagation calculus of Gauss; its coherence property, non-coherence of other formulae
- 2/ The propagation of errors: by non-linear maps (variances and biases); bias by a quadratic map; intuitive error calculus
- 3/ Examples of propagation calculations: simulation of Gaussian variables; triangle
- 4/ Examples of dynamical systems sensitivity analysis: Feigenbaum transition; piecewise linear system; Lorenz attractor
B) Error structures
- Languages with or without an extension tool: the example of probability theory
- Adding an extension tool to the language of Gauss: the idea
- Definition of error structures
- Examples
- Comparison of approaches

Historical insights: error theory and least squares
- Legendre, Nouvelles méthodes pour la détermination des orbites des comètes (1805): least squares method for choosing the best value
- Gauss, Theoria motus corporum coelestium (1809): famous argument leading to the normal law for the errors
- Laplace, Théorie analytique des probabilités (1811): least squares method for solving linear systems

Now, it is clear that if the errors are correlated, the Gauss formula becomes
$$\sigma_U^2 = \sum_{ij} \frac{\partial F}{\partial V_i}\frac{\partial F}{\partial V_j}\,\sigma_{ij}$$
and in general the $\sigma_{ij}$ depend on the values of the $V_i$'s, so that we obtain the general formula
$$\sigma_U^2 = \sum_{ij} \frac{\partial F}{\partial V_i}\frac{\partial F}{\partial V_j}\,\sigma_{ij}(V_1, V_2, \ldots).$$
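The general formula is easy to exercise numerically. The following sketch (a hypothetical model $F(V_1, V_2) = V_1 V_2$ and an assumed covariance matrix, chosen only for illustration) estimates the partial derivatives by central finite differences and assembles $\sigma_U^2$:

```python
import math

def propagate_variance(F, v, cov, h=1e-6):
    """Gauss propagation with correlated errors:
    sigma_U^2 = sum_ij (dF/dVi)(dF/dVj) sigma_ij,
    the partials being estimated by central finite differences."""
    n = len(v)
    grad = []
    for i in range(n):
        vp, vm = list(v), list(v)
        vp[i] += h
        vm[i] -= h
        grad.append((F(vp) - F(vm)) / (2 * h))
    return sum(grad[i] * grad[j] * cov[i][j]
               for i in range(n) for j in range(n))

# Hypothetical model U = V1 * V2 with correlated errors on V1, V2.
F = lambda v: v[0] * v[1]
v = [2.0, 3.0]
cov = [[0.01, 0.005],
       [0.005, 0.04]]
var_U = propagate_variance(F, v, cov)
# By hand: 3^2*0.01 + 2*(3*2*0.005) + 2^2*0.04 = 0.31
```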

2. The propagation of errors

Intuitive notion of error structure

The preceding example shows that the quadratic error operator $\Gamma$ naturally polarizes into a bilinear operator (like the covariance operator in probability theory), which is a first-order differential operator.

1. We thus adopt the following temporary definition of an error structure: an error structure is a space $\Omega$ equipped with an operator $\Gamma$ acting upon real functions, written $(\Omega, \Gamma)$, and satisfying the following properties:

a) Symmetry: $\Gamma[F, G] = \Gamma[G, F]$;

b) Bilinearity: $\Gamma\bigl[\sum_i \lambda_i F_i,\ \sum_j \mu_j G_j\bigr] = \sum_{ij} \lambda_i \mu_j\, \Gamma[F_i, G_j]$;

c) Positivity: $\Gamma[F] = \Gamma[F, F] \ge 0$;

d) Functional calculus on regular functions:
$$\Gamma[\Phi(F_1,\ldots,F_p),\ \Psi(G_1,\ldots,G_q)] = \sum_{i,j} \Phi'_i(F_1,\ldots,F_p)\,\Psi'_j(G_1,\ldots,G_q)\,\Gamma[F_i, G_j].$$

2. In order to take the biases into account, we also have to introduce a bias operator $A$, a linear operator acting on regular functions through a second-order functional calculus involving $\Gamma$:
$$A[\Phi(F_1,\ldots,F_p)] = \sum_i \Phi'_i(F_1,\ldots,F_p)\,A[F_i] + \frac{1}{2}\sum_{ij} \Phi''_{ij}(F_1,\ldots,F_p)\,\Gamma[F_i, F_j].$$
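The axioms a)-d) can be checked numerically on the simplest flat structure, in which $\Gamma[F, G] = \nabla F \cdot \nabla G$ (coordinates carrying uncorrelated, unit-variance errors). This is only an illustrative sketch; the functions F, G and the point x are arbitrary choices, not from the lectures:

```python
import math

def grad(F, x, h=1e-6):
    """Central finite-difference gradient of F at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((F(xp) - F(xm)) / (2 * h))
    return g

def gamma(F, G, x):
    """Carre du champ of the flat structure: Gamma[F, G] = grad F . grad G."""
    gF, gG = grad(F, x), grad(G, x)
    return sum(a * b for a, b in zip(gF, gG))

F = lambda x: x[0] * x[0] + x[1]
G = lambda x: math.sin(x[0])
x = [0.3, 1.2]

# a) symmetry and c) positivity hold by construction:
assert abs(gamma(F, G, x) - gamma(G, F, x)) < 1e-9
assert gamma(F, F, x) >= 0

# d) functional calculus with phi = exp: Gamma[phi(F)] = phi'(F)^2 Gamma[F]
phi_of_F = lambda x: math.exp(F(x))
lhs = gamma(phi_of_F, phi_of_F, x)
rhs = math.exp(F(x)) ** 2 * gamma(F, F, x)
```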

Example. Let us consider the usual way of simulating two normal random variables:
$$X = R\cos 2\pi V, \qquad Y = R\sin 2\pi V, \qquad \text{with } R = \sqrt{-2\log U},$$
in other words $U = e^{-R^2/2}$. Then $(X, Y)$ is a reduced normal pair when $U$ and $V$ are independent and uniformly distributed on $[0, 1]$. Let us suppose that the variances of the errors on $U$ and $V$ are such that

i) $\Gamma[V, V] = 1$;
ii) $\Gamma[R, R] = 4\pi^2 R^2$;
iii) $\Gamma[V, R] = 0$.

Hypothesis i) means that the error on $V$ does not depend on $V$; hypothesis ii) means that the proportional error on $R$ is constant, which is equivalent, by the functional calculus, to the hypothesis $\Gamma[U, U] = 16\pi^2 U^2 \log^2 U$; hypothesis iii) means that the errors on $V$ and $R$ are uncorrelated. Then, supposing $\Gamma$ satisfies the functional calculus, we get
$$\Gamma[X, X] = \cos^2 2\pi V \cdot \Gamma[R, R] + 4\pi^2 R^2 \sin^2 2\pi V = 4\pi^2 R^2$$
$$\Gamma[Y, Y] = 4\pi^2 R^2$$
$$\Gamma[X, Y] = \cos 2\pi V \sin 2\pi V \cdot \Gamma[R, R] + 4\pi^2 R^2\,(-\sin 2\pi V \cos 2\pi V) = 0.$$
With these hypotheses on $U$ and $V$ and their errors, we can conclude that the variances of the errors on $X$ and $Y$ depend only on the radius $R$, and that the co-error between $X$ and $Y$ vanishes. If we think of $(U, V)$ or $(R, V)$ as the data of the model and $(X, Y)$ as the output, we see that this error calculus may be regarded as a sensitivity analysis, but one dealing with more information than simple differentiation, since we obtain co-sensitivities as well.
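The computation above can be double-checked numerically. The sketch below implements $\Gamma$ for the structure $\Gamma[R] = 4\pi^2 R^2$, $\Gamma[V] = 1$, $\Gamma[R, V] = 0$ via finite-difference partials and recovers $\Gamma[X] = \Gamma[Y] = 4\pi^2 R^2$ and $\Gamma[X, Y] = 0$ (the test point r, v is an arbitrary choice):

```python
import math

def partials(F, r, v, h=1e-6):
    """Central finite-difference partials of F(r, v)."""
    dFr = (F(r + h, v) - F(r - h, v)) / (2 * h)
    dFv = (F(r, v + h) - F(r, v - h)) / (2 * h)
    return dFr, dFv

def gamma(F, G, r, v):
    """Gamma for the structure Gamma[R] = 4 pi^2 R^2, Gamma[V] = 1,
    Gamma[R, V] = 0, via the first-order functional calculus."""
    Fr, Fv = partials(F, r, v)
    Gr, Gv = partials(G, r, v)
    return 4 * math.pi ** 2 * r ** 2 * Fr * Gr + Fv * Gv

X = lambda r, v: r * math.cos(2 * math.pi * v)
Y = lambda r, v: r * math.sin(2 * math.pi * v)

r, v = 1.7, 0.23                       # arbitrary test point
gX = gamma(X, X, r, v)                 # should equal 4 pi^2 r^2
gY = gamma(Y, Y, r, v)                 # should equal 4 pi^2 r^2
gXY = gamma(X, Y, r, v)                # should vanish
expected = 4 * math.pi ** 2 * r ** 2
```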

Error calculation for a triangle. Suppose we are drawing a triangle with a graduated rule and a protractor: we take the polar angle of $OA$, say $\theta_1$, and set $OA = l_1$; next we take the angle $(OA, AB)$, say $\theta_2$, and set $AB = l_2$.

[Figure: triangle $OAB$ in the $(x, y)$ plane; $\theta_1$ is the polar angle of $OA$ at $O$, and $\theta_2$ the angle between $OA$ and $AB$ at $A$.]

1) Select hypotheses on the errors. The quantities $l_1, l_2, \theta_1, \theta_2$ and their errors can be modeled as follows:
$$\left( (0, L)^2 \times (0, \pi)^2,\ \mathcal{B}\bigl((0, L)^2 \times (0, \pi)^2\bigr),\ \frac{dl_1}{L}\frac{dl_2}{L}\frac{d\theta_1}{\pi}\frac{d\theta_2}{\pi},\ \mathbb{D},\ \Gamma \right)$$
where
$$\mathbb{D} = \left\{ f \in L^2\!\left(\frac{dl_1}{L}\frac{dl_2}{L}\frac{d\theta_1}{\pi}\frac{d\theta_2}{\pi}\right) :\ \frac{\partial f}{\partial l_1},\ \frac{\partial f}{\partial l_2},\ \frac{\partial f}{\partial \theta_1},\ \frac{\partial f}{\partial \theta_2} \in L^2\!\left(\frac{dl_1}{L}\frac{dl_2}{L}\frac{d\theta_1}{\pi}\frac{d\theta_2}{\pi}\right) \right\}$$
and
$$\Gamma[f] = l_1^2\left(\frac{\partial f}{\partial l_1}\right)^2 + l_1 l_2\,\frac{\partial f}{\partial l_1}\frac{\partial f}{\partial l_2} + l_2^2\left(\frac{\partial f}{\partial l_2}\right)^2 + \left(\frac{\partial f}{\partial \theta_1}\right)^2 + \frac{\partial f}{\partial \theta_1}\frac{\partial f}{\partial \theta_2} + \left(\frac{\partial f}{\partial \theta_2}\right)^2.$$

This quadratic error operator indicates that the errors on the lengths $l_1, l_2$ are uncorrelated with those on the angles $\theta_1, \theta_2$ (i.e. there is no term in $\frac{\partial f}{\partial l_i}\frac{\partial f}{\partial \theta_j}$). Such a hypothesis proves natural when the measurements are conducted with different instruments. The bilinear operator associated with $\Gamma$ is
$$\Gamma[f, g] = l_1^2\,\frac{\partial f}{\partial l_1}\frac{\partial g}{\partial l_1} + \frac{1}{2}\,l_1 l_2\left( \frac{\partial f}{\partial l_1}\frac{\partial g}{\partial l_2} + \frac{\partial f}{\partial l_2}\frac{\partial g}{\partial l_1} \right) + l_2^2\,\frac{\partial f}{\partial l_2}\frac{\partial g}{\partial l_2} + \frac{\partial f}{\partial \theta_1}\frac{\partial g}{\partial \theta_1} + \frac{1}{2}\left( \frac{\partial f}{\partial \theta_1}\frac{\partial g}{\partial \theta_2} + \frac{\partial f}{\partial \theta_2}\frac{\partial g}{\partial \theta_1} \right) + \frac{\partial f}{\partial \theta_2}\frac{\partial g}{\partial \theta_2}.$$

2) Compute the errors on significant quantities using the functional calculus on $\Gamma$. Take point $B$ for instance:
$$X_B = l_1 \cos\theta_1 + l_2 \cos(\theta_1 + \theta_2), \qquad Y_B = l_1 \sin\theta_1 + l_2 \sin(\theta_1 + \theta_2)$$
$$\Gamma[X_B] = l_1^2 + l_1 l_2\bigl(\cos\theta_2 + 2\sin\theta_1 \sin(\theta_1 + \theta_2)\bigr) + l_2^2\bigl(1 + 2\sin^2(\theta_1 + \theta_2)\bigr)$$
$$\Gamma[Y_B] = l_1^2 + l_1 l_2\bigl(\cos\theta_2 + 2\cos\theta_1 \cos(\theta_1 + \theta_2)\bigr) + l_2^2\bigl(1 + 2\cos^2(\theta_1 + \theta_2)\bigr)$$
$$\Gamma[X_B, Y_B] = -l_1 l_2 \sin(2\theta_1 + \theta_2) - l_2^2 \sin(2\theta_1 + 2\theta_2).$$
For the area of the triangle, the formula $\mathrm{area}(OAB) = \frac{1}{2} l_1 l_2 \sin\theta_2$ yields
$$\Gamma[\mathrm{area}(OAB)] = \frac{1}{4}\,l_1^2 l_2^2\,(1 + 2\sin^2\theta_2).$$
The proportional error on the triangle area,
$$\frac{(\Gamma[\mathrm{area}(OAB)])^{1/2}}{\mathrm{area}(OAB)} = \left( \frac{1}{\sin^2\theta_2} + 2 \right)^{1/2},$$
reaches its minimum $\sqrt{3}$ at $\theta_2 = \pi/2$, i.e. when the triangle is right-angled at $A$. From $OB^2 = l_1^2 + 2 l_1 l_2 \cos\theta_2 + l_2^2$, we obtain
$$\Gamma[OB^2] = 4\bigl[ (l_1^2 + l_2^2)^2 + 3(l_1^2 + l_2^2)\,l_1 l_2 \cos\theta_2 + 2 l_1^2 l_2^2 \cos^2\theta_2 \bigr] = 4\,OB^2\,(OB^2 - l_1 l_2 \cos\theta_2)$$
and, since $\Gamma[OB] = \Gamma[OB^2] / (4\,OB^2)$, we have
$$\frac{\Gamma[OB]}{OB^2} = 1 - \frac{l_1 l_2 \cos\theta_2}{OB^2},$$
thereby providing the result that the proportional error $(\Gamma[OB])^{1/2} / OB$ on $OB$ is minimal when $l_1 = l_2$ and $\theta_2 = 0$, in which case it equals $\sqrt{3}/2$.
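As a numerical sanity check of the area computation, the following sketch evaluates $\Gamma[\mathrm{area}]$ with the triangle structure defined above (the test point p is an arbitrary choice) and compares the proportional error with the closed form $(1/\sin^2\theta_2 + 2)^{1/2}$:

```python
import math

def gamma_triangle(f, p, h=1e-6):
    """Quadratic error operator of the triangle structure,
    p = (l1, l2, theta1, theta2):
    Gamma[f] = l1^2 f_l1^2 + l1 l2 f_l1 f_l2 + l2^2 f_l2^2
               + f_t1^2 + f_t1 f_t2 + f_t2^2."""
    d = []
    for i in range(4):
        pp, pm = list(p), list(p)
        pp[i] += h
        pm[i] -= h
        d.append((f(pp) - f(pm)) / (2 * h))
    l1, l2 = p[0], p[1]
    return (l1 ** 2 * d[0] ** 2 + l1 * l2 * d[0] * d[1] + l2 ** 2 * d[1] ** 2
            + d[2] ** 2 + d[2] * d[3] + d[3] ** 2)

area = lambda p: 0.5 * p[0] * p[1] * math.sin(p[3])

p = [2.0, 3.0, 0.4, math.pi / 3]       # arbitrary (l1, l2, theta1, theta2)
g = gamma_triangle(area, p)
prop = math.sqrt(g) / area(p)          # proportional error on the area
closed = math.sqrt(1.0 / math.sin(p[3]) ** 2 + 2.0)
```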

Cathodic tube. An oscillograph is modeled in the following way. After acceleration by an electric field, the electrons arrive at point $O_1$ with a speed $v_0 > 0$ orthogonal to the plane $P_1$. Between the parallel planes $P_1$ and $P_2$, a magnetic field $B$ orthogonal to $O_1 O_2$ is acting; its components on $O_2 x$ and $O_2 y$ are $(B_1, B_2)$.

[Figure: the three parallel planes $P_1$, $P_2$, $P_3$ along the axis $O_1 O_2 O_3$; the beam enters at $O_1$ with speed $v_0$, crosses the field region of width $a$, and strikes the screen $P_3$ at $M$ with deflection angle $\alpha$.]

Equation of the model. The physics of the problem is classical. The gravity force is negligible, the Lorentz force $q\,v \times B$ is orthogonal to $v$, so that the modulus $|v|$ remains constant and equal to $v_0$, and the electrons describe a circle of radius
$$R = \frac{m v_0}{e |B|}.$$
If $\theta$ is the angle of the trajectory with $O_1 O_2$ as it passes through $P_2$, we then have
$$\theta = \arcsin\frac{a}{R}, \qquad O_2 A = R(1 - \cos\theta), \qquad A = \left( O_2 A\,\frac{B_2}{|B|},\ O_2 A\,\frac{B_1}{|B|} \right).$$

[Figure in the plane of the trajectory: circular arc of radius $R$ between $P_1$ and $P_2$, exit angle $\theta$ at $A$, then a straight path to the screen $P_3$.]

The position of $M$ is thus given by $M = (X, Y)$ with
$$X = \left( \frac{m v_0}{e |B|}(1 - \cos\theta) + d \tan\theta \right) \frac{B_2}{|B|}, \qquad Y = \left( \frac{m v_0}{e |B|}(1 - \cos\theta) + d \tan\theta \right) \frac{B_1}{|B|}, \qquad \theta = \arcsin\frac{a e |B|}{m v_0},$$
where $d$ is the distance between $P_2$ and the screen $P_3$. To study the sensitivity of the point $M$ to the magnetic field $B$, we assume that $B$ varies in $[-\lambda, \lambda]^2$, equipped with the error structure
$$\left( [-\lambda, \lambda]^2,\ \mathcal{B}\bigl([-\lambda, \lambda]^2\bigr),\ \mathbb{P},\ \mathbb{D},\ u \mapsto \left(\frac{\partial u}{\partial B_1}\right)^2 + \left(\frac{\partial u}{\partial B_2}\right)^2 \right).$$
We can now compute the quadratic error on $M$, i.e. the matrix
$$\Gamma[M, M^t] = \begin{pmatrix} \Gamma[X] & \Gamma[X, Y] \\ \Gamma[X, Y] & \Gamma[Y] \end{pmatrix} = \begin{pmatrix} \left(\frac{\partial X}{\partial B_1}\right)^2 + \left(\frac{\partial X}{\partial B_2}\right)^2 & \frac{\partial X}{\partial B_1}\frac{\partial Y}{\partial B_1} + \frac{\partial X}{\partial B_2}\frac{\partial Y}{\partial B_2} \\ \frac{\partial X}{\partial B_1}\frac{\partial Y}{\partial B_1} + \frac{\partial X}{\partial B_2}\frac{\partial Y}{\partial B_2} & \left(\frac{\partial Y}{\partial B_1}\right)^2 + \left(\frac{\partial Y}{\partial B_2}\right)^2 \end{pmatrix} \quad (1)$$
Using some approximations to simplify the calculations, we obtain
$$\Gamma[M, M^t] = \begin{pmatrix} 4\beta^2 B_1^2 B_2^2 + \bigl(\alpha + \beta B_1^2 + 3\beta B_2^2\bigr)^2 & 2\beta B_1 B_2 \bigl(\alpha + 3\beta(B_1^2 + B_2^2)\bigr) \\ 2\beta B_1 B_2 \bigl(\alpha + 3\beta(B_1^2 + B_2^2)\bigr) & 4\beta^2 B_1^2 B_2^2 + \bigl(\alpha + \beta B_2^2 + 3\beta B_1^2\bigr)^2 \end{pmatrix} \quad (2)$$
It follows in particular, by computing the determinant, that the law of $M$ is absolutely continuous.
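The absolute-continuity argument can be illustrated numerically: by formula (1), $\Gamma[M, M^t] = J^t J$ where $J$ is the Jacobian of $(X, Y)$ with respect to $(B_1, B_2)$, so its determinant is $(\det J)^2$ and is positive wherever $J$ is invertible. The sketch below uses hypothetical constants $m = e = v_0 = a = d = 1$ and an arbitrary field value; these are illustrative choices, not the lecture's numbers:

```python
import math

def M(B1, B2):
    """Spot position M = (X, Y) as a function of the field (B1, B2),
    with hypothetical constants m = e = v0 = a = d = 1."""
    nB = math.hypot(B1, B2)
    R = 1.0 / nB                        # R = m v0 / (e |B|)
    theta = math.asin(nB)               # theta = arcsin(a e |B| / (m v0))
    s = R * (1.0 - math.cos(theta)) + math.tan(theta)   # O2A + d tan(theta)
    return s * B2 / nB, s * B1 / nB

def gamma_matrix(B1, B2, h=1e-6):
    """Gamma[M, M^t] of formula (1), i.e. J^t J with J the Jacobian of M,
    estimated by central finite differences."""
    X1 = (M(B1 + h, B2)[0] - M(B1 - h, B2)[0]) / (2 * h)
    X2 = (M(B1, B2 + h)[0] - M(B1, B2 - h)[0]) / (2 * h)
    Y1 = (M(B1 + h, B2)[1] - M(B1 - h, B2)[1]) / (2 * h)
    Y2 = (M(B1, B2 + h)[1] - M(B1, B2 - h)[1]) / (2 * h)
    return [[X1 ** 2 + X2 ** 2, X1 * Y1 + X2 * Y2],
            [X1 * Y1 + X2 * Y2, Y1 ** 2 + Y2 ** 2]]

G = gamma_matrix(0.3, 0.2)              # arbitrary field value with |B| < 1
det = G[0][0] * G[1][1] - G[0][1] ** 2  # = (det J)^2, positive here
```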

If we now suppose that the inaccuracy on the magnetic field stems from a noise in the electric circuit responsible for generating $B$, and that this noise is centered, $A[B_1] = A[B_2] = 0$, we can compute the bias of the errors on $M = (X, Y)$:
$$A[X] = \frac{1}{2}\frac{\partial^2 X}{\partial B_1^2}\,\Gamma[B_1] + \frac{1}{2}\frac{\partial^2 X}{\partial B_2^2}\,\Gamma[B_2], \qquad A[Y] = \frac{1}{2}\frac{\partial^2 Y}{\partial B_1^2}\,\Gamma[B_1] + \frac{1}{2}\frac{\partial^2 Y}{\partial B_2^2}\,\Gamma[B_2],$$
which, with the same approximations as before, yields
$$A[X] = 4\beta B_2, \qquad A[Y] = 4\beta B_1.$$
By comparison with (2), we can observe that
$$A\bigl[\overrightarrow{O_3 M}\bigr] = \frac{4\beta}{\alpha + \beta |B|^2}\,\overrightarrow{O_3 M}.$$

[Figure: the bias vector at $M$ on the screen $P_3$, directed radially away from $O_3$.]

The appearance of biases in the absence of bias on the hypotheses is specifically due to the fact that the method considers the errors, although infinitesimal, to be random quantities. This feature will be highlighted in the following table.
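The second-order functional calculus defining $A$, used above to compute the biases on $M$, can be verified on a one-dimensional toy structure with $\Gamma[F] = F'^2$ and $A[F] = \frac{1}{2}F''$ (unit variance and zero bias on the coordinate); the functions F, $\varphi$ and the point x below are arbitrary choices for illustration:

```python
import math

def d1(F, x, h=1e-4):
    """Central first difference."""
    return (F(x + h) - F(x - h)) / (2 * h)

def d2(F, x, h=1e-4):
    """Central second difference."""
    return (F(x + h) - 2 * F(x) + F(x - h)) / (h * h)

# Toy structure on R: Gamma[F] = F'^2, A[F] = (1/2) F''.
# Second-order chain rule to check:
#   A[phi(F)] = phi'(F) A[F] + (1/2) phi''(F) Gamma[F].
F = lambda x: x ** 3
phi = lambda y: math.sin(y)
x = 0.7

lhs = 0.5 * d2(lambda t: phi(F(t)), x)                 # A[phi(F)] directly
rhs = (math.cos(F(x)) * 0.5 * d2(F, x)                 # phi'(F) A[F]
       - 0.5 * math.sin(F(x)) * d1(F, x) ** 2)         # + (1/2) phi''(F) Gamma[F]
```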