On the dynamic programming principle for static and dynamic inverse problems


On the dynamic programming principle for static and dynamic inverse problems

Stefan Kindermann, Industrial Mathematics Institute, Johannes Kepler University Linz, Austria
Joint work with Antonio Leitao, University of Santa Catarina, Brazil

Static Inverse Problems

Static inverse problems in Hilbert spaces:

    F u = y

  F ... (linear) operator between Hilbert spaces
  u ... unknown solution
  y ... data (possibly noisy)

Examples: parameter identification, image processing, ...
Regularization: well-established theory.

Dynamic Inverse Problems

Dynamic inverse problems in Hilbert spaces:

    F(t) u(t) = y(t)

  t    ... (artificial) time parameter
  F(t) ... (linear) time-dependent operator between Hilbert spaces
  u(t) ... unknown time-dependent solution
  y(t) ... time-dependent data (possibly noisy)

These can be handled by standard theory, but standard numerical algorithms do not take the time structure into account.

Examples

  Parameter identification in elliptic PDEs with time-dependent parameters (moving objects)
  Dynamic impedance tomography
  Echocardiography
  Online identification (Kügler)
  ... just include a t in your favorite inverse problem

Regularization for Dynamic Problems

    F(t) : H \to L^2,  F(t) u(t) = y(t)

  Y = L^2([0,T], L^2(\Omega)) ... data space
  First choice for the solution space: X = L^2([0,T], H)

Tikhonov regularization:

    u_\alpha = \arg\min_u \frac12 \int_0^T \|F(t)u(t) - y(t)\|_H^2 \, dt + \frac{\alpha}{2} \int_0^T \|u(t)\|_H^2 \, dt

Solution:

    u_\alpha(t) = (F(t)^* F(t) + \alpha I)^{-1} F(t)^* y(t)

Tikhonov Regularization: the L^2 Case

Solution:

    u_\alpha(t) = (F(t)^* F(t) + \alpha I)^{-1} F(t)^* y(t)

  The problem decouples in time
  Easy to implement, but the results are not satisfactory
  The reconstruction is not continuous in time
  Remedy: use a stronger regularization in t.
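Because the L^2 penalty decouples in time, the regularized solution can be computed one time step at a time. A minimal numerical sketch, assuming a toy random matrix as a stand-in for F(t) (all sizes are hypothetical):

```python
import numpy as np

# Hypothetical sizes: n unknowns, nt time steps; F(t) frozen to one matrix here.
rng = np.random.default_rng(0)
n, nt, alpha = 20, 50, 1e-2
F = rng.standard_normal((n, n)) / n   # stand-in for the operator F(t)
y = rng.standard_normal((n, nt))      # noisy data y(t), one column per time step

# L2-in-time Tikhonov decouples: solve (F*F + alpha I) u(t) = F* y(t) per step.
u = np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ y)

print(u.shape)  # (20, 50): one reconstruction per time step, no coupling in t
```

Each column of u is an independent static Tikhonov reconstruction, which is exactly why the result need not be continuous in t.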

Tikhonov Regularization: the H^1 Case

H^1 regularization in t:

    u_\alpha = \arg\min_u \frac12 \int_0^T \|F(t)u(t) - y(t)\|_H^2 \, dt + \frac{\alpha}{2} \int_0^T \|u'(t)\|_H^2 \, dt

Solution (optimality conditions):

    F(t)^* F(t) u_\alpha(t) - \alpha u_\alpha''(t) = F(t)^* y(t),  u_\alpha'(0) = u_\alpha'(T) = 0

This is a Hilbert-space-valued boundary-value problem. Discretization: if F^*F is a matrix of size n x n and [0,T] is discretized into n_T intervals, one obtains a full matrix of size (n n_T) x (n n_T).
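The size blow-up can be made concrete: replacing u'' by a finite difference couples all time steps into a single (n n_T) x (n n_T) system. A toy sketch, assuming a time-independent F and Neumann end conditions (sizes and the sign convention are my assumptions):

```python
import numpy as np

# Toy discretization of the H^1-regularized normal equations, assuming
# F(t) is frozen to one n x n matrix and u'' is replaced by finite differences.
rng = np.random.default_rng(1)
n, nT, alpha, dt = 5, 8, 1e-2, 1.0 / 8

F = rng.standard_normal((n, n))
y = rng.standard_normal((n, nT))

# Second-difference matrix with Neumann ends (from u'(0) = u'(T) = 0).
D2 = -2.0 * np.eye(nT) + np.eye(nT, k=1) + np.eye(nT, k=-1)
D2[0, 0] = D2[-1, -1] = -1.0

# Full coupled system: (I_t kron F*F - (alpha/dt^2) D2 kron I_n) u = vec(F* y)
A = np.kron(np.eye(nT), F.T @ F) - (alpha / dt**2) * np.kron(D2, np.eye(n))
u = np.linalg.solve(A, (F.T @ y).T.ravel())
print(A.shape)  # (40, 40): the full coupled matrix the slide warns about
```

Even in this toy setting the matrix is dense and of size n*nT; for realistic n and n_T a direct solve is the "naive approach" whose cost is discussed below.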

Louis-Schmitt Method

A general numerical method for such problems was suggested by [Louis, Schmitt]:

  Approximate the time derivatives by differences (t-discretization)
  Decompose the matrices
  The problem then requires solving a Sylvester matrix equation

    F^* F U R + \alpha U = Y

for U, which can be done more efficiently.

Alternative: dynamic programming [S.K., A. Leitao], an iterative method.

Principles of Dynamic Programming

Dynamic programming [R. Bellman]. Idea: follow the path of the optimal solution in t, which yields an evolution equation in t.

Solving the Problem by Dynamic Programming

Rewrite the problem as a constrained optimization problem:

    \min_{u,v} \frac12 \int_0^T \|F(t)u(t) - y(t)\|_H^2 \, dt + \frac{\alpha}{2} \int_0^T \|v(t)\|_H^2 \, dt

subject to the constraint u' = v. This is a linear-quadratic control problem with control v.

Solving the Problem by Dynamic Programming

Value function:

    V(t, \xi) := \min_{u,v} \frac12 \int_t^T \|F(s)u(s) - y(s)\|^2 + \alpha \|v(s)\|^2 \, ds,  u(t) = \xi,  u' = v

The value function satisfies a Hamilton-Jacobi equation:

    -V_t(t, \xi) = \frac12 \|F(t)\xi - y(t)\|^2 - \frac{1}{2\alpha} (V_\xi, V_\xi),  V(T, \xi) = 0

From V(t, \xi) we get the control

    v = -\frac{1}{\alpha} V_\xi(t, u)

and u can be found from the evolution equation

    u'(t) = -\frac{1}{\alpha} V_\xi(t, u(t))

Hamilton-Jacobi Equation

The Hamilton-Jacobi equation is a PDE in a very high-dimensional space: the dimension equals the number of unknowns in the solution, so a direct numerical solution is practically impossible. However, if F is linear, then V is a quadratic functional in \xi. Ansatz:

    V(t, \xi) = \frac12 \langle \xi, Q(t)\xi \rangle + \langle b(t), \xi \rangle + g(t)

Ansatz

    V(t, \xi) = \frac12 \langle \xi, Q(t)\xi \rangle + \langle b(t), \xi \rangle + g(t)

Inserting this into the Hamilton-Jacobi equation yields equations for Q, b, g: a Riccati equation for the operator Q(t), an evolution equation for b, and an equation for the solution u:

    Q'(t) = -F(t)^* F(t) + \frac{1}{\alpha} Q(t)^2,  Q(T) = 0
    b'(t) = \frac{1}{\alpha} Q(t) b(t) + F(t)^* y(t),  b(T) = 0
    u'(t) = -\frac{1}{\alpha} (Q(t) u(t) + b(t))

Dynamic Programming Method I

First apply the dynamic programming principle, then discretize the equations.

Algorithm:
  1. Solve the Riccati equation for Q(t) and the equation for b(t) backwards in time by the explicit Euler method.
  2. Solve the equation for u forwards in time by the explicit Euler method with some initial condition u_0.
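Method I can be sketched on a toy problem. The signs of the Riccati and adjoint equations below are my reconstruction from the slide, F and y are synthetic, and for simplicity F and y are frozen in time with u_0 = 0:

```python
import numpy as np

# Method I sketch: backward explicit Euler for Q and b, forward for u,
# assuming Q' = -F*F + Q^2/alpha, b' = Q b/alpha + F*y, u' = -(Q u + b)/alpha.
rng = np.random.default_rng(2)
n, N, T, alpha = 4, 2000, 1.0, 1e-1
dt = T / N
F = rng.standard_normal((n, n)) / n
y = rng.standard_normal(n)            # data, frozen in time for simplicity

Q = np.zeros((N + 1, n, n))
b = np.zeros((N + 1, n))
for k in range(N, 0, -1):             # backward sweep, Q(T) = b(T) = 0
    Q[k-1] = Q[k] - dt * (-F.T @ F + Q[k] @ Q[k] / alpha)
    b[k-1] = b[k] - dt * (Q[k] @ b[k] / alpha + F.T @ y)

u = np.zeros((N + 1, n))              # forward sweep, u(0) = 0
for k in range(N):
    u[k+1] = u[k] - dt / alpha * (Q[k] @ u[k] + b[k])

print(np.linalg.norm(F @ u[-1] - y))  # residual of the reconstruction at t = T
```

Note that each step costs only dense n x n matrix operations, which is the source of the O(n^3 T) complexity quoted later.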

Alternative Method (Method II)

First discretize the functional in time, t_k = kT/N; then use a discrete version of the dynamic programming principle.

Algorithm: two backward recursions for Q and b, one forward recursion for u:

    Q_{k-1} = (Q_k + \alpha^{-1} I)^{-1} Q_k + F_{k-1}^* F_{k-1},  k = N+1, ..., 2
    b_{k-1} = (Q_k + \alpha^{-1} I)^{-1} b_k - F_{k-1}^* y_{k-1},  k = N+1, ..., 2
    u_k = (Q_k + \alpha I)^{-1} (\alpha u_{k-1} - b_k),  k = 1, ..., N
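The three recursions can be coded directly. The signs below are my reading of the garbled slide, and the operators F_k and data y_k are synthetic stand-ins:

```python
import numpy as np

# Method II sketch: two backward recursions (Q, b), one forward recursion (u).
rng = np.random.default_rng(3)
n, N, alpha = 4, 200, 1e-1
F = [rng.standard_normal((n, n)) / n for _ in range(N + 2)]   # F_k per step
y = [rng.standard_normal(n) for _ in range(N + 2)]            # y_k per step

Q = [np.zeros((n, n)) for _ in range(N + 2)]
b = [np.zeros(n) for _ in range(N + 2)]
for k in range(N + 1, 1, -1):          # k = N+1, ..., 2
    M = np.linalg.inv(Q[k] + np.eye(n) / alpha)
    Q[k-1] = M @ Q[k] + F[k-1].T @ F[k-1]
    b[k-1] = M @ b[k] - F[k-1].T @ y[k-1]

u = [np.zeros(n)]                      # u_0 = 0
for k in range(1, N + 1):              # k = 1, ..., N
    u.append(np.linalg.solve(Q[k] + alpha * np.eye(n), alpha * u[k-1] - b[k]))

print(len(u) - 1, u[-1].shape)         # N forward steps, each an n-vector
```

Only one n x n factorization per time step is needed, so the total cost is linear in the number of time steps, as claimed on the complexity slide.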

Regularization Properties

Spectral representation of the reconstruction (E_\lambda the spectral family of F^*F, T normalized to 1):

    u(t) = \int_0^T \int \gamma(\lambda, \mu, t) \, dE_\lambda \, F^* y(\mu) \, d\mu

    \gamma(\lambda, \mu, t) = \frac{1}{\sqrt{\lambda/\alpha}\,\cosh(\sqrt{\lambda/\alpha})}
        \begin{cases} \cosh(\sqrt{\lambda/\alpha}\,(t-1)) \sinh(\sqrt{\lambda/\alpha}\,\mu), & \mu \le t \\
                      \sinh(\sqrt{\lambda/\alpha}\,t) \cosh(\sqrt{\lambda/\alpha}\,(\mu-1)), & \mu \ge t \end{cases}

For fixed \lambda, \gamma(\lambda, \cdot, \cdot) is the Green's function of the boundary-value problem

    Lx := -x'' + \frac{\lambda}{\alpha} x,  x(0) = 0,  x'(T) = 0

Sturm-Liouville theory then gives convergence results as \alpha \to 0.

Computational Complexity

After discretization let F(t) be an n x N matrix, with T time steps.

  Naive approach (solve the optimality conditions directly): \frac23 (NT)^3 operations
  Louis-Schmitt method (Sylvester matrix equation): about 25(n + T)^3 + 2T\,n(T + N) operations
  Method I (explicit Euler): O(n^3 T); if Q(t) is precomputed, O(n^2 T)
  Method II: O(n^3 T)

The complexity of the dynamic programming methods is linear in T.

Related Work: Dynamic Programming for Static Problems, with T as Regularization Parameter

Static problem: F u = y. Introduce an artificial time variable in u, u = u(t), and approximate the solution by minimizing

    J(u) = \frac12 \int_0^T \|F u(t) - y\|_H^2 \, dt + \frac12 \int_0^T \|u'(t)\|_H^2 \, dt

Use u_T := u(T) as the regularized solution.

Questions: Is this a regularization? What is the limit \lim_{T \to \infty} u_T? Here T acts as the regularization parameter.

Dynamic Programming

Apply the dynamic programming principle as before: a backward evolution for Q and a forward evolution for u,

    Q'(t) = -I + Q(t) F F^* Q(t),  Q(T) = 0
    u'(t) = -F^* Q(t) (F u(t) - y)

The solution is approximated by u_T.

Regularization Properties

By spectral theory (F_\lambda the spectral family of F F^*), the solution of

    Q'(t) = -I + Q(t) F F^* Q(t),  Q(T) = 0

is

    Q(t) = \int q(t, \lambda) \, dF_\lambda,  q(t, \lambda) = \frac{1}{\sqrt{\lambda}} \tanh(\sqrt{\lambda}\,(T - t))

and hence

    u_T := u(T) = \int \left(1 - \frac{1}{\cosh(\sqrt{\lambda}\,T)}\right) \frac{1}{\lambda} \, dE_\lambda \, F^* y + \int \frac{1}{\cosh(\sqrt{\lambda}\,T)} \, dE_\lambda \, u_0

Convergence and convergence-rate results follow from the usual spectral filter theory (as in Engl, Hanke, Neubauer): convergence as T \to \infty, plus convergence rates.
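The scalar factor q(t, lam) = tanh(sqrt(lam)(T - t))/sqrt(lam) can be checked numerically: it solves the scalar Riccati equation q' = -1 + lam q^2 with q(T) = 0, and the filter factor tends to 1/lam for large T. A small sketch with arbitrarily chosen lam and T:

```python
import numpy as np

# Scalar check of the spectral representation of Q(t) and of the filter factor.
lam, T = 2.0, 5.0

def q(t):
    return np.tanh(np.sqrt(lam) * (T - t)) / np.sqrt(lam)

t, h = 1.3, 1e-6
dq = (q(t + h) - q(t - h)) / (2 * h)      # numerical derivative of q
print(abs(dq - (-1 + lam * q(t) ** 2)))   # ~0: the Riccati ODE holds

g = (1 - 1 / np.cosh(np.sqrt(lam) * T)) / lam
print(abs(g - 1 / lam))                   # small for large T: u_T approaches the least-squares solution
```

This is the spectral-filter view: g behaves like a regularizing filter that converges to 1/lam, which is why the usual filter theory applies.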

Discrete Problems

First discretize, then apply the principle of optimality. Instead of T, the iteration index N acts as the regularization parameter. The recursions for Q_i and u_i yield

    u_N = \int g_N(\lambda) \, dE_\lambda \, F^* y,  g_N(\lambda) = \frac{1}{\lambda} \left[ 1 - \left( T_{2N+1}\!\left(\sqrt{\tfrac{\lambda}{4} + 1}\right) \right)^{-1} \right]

where T_n(x) is the Chebyshev polynomial of the first kind of order n. Convergence as N \to \infty, plus convergence rates.
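Assuming this reconstruction of g_N is right, its qualitative behavior as a regularizing filter (bounded near lam = 0, increasing to 1/lam as N grows) can be checked with NumPy's Chebyshev tools:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sanity check of the discrete filter g_N(lam) = (1 - 1/T_{2N+1}(sqrt(lam/4 + 1)))/lam.
def g(N, lam):
    coeffs = np.zeros(2 * N + 2)
    coeffs[-1] = 1.0                   # coefficient vector selecting T_{2N+1}
    return (1 - 1 / C.chebval(np.sqrt(lam / 4 + 1), coeffs)) / lam

lam = 0.5
print(g(5, lam), g(50, lam), 1 / lam)  # g_N(lam) climbs toward 1/lam = 2
```

For x > 1 the Chebyshev polynomials grow like cosh, so this filter is the discrete analogue of the 1 - 1/cosh(sqrt(lam) T) filter of the continuous method.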

Examples

Linear integral equation with convolution kernel, stationary case.

[Figure: error vs. N and T (log-log axes) for Landweber, CG, Method I, Method II]

Methods I and II show the same order of convergence as CG; the algorithms require parameter tuning.

Numerical Results: Dynamic Case

Linearized dynamic impedance tomography problem:

    \nabla \cdot (\gamma(t, \cdot) \nabla u) = 0

Identify \gamma from the Dirichlet-to-Neumann map of the linearized problem,

    \Lambda_\gamma : u|_{\partial\Omega} \mapsto \gamma \, \partial_n u|_{\partial\Omega}

Numerical Results: Dynamic Case

Nonlinear problem: F_{nl} : \gamma(t) \mapsto \Lambda_\gamma. Linearized problem (around \gamma = 1):

    F \, \delta\gamma := F_{nl}'(1) \, \delta\gamma

For the results we use nonlinear data and add random noise.

Numerical Results: Exact Solution

Numerical Results: Reconstruction (5% noise)

Generalizations: Nonlinear Problems

The approach can be generalized to nonlinear problems. Nonlinear dynamic problems:

    F(t, u(t)) = y(t)

where F is a nonlinear operator in u. Tikhonov functional for nonlinear problems:

    J(u) = \frac12 \int_0^T \|F(t, u(t)) - y(t)\|_H^2 \, dt + \frac{\alpha}{2} \int_0^T \|u'(t)\|_H^2 \, dt

Nonlinear Problems: Dynamic Programming

Hamilton-Jacobi equation for the value function:

    -V_t(t, \xi) = \frac12 \|F(t, \xi) - y(t)\|^2 - \frac{1}{2\alpha} (V_\xi, V_\xi),  V(T, \xi) = 0

u can be found from the nonlinear evolution equation

    u'(t) = -\frac{1}{\alpha} V_\xi(t, u(t))

If V is known, the equation for u is rather standard. Open problem: how to solve the Hamilton-Jacobi equation?

Conclusion

  An iterative method for static and dynamic linear inverse problems, based on dynamic programming
  Regularization theory is available
  Complexity is linear in T; Q can be precomputed
  Generalization to nonlinear problems