Introduction to Inverse Problems (2 lectures)


Introduction to Inverse Problems (2 lectures)

Summary:
- Direct and inverse problems
- Examples of direct (forward) problems
- Deterministic and statistical points of view
- Ill-posed and ill-conditioned problems
- An illustrative example: the deconvolution problem
- Truncated Fourier decomposition (TFD); Tikhonov regularization
- Generalized Tikhonov regularization; Bayesian perspective; iterative optimization

IP, José Bioucas Dias, 2007, IST

Direct/Inverse problems. A direct (forward) problem goes from causes to effects; an inverse problem goes from effects to causes. Example: the direct problem is the computation of the trajectories of bodies from the knowledge of the forces; the inverse problem is the determination of the forces from the knowledge of the trajectories. Newton solved the first direct/inverse problem: the determination of the gravitational force from Kepler's laws describing the trajectories of the planets.

An example: a linear time-invariant (LTI) system. The direct problem is convolution with the system response; in the Fourier domain, multiplication by the transfer function. The inverse problem divides by the transfer function. Source of difficulties: the inverse operator is unbounded, so a small perturbation on the data leads to a perturbation on the estimate in which the high frequencies are amplified, degrading the estimate of f.

Image deblurring. Observation model in (linear) image restoration/reconstruction: g = A f + n, where g is the observed image, f the original image, n the noise, and A a linear operator (e.g., blur, tomography, MRI, ...). Goal: estimate f from g.

Image deblurring via regularization. [Images: original; blurred with a 9x9 uniform kernel; restored.]

MRI example. [Images: hydrogen density; 2D frequency samples (9.4%).]

Compressed sensing (sparse representation). [Plots: a sparse vector f of length N = 1000; a random sensing matrix; the observed data y with M = 100 measurements; the reconstruction.]
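The compressed-sensing setting on this slide can be illustrated numerically. The sketch below uses smaller sizes than the slide's N = 1000, M = 100, and recovers the sparse vector with ISTA, one of the iterative shrinkage/thresholding methods listed later in the lecture; all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 200, 60, 5                            # smaller than the slide's sizes
f = np.zeros(N)
f[rng.choice(N, K, replace=False)] = rng.choice([-1.0, 1.0], K)  # K-sparse vector
A = rng.standard_normal((M, N)) / np.sqrt(M)    # random sensing matrix
y = A @ f                                       # M << N linear measurements

# ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(10000):
    z = x - (A.T @ (A @ x - y)) / L             # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(x - f))                    # small: the sparse vector is recovered
```

Even though the system is heavily underdetermined (M < N), the sparsity prior makes the inverse problem solvable.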

Classes of direct problems. Deterministic observation mechanism: original data (image) → operator → observed data (image), plus a perturbation.

Classes of direct problems (deterministic):
- Linear space-invariant imaging systems: blur (motion, out-of-focus, atmospheric); diffraction-limited imaging; near-field acoustic holography; channel equalization; parameter identification
- Linear space-variant imaging systems (first-kind Fredholm equation): X-ray tomography; MR imaging; radar imaging; sonar imaging; inverse diffraction; inverse source; linear regression

Classes of direct problems. Statistical observation mechanism: original data (image) → observed data (image). Example: linear/nonlinear observations in additive Gaussian noise.

Classes of direct problems (statistical). Linear/nonlinear observation driven by non-additive noise; the original data are the parameters of a distribution or a random signal/image. Examples: Rayleigh noise in coherent imaging; Poisson noise in photo-electric conversion; SPET (single photon emission tomography); PET (positron emission tomography). Example: amplitude in a coherent imaging system (radar, ultrasound): terrain reflectance observed through the in-phase/quadrature backscattered signal.

Well-posed/ill-posed inverse problems [Hadamard, 1923]. Definition: let A be a (possibly nonlinear) operator. The inverse problem of solving g = A(f) is well-posed in the Hadamard sense if: 1) a solution exists for any g in the observed-data space; 2) the solution is unique; 3) the inverse mapping g → f is continuous. An inverse problem that is not well-posed is termed ill-posed. The operator A of a well/ill-posed inverse problem is itself termed well/ill-posed.

Finite/infinite-dimensional linear operators. For a linear operator A, the inverse problem g = A f is well-posed if conditions 1) and 2) hold or, equivalently, if the range of A is the whole data space and the null space of A is trivial. If the space is finite-dimensional, the corresponding inverse problem is well-posed iff either one of properties 1) and 2) holds. In infinite-dimensional spaces this equivalence fails. Example: consider an operator A for which any solution of g = A f, when it exists, is unique (trivial null space); however, there are elements g not in the range of A. Thus A is ill-posed (point 1 of the Hadamard conditions does not hold). Stability is also lacking: there is a sequence of data perturbations for which the corresponding solutions do not converge.

Ill-conditioned inverse problems. Ill-posed ≠ ill-conditioned: many well-posed inverse problems are ill-conditioned, in the sense that small relative perturbations of the data produce large relative perturbations of the solution. For linear operators the tight bound is given by the condition number: ||δf||/||f|| ≤ cond(A) · ||δg||/||g||.

Example: Discrete deconvolution. Cyclic convolution of N-periodic functions: g(n) = Σ_m h((n−m) mod N) f(m). In matrix notation, g = A f, where A is cyclic Toeplitz (circulant).

Example: Discrete deconvolution (cont.)

Eigen-decomposition of cyclic matrices: A = F^H Λ F, where F is the (unitary) eigenvector matrix, namely the Fourier matrix, and Λ is the diagonal eigenvalue matrix; the eigenvalue λ_k is the DFT of the kernel h at frequency k.
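This eigendecomposition can be verified numerically: the unitary DFT matrix diagonalizes every cyclic (circulant) matrix, and the eigenvalues are the DFT of the kernel. A small sketch (the kernel values are arbitrary illustrative choices):

```python
import numpy as np

N = 8
h = np.array([0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25])   # cyclic kernel
# Circulant (cyclic Toeplitz) matrix: column j is h cyclically shifted by j
A = np.stack([np.roll(h, j) for j in range(N)], axis=1)

lam = np.fft.fft(h)                      # claimed eigenvalues: DFT of the kernel
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT (Fourier) matrix

# Check A = F^H diag(lam) F, i.e., F diagonalizes the circulant matrix
A_rebuilt = F.conj().T @ np.diag(lam) @ F
print(np.max(np.abs(A - A_rebuilt)))     # numerically zero
```

Because F does not depend on h, the same unitary matrix diagonalizes all cyclic convolution operators at once, which is what makes Fourier-domain deconvolution possible.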

Example: Discrete deconvolution (geometric viewpoint): the action of A on f, seen through its eigendecomposition.

Example: Discrete deconvolution (inferring f). Assume that all eigenvalues λ_k are nonzero. Then A is invertible. Thus, assuming the direct model g = A f + n, we have the estimate A⁻¹ g = f + A⁻¹ n, i.e., the true signal plus the error term A⁻¹ n.

Example: cyclic convolution with a Gaussian kernel. [Plots: the kernel, the original signal, the observed signal, and the naive reconstruction, which is dominated by large oscillations.] What went wrong?

Example: Discrete deconvolution (estimation error). Size of the error: assume that the noise satisfies ||n|| ≤ ε. Then the estimation error A⁻¹ n lies in a set enclosed by an ellipsoid with radii ε/|λ_k|.

Example: Discrete deconvolution (estimation error, cont.). The estimation error is the vector e = A⁻¹ g − f = A⁻¹ n. Its Fourier components satisfy ê_k = n̂_k / λ_k, so small eigenvalues amplify the corresponding noise components.

Cyclic convolution with a Gaussian kernel (cont.). [Plot, log scale: |λ_k| of the Gaussian kernel against the flat spectrum of a unit impulse function.] Noise dominates at high frequencies and is amplified by 1/|λ_k|.

Example: Discrete deconvolution (A is ill-posed). Assume now that A is not invertible, i.e., some eigenvalues λ_k are zero. A least-squares solution minimizes ||g − A f||; the component of g orthogonal to the range of A cannot be explained and remains as a projection error.

Example: Discrete deconvolution (A is ill-posed, cont.). Least-squares approach: decompose g into orthogonal components, one on the range of A and one on its orthogonal complement.

Example: Discrete deconvolution (A is ill-posed, cont.). Invisible objects: components of f in the null space of A produce no output and cannot be recovered from the data. Setting them to zero gives the minimum-norm solution (related to the Moore-Penrose inverse).
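A minimal numerical illustration of invisible objects and the minimum-norm solution; the averaging kernel below is an illustrative choice that makes A singular:

```python
import numpy as np

# Singular cyclic kernel: its DFT is (1, 0, ..., 0), so A has zero eigenvalues
N = 8
h = np.ones(N) / N                       # averaging kernel
A = np.stack([np.roll(h, j) for j in range(N)], axis=1)

g = A @ np.arange(N, dtype=float)        # data: every entry equals the input mean

f_ls = np.linalg.pinv(A) @ g             # minimum-norm least-squares solution

# Any zero-mean vector is an "invisible object": A maps it to 0
invisible = np.array([1., -1., 0., 0., 0., 0., 0., 0.])
print(np.linalg.norm(A @ invisible))     # 0: invisible to the observation
print(f_ls)                              # constant vector: only the mean is recovered
```

The pseudoinverse recovers only the visible component (here, the mean of the input); the invisible components are set to zero, which is exactly the minimum-norm choice.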

Example: Discrete deconvolution (Regularization). Whether A is ill-conditioned or ill-posed, in both cases small eigenvalues are sources of instability. Often, the smaller the eigenvalue, the more oscillating the corresponding eigenvector (high frequencies). Regularization by filtering: shrink or threshold the large values of 1/λ_k, i.e., multiply the inverse eigenvalues by a regularizer filter that goes to zero as λ_k → 0.

Example: Discrete deconvolution. Regularization by filtering (frequency multiplication ↔ time convolution): choose a filter w_k such that 1) w_k/λ_k → 0 as λ_k → 0, and 2) the larger eigenvalues are retained. Two standard choices: Truncated Fourier Decomposition (TFD) and the Tikhonov (Wiener) filter.

Example: Discrete deconvolution (Regularization by filtering). TFD: keep the frequencies where |λ_k| exceeds a threshold and discard the rest. Tikhonov: w_k = |λ_k|²/(|λ_k|² + μ). Tikhonov regularization thus computes, in each frequency, f̂_k = λ_k* ĝ_k/(|λ_k|² + μ); this is the solution of the variational problem min_f ||g − A f||² + μ ||f||².

Example: Discrete deconvolution (1D example). Gaussian-shaped kernel with standard deviation 20. [Plots: |λ_k| versus frequency on a log scale; the original f and the observation g.]

Example: Discrete deconvolution (1D example, TFD). [Plots: TFD reconstructions f̂ for several truncation levels, compared with the original f.]

Example: Discrete deconvolution (2D example, TFD): uniform blur.

Example: Discrete deconvolution (2D example, TFD, cont.)

Curing ill-posed/ill-conditioned inverse problems. Golden rule: search for solutions which (a) are compatible with the observed data and (b) satisfy additional constraints (a priori, or prior, information) coming from the physics of the problem.

Generalized Tikhonov regularization. Tikhonov and TFD regularization are not well suited to deal with nonhomogeneities in the data, such as edges. Generalized Tikhonov regularization minimizes the sum of a data-discrepancy term and a penalty/regularization term. Bayesian viewpoint: the data-discrepancy term is the negative log-likelihood and the penalty term is the negative log-prior.
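The correspondence between the regularized criterion and the Bayesian view can be written out explicitly; a sketch assuming additive Gaussian noise of variance σ² and a prior p(f) ∝ exp(−λφ(f)):

```latex
\hat{f}_{\mathrm{MAP}}
  = \arg\max_f \, p(f \mid g)
  = \arg\min_f \;
    \underbrace{\tfrac{1}{2\sigma^2}\,\lVert g - A f \rVert^2}_{\text{data discrepancy} \,=\, -\log\text{likelihood}}
  \;+\;
    \underbrace{\lambda\,\phi(f)}_{\text{penalty} \,=\, -\log\text{prior}}
```

Quadratic φ recovers classical Tikhonov regularization; other choices of φ (e.g., edge-preserving penalties) correspond to different priors.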

Dominating approaches to regularization 1) 2) 3) 4) In given circumstances 2), 3), and 4) are equivalent 36

Example: Discrete deconvolution (Nonquadratic regularization). The penalty should discourage oscillatory solutions. Discontinuity-preserving (robust) regularization is nonconvex, giving a hard optimization problem; non-discontinuity-preserving regularization is convex, giving a tractable optimization problem.

Optimization. Quadratic: reduces to a linear system of equations; large systems require iterative methods. Non-quadratic and smooth: steepest descent, nonlinear conjugate gradient, Newton, trust regions, ... Non-quadratic and nonsmooth: constrained optimization (linear, quadratic, second-order cone programs); methods include iterative shrinkage/thresholding, coordinate/subspace optimization, forward-backward splitting, primal-dual Newton, and the majorization-minimization (MM) class.

Majorization-Minimization (MM) framework. Let Q(x | x_t) majorize the objective J(x), with equality if and only if x = x_t; the MM algorithm minimizes the surrogate Q at each step. Monotonicity is then easy to prove. Notes: the surrogate should be easy to optimize; EM is an algorithm of this type.
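The monotonicity property can be checked numerically on a toy problem; the objective (x − 1)² + |x| and the quadratic majorizer of |x| below are illustrative choices, not from the slides:

```python
import numpy as np

# Objective: J(x) = (x - 1)^2 + |x|. Quadratic majorizer of |x| at x_t:
# |x| <= 0.5 * (x^2 / |x_t| + |x_t|), with equality at x = x_t.
def J(x):
    return (x - 1.0) ** 2 + abs(x)

x = 5.0
values = [J(x)]
for _ in range(100):
    w = 1.0 / max(abs(x), 1e-12)        # curvature of the majorizer at x_t
    # Minimize the surrogate (x - 1)^2 + 0.5 * w * x^2 (constant terms dropped):
    # setting the derivative 2(x - 1) + w x to zero gives
    x = 2.0 / (2.0 + w)
    values.append(J(x))

# MM guarantee: each step can only decrease the true objective
print(all(b <= a + 1e-12 for a, b in zip(values, values[1:])))
print(x)                                 # approaches the minimizer x* = 0.5
```

Each iteration solves an easy quadratic problem, yet the nonsmooth objective decreases monotonically, which is the mechanism behind the MM (and EM) class of algorithms.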

Example: Discrete deconvolution (1D example, nonquadratic regularization). [Plots: Tikhonov and nonquadratic reconstructions f̂ compared with the original f.]

Example: Discrete deconvolution (2D example, Total Variation). Total variation (TV) regularization: the TV regularizer penalizes highly oscillatory solutions while preserving edges.
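The edge-preservation claim can be made concrete with the standard discrete TV; a small sketch in which the `tv` helper, the image sizes, and the test patterns are illustrative assumptions:

```python
import numpy as np

def tv(u):
    # Isotropic discrete total variation: sum of gradient magnitudes,
    # with the last row/column gradients set to zero via replication
    dx = np.diff(u, axis=1, append=u[:, -1:])
    dy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

# A sharp edge and an oscillatory image with the same dynamic range
edge = np.zeros((32, 32)); edge[:, 16:] = 1.0
osc = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)   # checkerboard

print(tv(edge), tv(osc))   # TV penalizes the oscillation far more than the edge
```

A single clean edge pays a modest TV cost, while an oscillatory pattern of the same amplitude pays a cost roughly proportional to the number of pixels; this is why TV-regularized solutions suppress oscillations but keep edges.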

Bibliography: [Ch. 1, RB2], [Ch. 1, L1]

Important topics:
- Euclidean and Hilbert spaces of functions [App. A, RB2]
- Linear operators in function spaces [App. B, RB2]
- Euclidean vector spaces and matrices [App. C, RB2]
- Properties of the DFT and the FFT algorithm [App. B, RB2]

Matlab scripts: TFD_regularization_1D.m, TFD_regularization_2D.m, TFD_Error_1D.m, TV_regulatization_1D.m