Computational Optical Imaging - Optique Numerique -- Deconvolution --




Computational Optical Imaging - Optique Numerique -- Deconvolution -- Winter 2014 Ivo Ihrke

Deconvolution Ivo Ihrke

Outline
- Deconvolution Theory
- Example: 1D deconvolution
  - Fourier method
  - Algebraic method: discretization, matrix properties, regularization, solution methods
- Deconvolution Examples

Applications - Astronomy (before/after comparison; images courtesy of Robert Vanderbei)

Applications - Microscopy (images courtesy of Meyer Instruments)

Inverse Problem - Definition
forward problem: given a mathematical model M and its parameters m, compute (predict) the observations o
inverse problem: given observations o and the mathematical model M, compute the model's parameters m

Inverse Problems Example - Deconvolution
forward problem: convolution (example: blur filter)
given an image m and a filter kernel k, compute the blurred image o = k * m

Inverse Problems Example - Deconvolution
inverse problem: deconvolution (example: blur filter)
given a blurred image o and a filter kernel k, compute the sharp image m
need to invert o = k * m + n, where n is noise

Fourier Solution

Deconvolution - Theory
deconvolution in Fourier space
convolution theorem (F is the Fourier transform): F(k * m) = F(k) . F(m)
deconvolution: m = F^-1( F(o) / F(k) )
problems: division by zero, Gibbs phenomenon (ringing artifacts)

A One-Dimensional Example - Deconvolution Spectral
most common case: the filter kernel k is a low-pass filter; the inverse filter 1/F(k) is then a high-pass filter and amplifies noise and numerical errors

A One-Dimensional Example - Deconvolution Spectral
the reconstruction is noisy even if the data is perfect!
Reason: numerical errors in the representation of the function are amplified by the inverse filter
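
To make this concrete, here is a minimal NumPy sketch of naive Fourier deconvolution in 1D; the box signal, Gaussian kernel, and sizes are illustrative choices, not taken from the lecture. Even with no added noise, dividing by the near-zero high frequencies of F(k) amplifies floating-point rounding errors, exactly as the slide describes.

```python
import numpy as np

n = 256
x = np.linspace(0, 1, n)
signal = (np.abs(x - 0.5) < 0.2).astype(float)    # box signal m
kernel = np.exp(-0.5 * ((x - 0.5) / 0.02) ** 2)   # Gaussian blur kernel k
kernel /= kernel.sum()

# forward problem via the convolution theorem: F(o) = F(k) . F(m)
K = np.fft.fft(np.fft.ifftshift(kernel))          # kernel spectrum F(k)
observed = np.real(np.fft.ifft(np.fft.fft(signal) * K))

# naive deconvolution: m = F^-1( F(o) / F(k) )
recovered = np.real(np.fft.ifft(np.fft.fft(observed) / K))

# large even for "perfect" data: rounding errors are amplified by 1/|F(k)|
print("max reconstruction error:", np.max(np.abs(recovered - signal)))
```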

A One-Dimensional Example Deconvolution Spectral spectral view of signal, filter and inverse filter

A One-Dimensional Example Deconvolution Spectral solution: restrict frequency response of high pass filter (clamping)
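
A sketch of this clamping idea, assuming NumPy and reusing K and observed from the sketch above; the threshold eps is an illustrative choice.

```python
import numpy as np

def clamped_inverse_filter(observed, K, eps=1e-2):
    """Restrict the frequency response of the inverse (high-pass) filter:
    where |F(k)| < eps, clamp the magnitude to eps so 1/F(k) stays bounded."""
    mag = np.abs(K)
    K_clamped = np.where(mag < eps, eps * np.exp(1j * np.angle(K)), K)
    return np.real(np.fft.ifft(np.fft.fft(observed) / K_clamped))

# rec = clamped_inverse_filter(observed, K)
```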

A One-Dimensional Example - Deconvolution Spectral reconstruction with clamped inverse filter

A One-Dimensional Example Deconvolution Spectral spectral view of signal, filter and inverse filter

A One-Dimensional Example - Deconvolution Spectral
automatic per-frequency tuning: Wiener deconvolution
- alternative definition of the inverse kernel
- least-squares optimal
- per-frequency SNR must be known
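
A minimal Wiener deconvolution sketch, assuming NumPy and the kernel spectrum K from above. The constant snr stands in for the per-frequency SNR the slide says must be known; a constant is a common simplification, not the lecture's exact setup.

```python
import numpy as np

def wiener_deconvolve(observed, K, snr=100.0):
    # Wiener inverse kernel: conj(F(k)) / (|F(k)|^2 + 1/SNR)
    # behaves like 1/F(k) where the kernel is strong and is automatically
    # attenuated where it is weak (least-squares optimal)
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(observed) * W))
```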

Algebraic Solution

A One-Dimensional Example - Deconvolution Algebraic
alternative: algebraic reconstruction
discretize the convolution: represent the signal as a linear combination of basis functions

A One-Dimensional Example - Deconvolution Algebraic
discretization: the observations are linear combinations of the convolved basis functions
linear system A x = o with unknown coefficient vector x
often over-determined, i.e. more observations o than degrees of freedom (# basis functions)

A One-Dimensional Example - Deconvolution Algebraic
normal equations: A^T A x = A^T o
solve to obtain the solution in a least-squares sense
applied to our deconvolution example, the solution is completely broken!
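
A sketch of this algebraic route, assuming NumPy and SciPy; the sizes, kernel width, and noise level are illustrative choices. It discretizes the convolution as a matrix A, forms the normal equations, and shows how badly the naive solve breaks (depending on the platform it may even fail outright; either way the result is useless).

```python
import numpy as np
from scipy.linalg import convolution_matrix

n = 128
t = np.linspace(-1, 1, n)
kernel = np.exp(-0.5 * (t / 0.1) ** 2)             # smooth (low-pass) kernel
kernel /= kernel.sum()

A = convolution_matrix(kernel, n, mode='same')     # n x n convolution operator
x_true = (np.abs(np.linspace(0, 1, n) - 0.5) < 0.2).astype(float)
o = A @ x_true + 1e-6 * np.random.randn(n)         # even tiny noise suffices

x_ls = np.linalg.solve(A.T @ A, A.T @ o)           # least squares via normal eqs.
print("error:", np.linalg.norm(x_ls - x_true))     # huge: completely broken
```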

A One-Dimensional Example - Deconvolution Algebraic
Why? analyze the distribution of the eigenvalues of A^T A
Remember: we will check the singular values of A
this is OK since A^T A is SPD (symmetric, positive semi-definite) and thus has non-negative eigenvalues
the singular values of A are the square roots of the eigenvalues of A^T A
if singular values are (numerically) zero, the matrix is under-determined

A One-Dimensional Example - Deconvolution Algebraic
the matrix has a very wide range of singular values!
more than half of the singular values are smaller than machine epsilon (about 2.2e-16 for double precision)
(log-plot of the singular values)
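
A quick way to inspect the singular-value spread, assuming NumPy and matplotlib and reusing the matrix A from the sketch above.

```python
import numpy as np
import matplotlib.pyplot as plt

s = np.linalg.svd(A, compute_uv=False)         # singular values, descending
eps = np.finfo(float).eps
print("condition number:", s[0] / s[-1])
print("fraction below machine epsilon:", np.mean(s < eps * s[0]))

plt.semilogy(s)                                # the log-plot from the slide
plt.xlabel("index i"); plt.ylabel("singular value sigma_i")
plt.show()
```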

A One-Dimensional Example - Deconvolution Algebraic
Why is this bad? Singular Value Decomposition: A = U D V^T, where U, V are orthonormal and D is diagonal
the singular values are the diagonal elements of D
inverse of A: A^-1 = V D^-1 U^T, so inversion divides by the singular values

A One-Dimensional Example - Deconvolution Algebraic
computing model parameters from observations: x = V D^-1 U^T o
again: amplification of noise, potential division by zero
(log-plot of the inverted singular values 1/sigma_i)

A One-Dimensional Example - Deconvolution Algebraic
inverse problems are often ill-conditioned (have a numerical null-space)
inversion causes amplification of noise in the numerical null-space

Well-Posed and Ill-Posed Problems
Definition [Hadamard1902]: a problem is well-posed if
1. a solution exists
2. the solution is unique
3. the solution depends continuously on the data

Well-Posed and Ill-Posed Problems
Definition [Hadamard1902]: a problem is ill-posed if it is not well-posed
most often condition (3) is violated: if the model has a (numerical) null-space, parameter choices in the null-space influence the data very slightly, if at all
noise then takes over and is amplified when inverting the model

Condition Number
measure of ill-conditionedness: the condition number, a measure of stability for numerical inversion
ratio between the largest and smallest singular value: kappa = sigma_max / sigma_min
a smaller condition number means fewer problems when inverting the linear system
a condition number close to one implies a near-orthogonal matrix

Truncated Singular Value Decomposition
solution to the stability problems: avoid dividing by values close to zero
Truncated Singular Value Decomposition (TSVD): invert only the singular values sigma_i > alpha and set the remaining 1/sigma_i to zero
alpha is called the regularization parameter
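
A minimal TSVD sketch, assuming NumPy: invert only the singular values above the regularization parameter alpha and discard the rest.

```python
import numpy as np

def tsvd_solve(A, o, alpha):
    """Truncated SVD solve: pseudo-inverse with singular values <= alpha zeroed."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    d_inv = np.zeros_like(s)
    keep = s > alpha                  # avoid dividing by values close to zero
    d_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (d_inv * (U.T @ o))

# x_tsvd = tsvd_solve(A, o, alpha=1e-8)
```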

Minimum Norm Solution
let N(A) be the null-space of A; adding any vector from N(A) to a least-squares solution leaves the residual unchanged
the solution with no component in N(A) is the minimum norm solution

Regularization
countering the effect of ill-conditioned problems is called regularization
an ill-conditioned problem behaves like a singular (i.e. under-constrained) system: a family of solutions exists
impose additional knowledge to pick a favorable solution
TSVD results in the minimum norm solution

Example 1D Deconvolution
back to our example: apply TSVD
the solution is much smoother than the Fourier deconvolution
(figures: unregularized solution vs. TSVD regularized solution)

Large Scale Problems
consider 2D deconvolution: a 512x512 image with 256x256 basis functions
the least squares problem results in a matrix that is 65536x65536!
even worse in 3D (millions of unknowns)
problem: computing the SVD of such systems is impractical today; even on an Intel Xeon 2-core (E5503) @ 2GHz (introduced 2010) it takes a couple of hours well before reaching this size
Question: How to compute regularized solutions for large scale systems?

Explicit Regularization
Answer: modify the original problem to include additional optimization goals (e.g. small norm solutions)
minimize the modified quadratic form ||A x - o||^2 + lambda ||L x||^2
regularized normal equations: (A^T A + lambda L^T L) x = A^T o

Modified Normal Equations
include a data term, a smoothness term, and a blending parameter:
- ||A x - o||^2: data term
- ||L x||^2: prior information (popular: smoothness)
- lambda: blending (regularization) parameter

Tikhonov Regularization
setting L = I we obtain a quadratic optimization problem with a data fitting term and a minimum norm term: ||A x - o||^2 (data fitting) + lambda ||x||^2 (minimum norm)
a large lambda results in a small-norm solution; a small lambda fits the data well
find a good trade-off via the regularization parameter lambda
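
A Tikhonov sketch for this L = I case, assuming NumPy; the value of lam below is only illustrative.

```python
import numpy as np

def tikhonov_solve(A, o, lam):
    # regularized normal equations: (A^T A + lam * I) x = A^T o
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ o)

# x_reg = tikhonov_solve(A, o, lam=1e-3)   # larger lam -> smoother solution
```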

Tikhonov Regularization - Example
reconstructions for different choices of lambda
small lambda: many oscillations; large lambda: smooth solution (constant in the limit)

Tikhonov Regularization - Example

L-Curve criterion [Hansen98]
need an automatic way of determining lambda
want a solution with small oscillations, but also a good data fit
log-log plot of the norm of the residual ||A x - o|| (data fitting error) vs. the norm of the solution ||x|| (a measure of the oscillations in the solution)

L-Curve Criterion
video shows reconstructions for different lambda, starting with the L-curve regularized solution

L-Curve Criterion
compute the L-curve by solving the inverse problem with many choices of lambda over a large range
the point of highest curvature on the resulting curve corresponds to the optimal regularization parameter
compute the curvature along the curve, find its maximum, and use the corresponding lambda to compute the optimal solution
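
An L-curve sweep sketch, assuming NumPy and reusing A, o, and tikhonov_solve from the sketches above; the range and sample count are illustrative choices, and the corner detection is only indicated.

```python
import numpy as np

lams = np.logspace(-8, 2, 50)             # large logarithmic range of lambda
residual_norms, solution_norms = [], []
for lam in lams:
    x = tikhonov_solve(A, o, lam)
    residual_norms.append(np.linalg.norm(A @ x - o))   # data fitting error
    solution_norms.append(np.linalg.norm(x))           # oscillation measure

# plot log(residual) vs. log(solution norm); the corner (maximum discrete
# curvature along the curve) marks the optimal lambda
```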

L-Curve Criterion Example 1D Deconvolution
(figure: L-curve with automatically selected optimal point)
the optimal regularization parameter is different for every problem

L-Curve Criterion Example 1D Deconvolution
regularized solution (red) with the optimal lambda

Solving Large Linear Systems
we can now regularize large ill-conditioned linear systems. How do we solve them?
Gaussian elimination: O(n^3); SVD: O(n^3) as well
direct solution methods are too time-consuming
Solution: approximate iterative solution

Iterative Solution Methods for Large Linear Systems
stationary iterative methods [Barret94]
examples: Jacobi, Gauss-Seidel, Successive Over-Relaxation (SOR)
these use a fixed-point iteration x_{k+1} = G x_k + c, where the matrix G and the vector c are constant throughout the iteration
generally slow convergence: don't use them for practical applications

Iterative Solution Methods for Large Linear Systems
non-stationary iterative methods [Barret94]
conjugate gradients (CG): for symmetric, positive definite (SPD) linear systems
conjugate gradients for the normal equations, CGLS or CGNR for short: avoids explicit computation of A^T A
CG-type methods are good because
- convergence is fast (depends on the condition number)
- regularization is built in: the number of iterations acts as the regularization parameter, and they behave similarly to the truncated SVD (see the sketch below)
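
A compact CGLS sketch, assuming NumPy. It runs conjugate gradients on the normal equations while only ever applying A and A^T, never forming A^T A; stopping after n_iter iterations provides the built-in regularization the slide mentions.

```python
import numpy as np

def cgls(A, o, n_iter):
    """Conjugate gradients for the normal equations A^T A x = A^T o.
    The iteration count n_iter acts as the regularization parameter."""
    x = np.zeros(A.shape[1])
    r = o - A @ x                     # residual in observation space
    s = A.T @ r                       # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```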

Iterative Solution Methods for Large Linear Systems
iterative solution methods require only matrix-vector multiplications
they are most efficient if the matrix is sparse (sparse means lots of zero entries)
back to our hypothetical 65536x65536 matrix A: a full double-precision matrix takes 65536^2 x 8 bytes, about 32 GB of memory
sparse matrices store only the non-zero matrix entries
Question: How do we get sparse matrices?

Iterative Solution Methods for Large Linear Systems
answer: use a discretization with basis functions that have local support, i.e. which are themselves zero over a wide range
for deconvolution the filter kernel should also be locally supported
discretized model: the convolved basis functions will be zero over a wide range of values, so each row of the system matrix has only a few non-zeros

Iterative Solution Methods for Large Linear Systems
(figure: sparse matrix structure for the 1D deconvolution problem)
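
A sketch of such a sparse system matrix, assuming SciPy: with a locally supported kernel (here an illustrative 3-tap filter), the 1D convolution matrix is banded and stores only a few diagonals.

```python
import scipy.sparse as sp

n = 65536                                   # the hypothetical large problem
kernel = [0.25, 0.5, 0.25]                  # small, locally supported kernel
A_sparse = sp.diags(kernel, offsets=[-1, 0, 1], shape=(n, n))

# roughly 3n non-zeros instead of n^2 entries for the full matrix;
# iterative solvers only need products like A_sparse @ v
print(A_sparse.nnz, "non-zeros vs.", n * n, "dense entries")
```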

Inverse Problems Wrap Up
- inverse problems are often ill-posed: if the solution is unstable, check the condition number
- if the problem is small, use TSVD (and Matlab)
- otherwise use CG if the problem is symmetric positive definite, CGLS otherwise
- if convergence is slow, try Tikhonov regularization: it's simple and improves the condition number and thus the convergence
- if the problem gets large, make sure you have a sparse linear system!
- if the system is sparse, avoid computing A^T A explicitly: it is usually dense

Example Coded Aperture Imaging

Coded Aperture - Motivation
building a camera without a lens: high-energy rays (astronomy, x-rays) are hard to bend
high-energy rays can be blocked -> pinhole? a pinhole loses too much light
so use multiple pinholes at the same time
but how to reconstruct the desired signal?

Coded Aperture Imaging (image sequence)

Reconstruction by Inversion
acquisition: o = M x; inversion: x = M^-1 o
the mask matrix M needs to be invertible
even if it is, there might be some numerical problems (condition number of M)

An Example
(figures: original image, single pinhole, larger aperture, multiple pinholes)
[http://www.paulcarlisle.net/old/codedaperture.html]

MURA Code
(figures: coding pattern and simulated coded aperture image)

Reconstruction by Inverse Filtering
decoding pattern D: compute the convolution between the captured image and D to obtain the reconstructed image

coding pattern A:    decoding pattern D:
0 0 0 0 0            + - - - -
1 1 0 0 1            + + - - +
1 0 1 1 0            + - + + -
1 0 1 1 0            + - + + -
1 1 0 0 1            + + - - +
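
A sketch of the standard quadratic-residue MURA construction, assuming NumPy and a prime side length p; this follows the usual published definition rather than code from the lecture. For p = 5 it reproduces exactly the A and D patterns above.

```python
import numpy as np

def mura(p):
    """MURA aperture pattern A (0/1) and decoding pattern D (+1/-1), p prime."""
    q = np.zeros(p, dtype=int)
    q[np.arange(1, p) ** 2 % p] = 1      # 1 where the index is a QR mod p
    A = np.zeros((p, p), dtype=int)
    A[1:, 0] = 1                         # first column open, first row closed
    for i in range(1, p):
        for j in range(1, p):
            A[i, j] = int(q[i] == q[j])  # open iff i, j are both QRs or both not
    D = 2 * A - 1                        # +1 where open, -1 where closed
    D[0, 0] = 1                          # standard exception at the origin
    return A, D

A5, D5 = mura(5)                         # matches the 5x5 patterns above
```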

Computational Photography Example Hardware-based Stabilization of Deconvolution (Flutter Shutter Camera) Slides by Ramesh Raskar

Input Image

Rectified Crop Deblurred Result

Traditional Camera Shutter is OPEN

Flutter Shutter Camera Shutter is OPEN and CLOSED

Comparison of Blurred Images

Implementation Completely Portable

Lab Setup

Traditional Camera: Box Filter
blurring == convolution; in log space, the box filter's spectrum is a sinc function with zeros!

Flutter Shutter: Coded Filter
preserves high frequencies!!!

Comparison

Inverse Filter: stable for the coded filter, unstable for the box filter

(comparison figures: short exposure, long exposure, coded exposure / flutter shutter, Matlab Lucy deconvolution, ground truth)

References
[Barret94] Barrett et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd edition, 1994
[Hansen98] Per Christian Hansen, Rank-Deficient and Discrete Ill-Posed Problems, 1998
[Raskar06] Ramesh Raskar, Amit Agrawal, and Jack Tumblin, Coded Exposure Photography: Motion Deblurring using Fluttered Shutter, ACM SIGGRAPH 2006

Exercise
- Implement the MURA pattern
- Convolve an image with the pattern (normalized, range [0,1])
- Add noise
- Deconvolve using the inverse mask
- Compute the RMS error to the original image
- Plot noise level vs. RMS error for different runs
(a possible skeleton follows below)
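
One possible skeleton for this exercise, assuming NumPy and matplotlib and reusing mura() from the sketch above; the mask size, the random test image, and the noise levels are illustrative choices. All convolutions are cyclic (FFT-based), so the "inverse mask" becomes a plain spectral division.

```python
import numpy as np
import matplotlib.pyplot as plt

p = 101                                      # prime mask size (illustrative)
A, _ = mura(p)
mask = A.astype(float)                       # pattern already in range [0, 1]
mask /= mask.sum()                           # normalize to preserve brightness
img = np.random.rand(p, p)                   # stand-in for a test image

M_hat = np.fft.fft2(mask)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * M_hat))

noise_levels = np.logspace(-4, -1, 10)
rms = []
for sigma in noise_levels:
    noisy = blurred + sigma * np.random.randn(p, p)
    # inverse-mask deconvolution; assumption: the MURA spectrum is nearly
    # flat away from DC, so this division stays comparatively stable
    recon = np.real(np.fft.ifft2(np.fft.fft2(noisy) / M_hat))
    rms.append(np.sqrt(np.mean((recon - img) ** 2)))

plt.loglog(noise_levels, rms)
plt.xlabel("noise level"); plt.ylabel("RMS error")
plt.show()
```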