# Computational Optical Imaging - Optique Numerique. -- Deconvolution --


1 Computational Optical Imaging - Optique Numerique -- Deconvolution -- Winter 2014 Ivo Ihrke

2 Deconvolution Ivo Ihrke

3 Outline
- Deconvolution theory
- Example: 1D deconvolution
  - Fourier method
  - Algebraic method: discretization, matrix properties, regularization, solution methods
- Deconvolution examples

4 Applications - Astronomy BEFORE AFTER Images courtesy of Robert Vanderbei

5 Applications - Microscopy Images courtesy Meyer Instruments

6 Inverse Problem - Definition forward problem: given a mathematical model M and its parameters m, compute (predict) the observations o; inverse problem: given observations o and the mathematical model M, compute the model's parameters m

7 Inverse Problems Example Deconvolution forward problem: convolution (example: blur filter); given an image m and a filter kernel k, compute the blurred image o = k * m

8 Inverse Problems Example Deconvolution inverse problem: deconvolution (example: blur filter); given a blurred image o and a filter kernel k, compute the sharp image m; we need to invert o = k * m + n, where n is noise

9 Fourier Solution

10 Deconvolution - Theory deconvolution in Fourier space; convolution theorem (F is the Fourier transform): F(k * m) = F(k) . F(m); deconvolution: m = F^-1( F(o) / F(k) ); problems: division by (near-)zero, Gibbs phenomenon (ringing artifacts)

11 A One-Dimensional Example Deconvolution Spectral most common case: k is a low-pass filter, so the inverse filter 1/F(k) is high-pass and amplifies noise and numerical errors

12 A One-Dimensional Example Deconvolution Spectral the reconstruction is noisy even if the data is perfect! Reason: numerical errors in the representation of the function

13 A One-Dimensional Example Deconvolution Spectral spectral view of signal, filter and inverse filter

14 A One-Dimensional Example Deconvolution Spectral solution: restrict the frequency response of the high-pass inverse filter (clamping)
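A sketch of the clamping fix; the cutoff threshold and all signal parameters are assumed values for illustration:

```python
import numpy as np

# Clamped inverse filter sketch: invert only the frequency bins where
# |F(k)| is above a threshold, zero the rest. Threshold is an assumption.
n = 256
x = np.arange(n)
m = (np.abs(x - n // 2) < 20).astype(float)
k = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
k /= k.sum()
K = np.fft.fft(np.fft.ifftshift(k))

rng = np.random.default_rng(0)
o = np.real(np.fft.ifft(np.fft.fft(m) * K)) + 1e-6 * rng.standard_normal(n)

inv = np.zeros(n, dtype=complex)
keep = np.abs(K) > 1e-3                 # only well-conditioned frequencies
inv[keep] = 1.0 / K[keep]
m_rec = np.real(np.fft.ifft(np.fft.fft(o) * inv))
rel_err = np.linalg.norm(m_rec - m) / np.linalg.norm(m)   # modest error
```

The discarded high frequencies cause some ringing (Gibbs), but the result is usable, unlike the unclamped inverse.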

15 A One-Dimensional Example - Deconvolution Spectral reconstruction with clamped inverse filter

16 A One-Dimensional Example Deconvolution Spectral spectral view of signal, filter and inverse filter

17 A One-Dimensional Example Deconvolution Spectral automatic per-frequency tuning: Wiener deconvolution - an alternative definition of the inverse kernel, F(k)* / ( |F(k)|^2 + 1/SNR ) - least-squares optimal - the per-frequency SNR must be known
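A Wiener deconvolution sketch; a single scalar SNR stands in for the true per-frequency SNR here, which is an assumption (in practice the SNR must be estimated per frequency):

```python
import numpy as np

# Wiener deconvolution sketch; signal, kernel, noise level and the scalar
# SNR approximation are illustrative assumptions.
n = 256
x = np.arange(n)
m = (np.abs(x - n // 2) < 20).astype(float)
k = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
k /= k.sum()
K = np.fft.fft(np.fft.ifftshift(k))

rng = np.random.default_rng(1)
sigma = 1e-3
o = np.real(np.fft.ifft(np.fft.fft(m) * K)) + sigma * rng.standard_normal(n)

snr = np.mean(m ** 2) / sigma ** 2               # signal power / noise power
W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)    # Wiener inverse kernel
m_rec = np.real(np.fft.ifft(np.fft.fft(o) * W))
rel_err = np.linalg.norm(m_rec - m) / np.linalg.norm(m)
```

Unlike hard clamping, the Wiener kernel rolls off smoothly: each frequency is attenuated exactly as much as its SNR warrants.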

18 Algebraic Solution

19 A One-Dimensional Example Deconvolution Algebraic alternative: algebraic reconstruction; discretize the convolution by writing the unknown as a linear combination of basis functions

20 A One-Dimensional Example Deconvolution Algebraic discretization: the observations are linear combinations of the convolved basis functions, giving a linear system M m = o with unknown coefficient vector m; often over-determined, i.e. more observations o than degrees of freedom (# basis functions)

22 A One-Dimensional Example Deconvolution Algebraic normal equations M^T M m = M^T o; solve to obtain the solution in a least-squares sense; applied to deconvolution, the solution is completely broken!
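A sketch of this breakdown; the circulant convolution matrix, pixel basis, and noise level are illustrative assumptions:

```python
import numpy as np

# Algebraic sketch: build a circulant Gaussian-blur matrix M for a pixel
# basis and solve the least-squares problem directly. For this smooth
# kernel the system is so ill-conditioned that the result is garbage.
n = 128
x = np.arange(n)
kc = np.fft.ifftshift(np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2))
kc /= kc.sum()
M = np.array([[kc[(i - j) % n] for j in range(n)] for i in range(n)])

m_true = (np.abs(x - n // 2) < 15).astype(float)
rng = np.random.default_rng(0)
o = M @ m_true + 1e-4 * rng.standard_normal(n)

m_ls, *_ = np.linalg.lstsq(M, o, rcond=None)   # unregularized least squares
rel_err = np.linalg.norm(m_ls - m_true) / np.linalg.norm(m_true)  # huge
```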

23 A One-Dimensional Example Deconvolution Algebraic Why? analyze the distribution of eigenvalues; we check the singular values of M instead: this is OK, since M^T M is SPD (symmetric, positive semi-definite, i.e. it has non-negative eigenvalues), and the singular values of M are the square roots of the eigenvalues of M^T M; the matrix turns out to be numerically under-determined

24 A One-Dimensional Example Deconvolution Algebraic the matrix has a very wide range of singular values! more than half of the singular values are smaller than machine epsilon (~2.2e-16) for double precision (log plot!)
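The singular-value spread can be checked directly; the matrix size and kernel width below are assumptions, so the exact fraction of tiny singular values will differ from the slides:

```python
import numpy as np

# Singular-value sketch for a circulant Gaussian-blur matrix (assumed
# sizes): the singular values span many orders of magnitude, and a large
# fraction sit far below any usable precision.
n = 128
x = np.arange(n)
kc = np.fft.ifftshift(np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2))
kc /= kc.sum()
M = np.array([[kc[(i - j) % n] for j in range(n)] for i in range(n)])

s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
n_tiny = np.sum(s < 1e-10)               # numerically dead directions
spread = s[0] / max(s[-1], 1e-300)       # dynamic range of the spectrum
```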

25 A One-Dimensional Example Deconvolution Algebraic Why is this bad? Singular Value Decomposition: M = U D V^T, where U, V are orthonormal and D is diagonal; the singular values are the diagonal elements of D; inverse of M: M^-1 = V D^-1 U^T, i.e. inversion divides by the singular values

26 A One-Dimensional Example Deconvolution Algebraic computing the model parameters from the observations: m = V D^-1 U^T o; again: amplification of noise and potential division by zero (log plot!)

27 A One-Dimensional Example Deconvolution Algebraic inverse problems are often ill-conditioned (they have a numerical null space); inversion causes amplification of noise in the numerical null space

28 Well-Posed and Ill-Posed Problems Definition [Hadamard1902] a problem is well-posed if 1. a solution exists 2. the solution is unique 3. the solution depends continuously on the data

29 Well-Posed and Ill-Posed Problems Definition [Hadamard1902] a problem is ill-posed if it is not well-posed; most often condition (3) is violated: if the model has a (numerical) null space, parameter choices in that null space influence the data only very slightly, if at all; noise takes over and is amplified when inverting the model

30 Condition Number measure of ill-conditioning: the condition number, a measure of stability for numerical inversion; it is the ratio between the largest and the smallest singular value; the smaller the condition number, the fewer problems when inverting the linear system; a condition number close to one implies a near-orthogonal matrix

31 Truncated Singular Value Decomposition solution to the stability problems: avoid dividing by values close to zero; Truncated Singular Value Decomposition (TSVD): invert only the singular values larger than a threshold alpha and set the rest to zero; alpha is called the regularization parameter
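A TSVD sketch; the threshold alpha and the test problem are assumed values:

```python
import numpy as np

# TSVD sketch: keep only singular values above alpha when inverting.
def tsvd_solve(M, o, alpha):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # invert kept singular values, zero out the truncated ones
    inv_s = np.where(s > alpha, 1.0 / np.maximum(s, alpha), 0.0)
    return Vt.T @ (inv_s * (U.T @ o))

# assumed 1D deblurring test problem (circulant Gaussian blur + noise)
n = 128
x = np.arange(n)
kc = np.fft.ifftshift(np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2))
kc /= kc.sum()
M = np.array([[kc[(i - j) % n] for j in range(n)] for i in range(n)])
m_true = (np.abs(x - n // 2) < 15).astype(float)
rng = np.random.default_rng(0)
o = M @ m_true + 1e-4 * rng.standard_normal(n)

m_tsvd = tsvd_solve(M, o, alpha=1e-3)
rel_err = np.linalg.norm(m_tsvd - m_true) / np.linalg.norm(m_true)
```

Where the unregularized least-squares solution is off by orders of magnitude, the truncated inverse recovers the signal to within a small relative error.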

32 Minimum Norm Solution Let N(A) be the null space of A; among all least-squares solutions, the one orthogonal to N(A) is the minimum norm solution
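A tiny illustration with a deliberately under-determined toy system (my own example, not from the slides): x1 + x2 = 2 has a whole line of solutions, and an SVD-based solver returns the one orthogonal to the null space:

```python
import numpy as np

# Minimum norm sketch: lstsq (SVD-based) returns the minimum norm solution
# of an under-determined system.
A = np.array([[1.0, 1.0]])     # null space spanned by (1, -1)
b = np.array([2.0])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
# x = [1, 1]; any other solution [1 + t, 1 - t] has a larger norm
```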

33 Regularization countering the effect of ill-conditioned problems is called regularization; an ill-conditioned problem behaves like a singular (i.e. under-constrained) system: a family of solutions exists, so we impose additional knowledge to pick a favorable solution; TSVD results in the minimum norm solution

34 Example 1D Deconvolution back to our example: apply TSVD; the solution is much smoother than the Fourier deconvolution (figures: unregularized solution vs. TSVD regularized solution)

35 Large Scale Problems consider 2D deconvolution: a 512x512 image with 256x256 basis functions; the least squares problem results in a matrix that is 65536x65536! even worse in 3D (millions of unknowns); problem: the SVD is too expensive, on an Intel Xeon 2-core 2GHz (introduced 2010) it already takes a couple of hours for far smaller systems, making it impractical at this scale. Question: how to compute regularized solutions for large scale systems?

36 Explicit Regularization Answer: modify the original problem to include additional optimization goals (e.g. small-norm solutions); minimize the modified quadratic form ||M m - o||^2 + lambda ||m||^2; regularized normal equations: (M^T M + lambda I) m = M^T o

37 Modified Normal Equations include a data term, a smoothness term, and a blending parameter: the data-fitting term, prior information (popular choice: smoothness), and the blending (regularization) parameter

38 Tikhonov Regularization setting the prior term to the identity, we have a quadratic optimization problem with a data-fitting term and a minimum-norm term; a large lambda will result in a small-norm solution, a small lambda fits the data well; find a good trade-off via the regularization parameter
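A Tikhonov sketch on an assumed 1D deblurring problem; the lambda values are hand-picked for illustration (in practice they must be tuned, e.g. with the L-curve):

```python
import numpy as np

# Tikhonov sketch: solve (M^T M + lam*I) m = M^T o for several lambdas.
# The test problem and lambda values are illustrative assumptions.
n = 128
x = np.arange(n)
kc = np.fft.ifftshift(np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2))
kc /= kc.sum()
M = np.array([[kc[(i - j) % n] for j in range(n)] for i in range(n)])
m_true = (np.abs(x - n // 2) < 15).astype(float)
rng = np.random.default_rng(0)
o = M @ m_true + 1e-4 * rng.standard_normal(n)

def tikhonov(M, o, lam):
    n = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(n), M.T @ o)

m_small = tikhonov(M, o, 1e-7)   # small lambda: fits data, oscillates
m_large = tikhonov(M, o, 1e2)    # large lambda: over-damped, tiny norm
m_mid = tikhonov(M, o, 1e-5)     # hand-picked trade-off
rel_err = np.linalg.norm(m_mid - m_true) / np.linalg.norm(m_true)
```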

39 Tikhonov Regularization - Example reconstructions for different choices of lambda: small lambda, many oscillations; large lambda, smooth solution (constant in the limit)

40 Tikhonov Regularization - Example

41 L-Curve criterion [Hansen98] we need an automatic way of determining lambda: we want a solution with small oscillations, but also a good data fit; log-log plot of the norm of the residual (data-fitting error) vs. the norm of the solution (a measure of the oscillations in the solution)

42 L-Curve Criterion video shows reconstructions for different lambda, starting with the L-curve regularized solution

43 L-Curve Criterion compute the L-curve by solving the inverse problem for choices of lambda over a large range; the point of highest curvature on the resulting curve corresponds to the optimal regularization parameter; compute the curvature, find its maximum, and use the corresponding lambda to compute the optimal solution
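The sweep-and-corner procedure can be sketched as follows; the lambda range, discrete curvature estimate, and test problem are all assumptions:

```python
import numpy as np

# L-curve sketch: sweep lambda, record log residual norm vs. log solution
# norm, and take the point of maximum discrete curvature as the corner.
n = 128
xs = np.arange(n)
kc = np.fft.ifftshift(np.exp(-0.5 * ((xs - n // 2) / 2.0) ** 2))
kc /= kc.sum()
M = np.array([[kc[(i - j) % n] for j in range(n)] for i in range(n)])
m_true = (np.abs(xs - n // 2) < 15).astype(float)
rng = np.random.default_rng(0)
o = M @ m_true + 1e-4 * rng.standard_normal(n)

lambdas = np.logspace(-10, 2, 60)          # assumed sweep range
MtM, Mto = M.T @ M, M.T @ o
res, sol = [], []
for lam in lambdas:
    m_lam = np.linalg.solve(MtM + lam * np.eye(n), Mto)
    res.append(np.log(np.linalg.norm(M @ m_lam - o)))   # data-fit error
    sol.append(np.log(np.linalg.norm(m_lam)))           # solution norm
res, sol = np.array(res), np.array(sol)

# discrete curvature of the parametric curve (res, sol)(lambda)
dr, ds = np.gradient(res), np.gradient(sol)
d2r, d2s = np.gradient(dr), np.gradient(ds)
kappa = (dr * d2s - ds * d2r) / np.maximum((dr ** 2 + ds ** 2) ** 1.5, 1e-300)
lam_opt = lambdas[np.argmax(kappa)]        # corner of the L-curve
```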

44 L-Curve Criterion Example 1D Deconvolution L-curve with the automatically selected optimal point; the optimal regularization parameter is different for every problem

45 L-Curve Criterion Example 1D Deconvolution regularized solution (red) with optimal lambda

46 Solving Large Linear Systems we can now regularize large ill-conditioned linear systems; how do we solve them? Gaussian elimination: O(n^3); SVD: O(n^3); direct solution methods are too time-consuming; solution: approximate iterative solution

47 Iterative Solution Methods for Large Linear Systems stationary iterative methods [Barret94]; examples: Jacobi, Gauss-Seidel, Successive Over-Relaxation (SOR); they use a fixed-point iteration in which the matrix G and vector c are constant throughout the iteration; generally slow convergence, don't use them for practical applications

48 Iterative Solution Methods for Large Linear Systems non-stationary iterative methods [Barret94]: conjugate gradients (CG) for symmetric, positive definite (SPD) linear systems; conjugate gradients for the normal equations (CGLS or CGNR for short), which avoid explicit computation of M^T M; CG-type methods are good because of fast convergence (depends on the condition number) and built-in regularization: the number of iterations acts as the regularization parameter, behaving similarly to truncated SVD
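A CGLS sketch; the algorithm only needs products with M and M^T, never M^T M itself. The test problem (circulant Gaussian blur) and the iteration count are illustrative assumptions:

```python
import numpy as np

# CGLS sketch: conjugate gradients on the normal equations, expressed
# purely through matrix-vector products. Few iterations = strong
# regularization, similar to TSVD.
def cgls(matvec, rmatvec, o, n_iters):
    x = np.zeros_like(o)
    r = o.copy()                 # residual o - M x (x starts at 0)
    s = rmatvec(r)               # M^T r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iters):
        q = matvec(p)            # M p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = rmatvec(r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

n = 128
xs = np.arange(n)
kc = np.fft.ifftshift(np.exp(-0.5 * ((xs - n // 2) / 2.0) ** 2))
kc /= kc.sum()
M = np.array([[kc[(i - j) % n] for j in range(n)] for i in range(n)])
m_true = (np.abs(xs - n // 2) < 15).astype(float)
rng = np.random.default_rng(0)
o = M @ m_true + 1e-4 * rng.standard_normal(n)

m_1 = cgls(lambda v: M @ v, lambda v: M.T @ v, o, n_iters=1)
m_cgls = cgls(lambda v: M @ v, lambda v: M.T @ v, o, n_iters=15)
rel_err = np.linalg.norm(m_cgls - m_true) / np.linalg.norm(m_true)
```

Running more iterations keeps reducing the residual, but eventually lets the noise back in (semiconvergence), which is exactly why the iteration count acts as a regularizer.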

49 Iterative Solution Methods for Large Linear Systems iterative solution methods require only matrix-vector multiplications; most efficient if the matrix is sparse (a sparse matrix means lots of zero entries); back to our hypothetical 65536x65536 matrix A: memory consumption for the full matrix in double precision is 65536^2 * 8 bytes = 32 GiB; sparse matrices store only the non-zero matrix entries. Question: how do we get sparse matrices?

50 Iterative Solution Methods for Large Linear Systems answer: use a discretization with basis functions that have local support, i.e. which are themselves zero over a wide range; for deconvolution the filter kernel should also be locally supported; the entries of the discretized model will then be zero over a wide range of values

52 Iterative Solution Methods for Large Linear Systems sparse matrix structure for 1D deconvolution problem
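A sketch of the resulting sparsity; the kernel width and system size are assumed values:

```python
import numpy as np

# Local support => banded matrix sketch: with a compact 5-tap kernel (an
# assumed width) and a pixel basis, each observation touches only w
# unknowns. A full 65536 x 65536 double matrix would need
# 65536^2 * 8 bytes = 32 GiB; the banded one stores only O(n * w) entries.
n = 512
w = 5
k = np.ones(w) / w                          # locally supported kernel
M = np.zeros((n, n))
for i in range(n):
    for t in range(w):
        M[i, (i + t - w // 2) % n] = k[t]   # banded (circulant) structure

nnz = np.count_nonzero(M)                   # exactly n * w non-zeros
density = nnz / M.size                      # fraction of non-zero entries
```

In practice one would hold only the non-zero diagonals (e.g. in a compressed sparse format) instead of the dense array used here for clarity.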

53 Inverse Problems Wrap Up
- inverse problems are often ill-posed; if the solution is unstable, check the condition number
- if the problem is small, use TSVD (and Matlab)
- otherwise use CG if the problem is symmetric positive definite, CGLS if not
- if convergence is slow, try Tikhonov regularization: it is simple and improves the condition number and thus convergence
- if the problem gets large, make sure you have a sparse linear system!
- if the system is sparse, avoid computing M^T M explicitly; it is usually dense

54 Example Coded Aperture Imaging

55 Coded Aperture Motivation building a camera without a lens: high-energy rays (astronomy, x-rays) are hard to bend, but they can be blocked -> a pinhole? a pinhole has too much light loss -> use multiple pinholes at the same time; but then how do we reconstruct the desired signal?

56 Coded Aperture Imaging


63 Reconstruction by Inversion acquisition: o = M m; inversion: m = M^-1 o; the mask matrix M needs to be invertible, and even if it is, there may be numerical problems (condition number of M)

64 An Example original image, single pinhole, larger aperture, multiple pinholes

65 MURA Code coding pattern (Modified Uniformly Redundant Array); simulated coded aperture image

66 Reconstruction by Inverse Filtering decoding pattern D: compute the convolution between the captured image and D to obtain the reconstructed image

67 Computational Photography Example Hardware-based Stabilization of Deconvolution (Flutter Shutter Camera) Slides by Ramesh Raskar

68 Input Image

69 Rectified Crop Deblurred Result


73 Traditional Camera Shutter is OPEN

74 Flutter Shutter Camera Shutter is OPEN and CLOSED

75 Comparison of Blurred Images

76 Implementation Completely Portable

77 Lab Setup

78 Traditional Camera: Box Filter blurring == convolution; in log space the box filter's spectrum is a sinc function, and it has zeros!
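The spectral argument can be checked numerically; the random binary code below is only an illustration, NOT the optimized 52-chop code of [Raskar06]:

```python
import numpy as np

# Box vs. coded shutter spectra sketch: the box (always-open) shutter has
# a sinc-like spectrum with (numerical) zeros, while a fluttered binary
# code of the same length keeps its spectrum away from zero.
n = 52                                   # number of shutter time slices
box = np.ones(n)                         # traditional shutter: always open
rng = np.random.default_rng(3)
coded = (rng.random(n) < 0.5).astype(float)   # random open/closed pattern

N = 256                                  # zero-padded spectrum resolution
B = np.abs(np.fft.rfft(box, N))          # box spectrum: has zeros
C = np.abs(np.fft.rfft(coded, N))        # coded spectrum: bounded away from 0
```

Frequencies where the spectrum vanishes are irrecoverably lost, so the box shutter's inverse filter is unstable while the coded shutter's stays usable.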

79 Flutter Shutter: Coded Filter preserves high frequencies!!!

80 Comparison

81 Inverse Filter stable for the coded exposure, unstable for the traditional box exposure

82 Comparison: short exposure, long exposure, coded exposure (Flutter Shutter), Matlab Lucy, ground truth

83 References
[Barret94] R. Barrett et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd edition, SIAM, 1994
[Hansen98] Per Christian Hansen, Rank-Deficient and Discrete Ill-Posed Problems, SIAM, 1998
[Raskar06] Ramesh Raskar, Amit Agrawal, and Jack Tumblin, Coded Exposure Photography: Motion Deblurring using Fluttered Shutter, ACM SIGGRAPH 2006

84 Exercise
- Implement the MURA pattern
- Convolve an image (normalized to the range [0,1]) with the pattern
- Add noise
- Deconvolve using the inverse mask
- Compute the RMS error to the original image
- Plot noise level vs. RMS error for different runs


### Sparse recovery and compressed sensing in inverse problems

Gerd Teschke (7. Juni 2010) 1/68 Sparse recovery and compressed sensing in inverse problems Gerd Teschke (joint work with Evelyn Herrholz) Institute for Computational Mathematics in Science and Technology

### Linear Systems COS 323

Linear Systems COS 323 Last time: Constrained Optimization Linear constrained optimization Linear programming (LP) Simplex method for LP General optimization With equality constraints: Lagrange multipliers

### The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

### Basics Inversion and related concepts Random vectors Matrix calculus. Matrix algebra. Patrick Breheny. January 20

Matrix algebra January 20 Introduction Basics The mathematics of multiple regression revolves around ordering and keeping track of large arrays of numbers and solving systems of equations The mathematical

### Lecture 2: 2D Fourier transforms and applications

Lecture 2: 2D Fourier transforms and applications B14 Image Analysis Michaelmas 2014 A. Zisserman Fourier transforms and spatial frequencies in 2D Definition and meaning The Convolution Theorem Applications

### Problem definition: optical flow

Motion Estimation http://www.sandlotscience.com/distortions/breathing_objects.htm http://www.sandlotscience.com/ambiguous/barberpole.htm Why estimate motion? Lots of uses Track object behavior Correct

### Least-Squares Intersection of Lines

Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a

### Compressive Acquisition of Sparse Deflectometric Maps

Compressive Acquisition of Sparse Deflectometric Maps Prasad Sudhakar*, Laurent Jacques*, Adriana Gonzalez Gonzalez* Xavier Dubois +, Philippe Antoine +, Luc Joannes + *: Louvain University (UCL), Louvain-la-Neuve,

### Orthogonal Diagonalization of Symmetric Matrices

MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

### Notes on Symmetric Matrices

CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

### 1 Introduction. 2 Matrices: Definition. Matrix Algebra. Hervé Abdi Lynne J. Williams

In Neil Salkind (Ed.), Encyclopedia of Research Design. Thousand Oaks, CA: Sage. 00 Matrix Algebra Hervé Abdi Lynne J. Williams Introduction Sylvester developed the modern concept of matrices in the 9th

### Lecture No. # 02 Prologue-Part 2

Advanced Matrix Theory and Linear Algebra for Engineers Prof. R.Vittal Rao Center for Electronics Design and Technology Indian Institute of Science, Bangalore Lecture No. # 02 Prologue-Part 2 In the last

### CSE 494 CSE/CBS 598 (Fall 2007): Numerical Linear Algebra for Data Exploration Clustering Instructor: Jieping Ye

CSE 494 CSE/CBS 598 Fall 2007: Numerical Linear Algebra for Data Exploration Clustering Instructor: Jieping Ye 1 Introduction One important method for data compression and classification is to organize

### Automatic parameter setting for Arnoldi-Tikhonov methods

Automatic parameter setting for Arnoldi-Tikhonov methods S. Gazzola, P. Novati Department of Mathematics University of Padova, Italy March 16, 2012 Abstract In the framework of iterative regularization

### Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Chapter 3 Linear Least Squares Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

### Jennifer Sloan May 23, Linear Inverse Problems

Jennifer Sloan May 23, 2006 Linear Inverse Problems OUR PLAN Review Pertinet Linear Algebra Topics Forward Problem for Linear Systems Inverse Problem for Linear Systems Discuss Well-posedness and Overdetermined

### APPM4720/5720: Fast algorithms for big data. Gunnar Martinsson The University of Colorado at Boulder

APPM4720/5720: Fast algorithms for big data Gunnar Martinsson The University of Colorado at Boulder Course objectives: The purpose of this course is to teach efficient algorithms for processing very large