Computational Optical Imaging - Optique Numerique -- Deconvolution --
1 Computational Optical Imaging - Optique Numerique -- Deconvolution -- Winter 2014 Ivo Ihrke
2 Deconvolution Ivo Ihrke
3 Outline: deconvolution theory; example: 1D deconvolution (Fourier method; algebraic method: discretization, matrix properties, regularization, solution methods); deconvolution examples
4 Applications - Astronomy: before/after. Images courtesy of Robert Vanderbei
5 Applications - Microscopy Images courtesy Meyer Instruments
6 Inverse Problem - Definition: forward problem: given a mathematical model M and its parameters m, compute (predict) the observations o. Inverse problem: given observations o and a mathematical model M, compute the model's parameters m.
7 Inverse Problems Example Deconvolution: forward problem: convolution (example: blur filter). Given an image m and a filter kernel k, compute the blurred image o = k * m.
8 Inverse Problems Example Deconvolution: inverse problem: deconvolution (example: blur filter). Given a blurred image o and a filter kernel k, compute the sharp image m. We need to invert o = k * m + n, where n is noise.
9 Fourier Solution
10 Deconvolution - Theory: deconvolution in Fourier space. Convolution theorem (F is the Fourier transform): F{k * m} = F{k} F{m}. Deconvolution: m = F^-1{ F{o} / F{k} }. Problems: division by zero, Gibbs phenomenon (ringing artifacts).
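A minimal sketch of this naive Fourier deconvolution in Python/NumPy; the box signal, the Gaussian kernel width and the noise level are illustrative assumptions, not the slides' exact example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
m = np.zeros(n)
m[80:150] = 1.0                                  # sharp "true" signal (a box)

idx = np.arange(n)
k = np.exp(-0.5 * ((idx - n // 2) / 4.0) ** 2)   # Gaussian blur kernel
k /= k.sum()

K = np.fft.fft(np.fft.ifftshift(k))              # transfer function F{k}
o = np.real(np.fft.ifft(np.fft.fft(m) * K))      # forward model: o = k * m
o += 1e-6 * rng.standard_normal(n)               # a tiny amount of noise

m_rec = np.real(np.fft.ifft(np.fft.fft(o) / K))  # naive inverse filter F{o}/F{k}
# At high frequencies |F{k}| is many orders of magnitude below the noise
# floor, so the division amplifies the noise and m_rec is useless.
```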
11 A One-Dimensional Example Deconvolution Spectral: most common case: k is a low-pass filter; the inverse filter 1/F{k} is then high-pass and amplifies noise and numerical errors.
12 A One-Dimensional Example Deconvolution Spectral: the reconstruction is noisy even if the data is perfect! Reason: numerical errors in the representation of the function.
13 A One-Dimensional Example Deconvolution Spectral spectral view of signal, filter and inverse filter
14 A One-Dimensional Example Deconvolution Spectral solution: restrict frequency response of high pass filter (clamping)
15 A One-Dimensional Example - Deconvolution Spectral reconstruction with clamped inverse filter
16 A One-Dimensional Example Deconvolution Spectral spectral view of signal, filter and inverse filter
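A minimal sketch of such clamping, continuing the example above; zeroing the offending frequencies is one common variant, and the threshold c is a hypothetical choice:

```python
import numpy as np

def clamped_inverse(K, c=100.0):
    """Inverse filter 1/K, set to zero wherever |K| < 1/c so the
    magnitude of the high-pass response is bounded by c."""
    Kinv = np.zeros_like(K)
    keep = np.abs(K) > 1.0 / c
    Kinv[keep] = 1.0 / K[keep]
    return Kinv

# Continuing the previous sketch:
# m_rec = np.real(np.fft.ifft(np.fft.fft(o) * clamped_inverse(K)))
```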
17 A One-Dimensional Example Deconvolution Spectral Automatic per-frequency tuning: Wiener Deconvolution - Alternative definition of inverse kernel - Least squares optimal - Per-frequency SNR must be known
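A minimal sketch of the Wiener inverse kernel under the simplifying assumption of a constant SNR (properly, the per-frequency SNR must be known, as the slide notes):

```python
import numpy as np

def wiener_deconvolve(o, K, snr=1e4):
    """Wiener deconvolution: least-squares optimal inverse kernel
    W = conj(K) / (|K|^2 + 1/SNR), here with a constant SNR."""
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(o) * W))
```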
18 Algebraic Solution
19 A One-Dimensional Example - Deconvolution Algebraic: alternative: algebraic reconstruction. Convolution discretization: represent the unknown signal as a linear combination of basis functions, m(x) = sum_j m_j b_j(x).
20/21 A One-Dimensional Example Deconvolution Algebraic: discretization: the observations are linear combinations of the convolved basis functions, o_i = sum_j m_j (k * b_j)(x_i), i.e. a linear system o = M m with unknown coefficient vector m; often over-determined, i.e. more observations o than degrees of freedom (# basis functions).
22 A One-Dimensional Example Deconvolution Algebraic: normal equations M^T M m = M^T o; solve to obtain the solution in a least-squares sense. Applied to deconvolution: the solution is completely broken!
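A minimal sketch of this algebraic route with pixel basis functions and circular boundary handling (illustrative assumptions), so the system matrix is circulant and scipy can build it directly; the normal-equations solve then fails exactly as the slide states:

```python
import numpy as np
from scipy.linalg import circulant

n = 256
idx = np.arange(n)
k = np.exp(-0.5 * ((idx - n // 2) / 4.0) ** 2)   # Gaussian kernel (illustrative)
k /= k.sum()
M = circulant(np.fft.ifftshift(k))               # M @ m = circular convolution k * m

rng = np.random.default_rng(0)
m_true = np.zeros(n)
m_true[80:150] = 1.0
o = M @ m_true + 1e-8 * rng.standard_normal(n)   # observations with tiny noise

m_ls = np.linalg.solve(M.T @ M, M.T @ o)         # normal equations M^T M m = M^T o
print(np.linalg.norm(m_ls - m_true))             # enormous error: completely broken
```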
23 A One-Dimensional Example Deconvolution Algebraic: why? Analyze the distribution of eigenvalues. Remember: we will check the singular values of M instead. This is OK since M^T M is SPD (symmetric, positive semi-definite) and has non-negative eigenvalues; the singular values of M are the square roots of the eigenvalues of M^T M. The matrix is (numerically) underdetermined.
24 A One-Dimensional Example Deconvolution Algebraic: the matrix has a very wide range of singular values! More than half of the singular values are smaller than machine epsilon (about 2.2e-16 for double precision). Log plot!
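A minimal sketch of this singular-value check on the matrix from the previous sketch (the exact fraction below machine epsilon depends on the assumed kernel width):

```python
import numpy as np
from scipy.linalg import circulant

n = 256
idx = np.arange(n)
k = np.exp(-0.5 * ((idx - n // 2) / 4.0) ** 2)
k /= k.sum()
M = circulant(np.fft.ifftshift(k))

s = np.linalg.svd(M, compute_uv=False)           # singular values, descending
print("largest, smallest:", s[0], s[-1])
print("below machine eps:", np.sum(s < np.finfo(float).eps), "of", n)
```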
25 A One-Dimensional Example Deconvolution Algebraic: why is this bad? Singular Value Decomposition: M = U D V^T, where U, V are orthonormal and D is diagonal; the singular values are the diagonal elements of D. Inverse of M: M^-1 = V D^-1 U^T, i.e. inversion divides by the singular values.
26 A One-Dimensional Example Deconvolution Algebraic: computing the model parameters from the observations: m = V D^-1 U^T o. Again: amplification of noise, potential division by zero. Log plot!
27 A One-Dimensional Example Deconvolution Algebraic: inverse problems are often ill-conditioned (they have a numerical null-space); inversion causes amplification of the noise in the numerical null-space.
28 Well-Posed and Ill-Posed Problems Definition [Hadamard1902]: a problem is well-posed if 1. a solution exists, 2. the solution is unique, 3. the solution depends continuously on the data.
29 Well-Posed and Ill-Posed Problems Definition [Hadamard1902]: a problem is ill-posed if it is not well-posed; most often condition (3) is violated. If the model has a (numerical) null-space, a parameter choice in the null-space influences the data very slightly, if at all; noise takes over and is amplified when inverting the model.
30 Condition Number: measure of ill-conditionedness and of stability under numerical inversion: the condition number, the ratio between the largest and smallest singular value, kappa = sigma_max / sigma_min. A smaller condition number means fewer problems when inverting the linear system; a condition number close to one implies a near-orthogonal matrix.
31 Truncated Singular Value Decomposition: solution to the stability problems: avoid dividing by values close to zero. Truncated Singular Value Decomposition (TSVD): set 1/sigma_i to zero for all sigma_i < alpha; alpha is called the regularization parameter.
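A minimal sketch of TSVD as a reusable function; alpha is the regularization parameter from the slide:

```python
import numpy as np

def tsvd_solve(M, o, alpha):
    """Truncated SVD: invert only the singular values above alpha,
    set the remaining 1/sigma_i to zero (minimum-norm solution)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_inv = np.zeros_like(s)
    keep = s > alpha
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ o))
```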
32 Minimum Norm Solution: let N(A) be the null-space of A; if m solves the least-squares problem, so does m + n for any n in N(A). The solution without a null-space component (the pseudo-inverse solution, which TSVD approximates) is the minimum-norm solution.
33 Regularization: countering the effect of ill-conditioned problems is called regularization. An ill-conditioned problem behaves like a singular (i.e. under-constrained) system: a family of solutions exists; impose additional knowledge to pick a favorable solution. TSVD results in the minimum-norm solution.
34 Example 1D Deconvolution: back to our example; applying TSVD, the solution is much smoother than the Fourier deconvolution. (Figures: unregularized solution vs. TSVD-regularized solution.)
35 Large Scale Problems: consider 2D deconvolution: a 512x512 image with 256x256 basis functions; the least-squares problem results in a matrix that is 65536x65536! Even worse in 3D (millions of unknowns). Problem: the SVD is an O(n^3) computation and is impractical at this scale; it already takes a couple of hours for much smaller systems (Intel Xeon 2-core 2 GHz, introduced 2010). Question: how to compute regularized solutions for large-scale systems?
36 Explicit Regularization: answer: modify the original problem to include additional optimization goals (e.g. small-norm solutions). Minimize the modified quadratic form ||M m - o||^2 + lambda ||m||^2; regularized normal equations: (M^T M + lambda I) m = M^T o.
37 Modified Normal Equations: include a data term, a smoothness term and a blending parameter: minimize ||M m - o||^2 (data) + lambda ||L m||^2 (prior information; popular choice: smoothness), where lambda is the blending (regularization) parameter.
38 Tikhonov Regularization: setting L = I, we have a quadratic optimization problem with data-fitting and minimum-norm terms: ||M m - o||^2 (data fitting) + lambda ||m||^2 (minimum norm). A large lambda will result in a small-norm solution; a small lambda fits the data well; find a good trade-off via the regularization parameter.
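A minimal sketch of the Tikhonov solve via the regularized normal equations, with the identity as prior (the L = I case above); forming M^T M densely is only sensible for small problems, as the later slides discuss:

```python
import numpy as np

def tikhonov_solve(M, o, lam):
    """Minimize ||M m - o||^2 + lam ||m||^2 by solving
    (M^T M + lam I) m = M^T o."""
    n = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(n), M.T @ o)
```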
39 Tikhonov Regularization - Example: reconstructions for different choices of lambda: small lambda, many oscillations; large lambda, smooth solution (in the limit, constant).
40 Tikhonov Regularization - Example
41 L-Curve Criterion [Hansen98]: need an automatic way of determining lambda; want a solution with small oscillations, but also a good data fit. Log-log plot of the norm of the residual (data-fitting error) vs. the norm of the solution (a measure of the oscillations in the solution).
42 L-Curve Criterion: video shows reconstructions for different lambda, starting with the L-curve-regularized solution.
43 L-Curve Criterion: compute the L-curve by solving the inverse problem with choices of lambda over a large range, e.g. several orders of magnitude; the point of highest curvature on the resulting curve corresponds to the optimal regularization parameter. Curvature computation: find the maximum and use the corresponding lambda to compute the optimal solution.
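A minimal sketch of the L-curve sweep; the lambda range and picking the corner by numerical curvature are illustrative simplifications ([Hansen98] gives robust formulas):

```python
import numpy as np

def l_curve(M, o, lams):
    """Log residual norm and log solution norm for each lambda."""
    n = M.shape[1]
    MtM, Mto = M.T @ M, M.T @ o
    rho, eta = [], []
    for lam in lams:
        m = np.linalg.solve(MtM + lam * np.eye(n), Mto)  # Tikhonov solve
        rho.append(np.linalg.norm(M @ m - o))            # data-fitting error
        eta.append(np.linalg.norm(m))                    # solution norm
    return np.log(rho), np.log(eta)

# lams = np.logspace(-10, 2, 60)
# rho, eta = l_curve(M, o, lams)
# pick the lambda at the point of maximum curvature of the (rho, eta) curve
```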
44 L-Curve Criterion Example 1D Deconvolution: L-curve with automatically selected optimal point; the optimal regularization parameter is different for every problem.
45 L-Curve Criterion Example 1D Deconvolution: regularized solution (red) with optimal lambda.
46 Solving Large Linear Systems: we can now regularize large ill-conditioned linear systems; how to solve them? Gaussian elimination and the SVD are both O(n^3): direct solution methods are too time-consuming. Solution: approximate iterative solution.
47 Iterative Solution Methods for Large Linear Systems: stationary iterative methods [Barret94]. Examples: Jacobi, Gauss-Seidel, Successive Over-Relaxation (SOR). They use a fixed-point iteration m_(k+1) = G m_(k) + c, where the matrix G and the vector c are constant throughout the iteration. Generally slow convergence; don't use them for practical applications.
48 Iterative Solution Methods for Large Linear Systems: non-stationary iterative methods [Barret94]: conjugate gradients (CG) for symmetric, positive definite (SPD) linear systems; conjugate gradients for the normal equations (short: CGLS or CGNR) avoids the explicit computation of M^T M. CG-type methods are good because convergence is fast (it depends on the condition number) and regularization is built in: the number of iterations acts as the regularization parameter, behaving similarly to truncated SVD.
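A minimal sketch of CGLS that touches M only through matrix-vector products; the iteration count iters doubles as the regularization parameter:

```python
import numpy as np

def cgls(matvec, rmatvec, o, n, iters):
    """Conjugate gradients for the normal equations without forming
    M^T M: matvec(v) = M v, rmatvec(v) = M^T v."""
    m = np.zeros(n)
    r = o - matvec(m)
    s = rmatvec(r)
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = matvec(p)
        alpha = gamma / (q @ q)
        m = m + alpha * p
        r = r - alpha * q
        s = rmatvec(r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# Usage sketch: m_reg = cgls(lambda v: M @ v, lambda v: M.T @ v, o, n, 30)
```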
49 Iterative Solution Methods for Large Linear Systems: iterative solution methods require only matrix-vector multiplications; they are most efficient if the matrix is sparse (a sparse matrix has mostly zero entries). Back to our hypothetical 65536x65536 matrix A: the memory consumption for the full matrix is 65536^2 x 8 bytes, about 32 GB in double precision; sparse matrices store only the non-zero matrix entries. Question: how do we get sparse matrices?
50/51 Iterative Solution Methods for Large Linear Systems: answer: use a discretization with basis functions that have local support, i.e. which are themselves zero over a wide range; for deconvolution, the filter kernel should also be locally supported. Discretized model: the entries (k * b_j)(x_i) will then be zero over a wide range of values, so the system matrix is sparse.
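A minimal sketch of a sparse system matrix for a short, locally supported kernel; the 3-tap kernel, the truncated (zero-padding) boundary and the matrix size are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp

def sparse_conv_matrix(k, n):
    """Banded n x n matrix applying the short kernel k (boundaries simply
    truncated; for a symmetric kernel this equals convolution)."""
    offsets = np.arange(len(k)) - len(k) // 2
    return sp.diags(list(k), offsets, shape=(n, n), format="csr")

M = sparse_conv_matrix([0.25, 0.5, 0.25], 65536)
print(M.nnz, "stored entries instead of", 65536 ** 2)  # ~2e5 vs ~4.3e9
```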
52 Iterative Solution Methods for Large Linear Systems sparse matrix structure for 1D deconvolution problem
53 Inverse Problems Wrap-Up: inverse problems are often ill-posed; if the solution is unstable, check the condition number. If the problem is small, use TSVD and Matlab; otherwise use CG if the problem is symmetric (positive definite), otherwise CGLS. If convergence is slow, try Tikhonov regularization: it's simple and improves the condition number and thus convergence. If the problem gets large, make sure you have a sparse linear system! If the system is sparse, avoid computing A^T A explicitly; it is usually dense.
54 Example Coded Aperture Imaging
55 Coded Aperture Motivation: building a camera without a lens: high-energy rays (astronomy, X-rays) are hard to bend, but they can be blocked -> pinhole? A pinhole loses too much light; use multiple pinholes at the same time. How to reconstruct the desired signal?
56-62 Coded Aperture Imaging (sequence of image-only slides illustrating the acquisition principle)
63 Reconstruction by Inversion: acquisition: image = M scene; inversion: scene = M^-1 image. The mask needs to be invertible; even if it is, there might be some numerical problems (condition number of M).
64 An Example: original image, single pinhole, larger aperture, multiple pinholes.
65 MURA Code: coding pattern (a Modified Uniformly Redundant Array, constructed from quadratic residues modulo a prime); simulated coded aperture image.
66 Reconstruction by Inverse Filtering: decoding pattern D, derived from the aperture pattern A; compute the convolution between the captured image and D to obtain the reconstructed image.
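A minimal sketch of a MURA mask and its decoding pattern, following the standard quadratic-residue construction; the prime p = 101 and the delta-correlation check are illustrative choices, and the slides' exact normalization is not reproduced:

```python
import numpy as np

def mura_mask(p):
    """p x p MURA aperture for a prime p with p % 4 == 1."""
    c = -np.ones(p)
    c[np.arange(1, p) ** 2 % p] = 1           # +1 on quadratic residues mod p
    A = np.zeros((p, p))
    A[1:, 0] = 1                              # first column open (except corner)
    A[1:, 1:] = np.outer(c[1:], c[1:]) > 0    # open where c_i * c_j = +1
    return A

def mura_decoder(A):
    G = 2.0 * A - 1.0                         # +1 where open, -1 where closed
    G[0, 0] = 1.0
    return G

def periodic_correlate(a, b):                 # circular cross-correlation via FFT
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

p = 101
A = mura_mask(p)
G = mura_decoder(A)
psf = periodic_correlate(A, G)                # should be delta-like: one strong peak
bg = psf.copy()
bg[0, 0] = 0.0
print("peak:", psf[0, 0], "max |background|:", np.abs(bg).max())
```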
67 Computational Photography Example Hardware-based Stabilization of Deconvolution (Flutter Shutter Camera) Slides by Ramesh Raskar
68 Input Image
69 Rectified Crop Deblurred Result
73 Traditional Camera Shutter is OPEN
74 Flutter Shutter Camera Shutter is OPEN and CLOSED
75 Comparison of Blurred Images
76 Implementation Completely Portable
77 Lab Setup
78 Traditional Camera: Box Filter: blurring == convolution; in log frequency space the box filter is a sinc function with zeros!
79 Flutter Shutter: Coded Filter: preserves high frequencies!!!
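A minimal sketch comparing the two frequency responses; the code length and the random binary code are stand-in assumptions, whereas [Raskar06] use a carefully optimized sequence:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 52
box = np.ones(n)                               # traditional shutter: open throughout
code = rng.integers(0, 2, n).astype(float)     # fluttered open/closed sequence

B = np.abs(np.fft.rfft(box, 1024)) / box.sum()
C = np.abs(np.fft.rfft(code, 1024)) / code.sum()
print("min |F{box}| :", B.min())               # sinc: near-zeros -> unstable inverse
print("min |F{code}|:", C.min())               # typically bounded away from zero
```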
80 Comparison
81 Comparison: the coded filter's inverse filter is stable; the box filter's inverse filter is unstable.
82 Results: short exposure, long exposure, coded exposure (flutter shutter), Matlab Lucy (Richardson-Lucy deconvolution), ground truth.
83 References: [Barret94] R. Barrett et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd edition, SIAM, 1994. [Hansen98] Per Christian Hansen, Rank-Deficient and Discrete Ill-Posed Problems, SIAM, 1998. [Raskar06] Ramesh Raskar, Amit Agrawal, and Jack Tumblin, Coded Exposure Photography: Motion Deblurring using Fluttered Shutter, ACM SIGGRAPH 2006.
84 Exercise: implement the MURA pattern; convolve an image with the pattern (normalized, range [0,1]); add noise; deconvolve using the inverse mask; compute the RMS error to the original image; plot noise level vs. RMS error for different runs.
