Efficient Iterative Methods for Large Scale Inverse Problems
1 Efficient Iterative Methods for Large Scale Inverse Problems. Joint work with: James Nagy (Emory University, Atlanta, GA, USA), Julianne Chung (University of Maryland), Veronica Mejia-Bustamante (Emory).
2 Inverse Problems in Imaging. Imaging problems are often modeled as b = Ax + e, where A is a large, ill-conditioned matrix; b is the known, measured (image) data; and e is noise whose statistical properties may be known. Goal: compute an approximation of the image x.
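To ground the model, here is a minimal 1-D deblurring test problem in NumPy; the sizes, blur width, and noise level are illustrative choices, not data from the talk:

```python
import numpy as np

# Build a Gaussian blurring matrix A (severely ill-conditioned), a
# piecewise-constant "image" x_true, and noisy data b = A x + e.
n, sigma, noise_level = 256, 4.0, 1e-2
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / sigma) ** 2)
A /= A.sum(axis=1, keepdims=True)            # row-normalized blur

x_true = np.zeros(n)
x_true[60:120], x_true[160:200] = 1.0, 0.5

b_exact = A @ x_true
rng = np.random.default_rng(0)
e = noise_level * np.linalg.norm(b_exact) / np.sqrt(n) * rng.standard_normal(n)
b = b_exact + e

# The "bad idea" of the later slides: x_inv = A^{-1} b amplifies e.
print("cond(A) = %.2e" % np.linalg.cond(A))
```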
3-6 Application: Medical Imaging. PET motion correction for brain imaging: the head moves during data acquisition, so the reconstructed brain image b is distorted by motion blur. Attach a cap with fixed markers to the patient's head; a motion detection camera records the position of the head. Construct a large, sparse matrix A from the position information, and solve the linear inverse problem b = Ax + e.
7 Application: Medical Imaging. [Figures: original reconstruction vs. improved reconstruction.] Collaborators: Sarah Knepper, Piotr Wendykier (Math/CS); John Votaw, Niv Raghunath, Tracy Faber (Radiology).
8 Inverse Problems in Imaging. A more challenging problem: b = A(y) x + e, where A(y) is a large, ill-conditioned matrix; b is the known, measured (image) data; e is noise whose statistical properties may be known; and y is a set of parameters defining A, usually only approximately known. Goal: compute an approximation of the image x and improve the estimate of the parameters y.
9-10 Application: Space Situational Awareness. Multi-Frame Blind Deconvolution: given images b, solve the nonlinear inverse problem b = A(y) x + e.
11 Outline. 1. Linear Inverse Problem b = Ax + e: Golub-Kahan Bidiagonalization; Golub-Kahan Based Hybrid Methods. 2. Nonlinear Inverse Problem: Nonlinear Least Squares Formulation; Regularized Variable Projection Method; Solving the linear subproblem; Examples. 3. Concluding remarks.
12-13 Linear Inverse Problem. Assume A = A(y) is known exactly. We are given A and b, where b = Ax + e, A is an ill-conditioned matrix, and we do not know e. We want to compute an approximation of x. Bad idea: e is small, so ignore it and use x_inv = A^{-1} b = x + A^{-1} e. Since A is ill-conditioned, the term A^{-1} e dominates the computed solution.
14-15 Inverse Solution. [Figures: original image vs. inverse solution.]
16 Regularization. Basic idea: instead of computing x_inv = A^{-1} b, use x_reg = A_reg^{-1} b, so that
x̂ = A_reg^{-1} b = A_reg^{-1} (A x + e) = A_reg^{-1} A x + A_reg^{-1} e,
where A_reg^{-1} A x ≈ x and A_reg^{-1} e is not too large.
17-18 Regularized Solution. [Figures: original image, inverse solution, and regularized solution.]
19-20 An Example: Tikhonov Regularization.
min_x { ||b - A x||_2^2 + λ^2 ||x||_2^2 } = min_x || [b; 0] - [A; λI] x ||_2^2
Choose λ to minimize the GCV function
GCV(λ) = n ||(I - A A_reg^{-1}) b||_2^2 / ( trace(I - A A_reg^{-1}) )^2,
where A_reg^{-1} = (A^T A + λ^2 I)^{-1} A^T.
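A minimal sketch of this λ-choice for small dense problems via the SVD; the search bounds for λ and the use of scipy's bounded minimizer are illustrative assumptions (the slide's n is the number of data values, m below):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tikhonov_gcv(A, b):
    """Tikhonov solution with lambda chosen by minimizing GCV(lambda)."""
    m, _ = A.shape
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    def gcv(lam):
        filt = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        # ||(I - A A_reg^{-1}) b||^2, including any part of b outside range(A)
        resid2 = np.sum(((1.0 - filt) * beta) ** 2) + max(b @ b - beta @ beta, 0.0)
        return m * resid2 / (m - np.sum(filt)) ** 2   # trace(I - A A_reg^{-1})
    lam = minimize_scalar(gcv, bounds=(1e-8, 1e2), method="bounded").x
    x_reg = Vt.T @ ((s / (s**2 + lam**2)) * beta)     # A_reg^{-1} b via the SVD
    return x_reg, lam
```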
21 Computational Approaches. For small matrices, we can use the SVD. For large matrices, computing the SVD is expensive, and SVD algorithms do not readily simplify for structured or sparse matrices. Alternative for large-scale problems: the LSQR iteration (Paige and Saunders, ACM TOMS, 1982).
22 Golub-Kahan (Lanczos) Bidiagonalization. Given A and b, for k = 1, 2, ..., compute
U_{k+1} = [u_1 u_2 ... u_k u_{k+1}], with u_1 = b / ||b||,
V_k = [v_1 v_2 ... v_k],
and the (k+1) × k bidiagonal matrix
B_k = [ α_1
        β_2 α_2
            ⋱   ⋱
              β_k α_k
                  β_{k+1} ]
where U_{k+1} and V_k have orthonormal columns, and
A V_k = U_{k+1} B_k,
A^T U_{k+1} = V_k B_k^T + α_{k+1} v_{k+1} e_{k+1}^T.
23-24 GKBD and LSQR. At the k-th GKBD iteration, use a QR factorization to solve the projected LS problem
min_{x ∈ R(V_k)} ||b - A x||_2^2 = min_f ||β e_1 - B_k f||_2^2, where x_k = V_k f and β = ||b||.
For our ill-posed inverse problems: the singular values of B_k converge to the k largest singular values of A. Thus x_k lies in a subspace that approximates the subspace spanned by the large singular components of A. For k < n, x_k is a regularized solution; x_n = x_inv = A^{-1} b (a bad approximation).
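A didactic NumPy sketch of the bidiagonalization and the projected solve. It omits reorthogonalization, so it is reliable only for modest k, and it calls a dense least-squares solver where LSQR would update f recurrently:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization of A, started from b."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta0 = np.linalg.norm(b)
    U[:, 0] = b / beta0
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        alpha = np.linalg.norm(v); V[:, j] = v / alpha
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta
        B[j, j] = alpha; B[j + 1, j] = beta
    return U, B, V, beta0

# Using A, b from the deblurring sketch earlier, the k-th LSQR iterate
# solves min_f ||beta0*e1 - B_k f||_2 with x_k = V_k f:
U, B, V, beta0 = golub_kahan(A, b, k=15)     # A V_k = U_{k+1} B_k holds
e1 = np.zeros(B.shape[0]); e1[0] = beta0
f = np.linalg.lstsq(B, e1, rcond=None)[0]
x_k = V @ f      # semi-convergence: stop at small k for regularization
```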
25 Golub-Kahan Based Hybrid Methods. To avoid noisy reconstructions, embed regularization in the GKBD iteration: O'Leary and Simmons, SISSC; Björck, BIT; Björck, Grimme, and Van Dooren, BIT, 1994; Larsen, PhD thesis; Hanke, BIT; Kilmer and O'Leary, SIMAX; Kilmer, Hansen, and Español, SISC; Chung, Nagy, and O'Leary, ETNA 2007 (HyBR implementation).
26 Regularize the Projected Least Squares Problem. To stabilize convergence, regularize the projected problem:
min_f || [β e_1; 0] - [B_k; λI] f ||_2^2
Note: B_k is very small compared to A, so we can use expensive methods to choose λ (e.g., GCV). However, very little regularization is needed in early iterations, and GCV tends to choose too large a λ for the bidiagonal system. Our remedy: use a weighted GCV (Chung, Nagy, O'Leary, 2007). WGCV information can also be used to estimate the stopping iteration (an approach similar to Björck, Grimme, and Van Dooren, BIT, 1994).
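A sketch of one hybrid step on the small projected problem. The weighted-GCV denominator with weight ω is modeled on the Chung-Nagy-O'Leary idea (ω = 1 recovers plain GCV), but this is an illustration of the idea, not the HyBR implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def hybrid_step(B, beta0, omega=0.5):
    """Tikhonov-regularize min_f ||beta0*e1 - B f||_2, lambda from WGCV."""
    kp1, _ = B.shape
    Ub, s, Vbt = np.linalg.svd(B, full_matrices=False)   # cheap: B is tiny
    c = beta0 * Ub[0, :]                                 # Ub^T (beta0 * e1)
    def wgcv(lam):
        filt = s**2 / (s**2 + lam**2)
        resid2 = np.sum(((1.0 - filt) * c) ** 2) + max(beta0**2 - c @ c, 0.0)
        return kp1 * resid2 / (kp1 - omega * np.sum(filt)) ** 2
    lam = minimize_scalar(wgcv, bounds=(1e-8, 1e2), method="bounded").x
    f = Vbt.T @ ((s / (s**2 + lam**2)) * c)
    return f, lam            # projected solution; form x_k = V_k f outside
```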
27 Nonlinear Inverse Problem. We want to find x and y so that b = A(y) x + e. With Tikhonov regularization, solve
min_{x,y} || [A(y); λI] x - [b; 0] ||_2^2
As with the linear problem, choosing a good regularization parameter λ is important. The problem is linear in x and nonlinear in y, with y ∈ R^p, x ∈ R^n, and p ≪ n.
28 General Mathematical Model. Assume b = A(y) x + e, where b is measured, noisy data (an image), and a parametric model for A(y) is known; e.g., we can implement a function A = param2matrix(y). We use iterative methods, so we do not need A explicitly; we just need to be able to compute matrix-vector products with A(y) and A(y)^T.
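This matrix-free interface is exactly what scipy's LinearOperator provides. A sketch for a periodic 1-D convolution model, where param2psf is a hypothetical stand-in for the talk's param2matrix:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def convolution_operator(psf):
    """Circulant A with first column psf; A x and A^T x via the FFT."""
    n = psf.size
    h = np.fft.fft(psf)               # eigenvalues of the circulant matrix
    matvec = lambda x: np.real(np.fft.ifft(h * np.fft.fft(x)))
    rmatvec = lambda x: np.real(np.fft.ifft(np.conj(h) * np.fft.fft(x)))
    return LinearOperator((n, n), matvec=matvec, rmatvec=rmatvec)

# usage (param2psf is hypothetical):
#   A = convolution_operator(param2psf(y))
#   x = lsqr(A, b, iter_lim=20)[0]    # solver only ever calls A x, A^T x
```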
29 Solution Scheme. We use Tikhonov regularization and solve the nonlinear least squares (NLLS) problem
min_{x,y} { ||A(y) x - b||_2^2 + λ^2 ||x||_2^2 } = min_{x,y} || [A(y); λI] x - [b; 0] ||_2^2
Goals: exploit the structure of the NLLS problem; exploit the structure of A(y); use a parameter choice method to choose λ.
30 Standard Gauss-Newton Approach. Define
f(z) = f([x; y]) = [A(y); λI] x - [b; 0],
and consider min_z ψ(z) = min_z ||f(z)||_2^2. The Jacobian is J_ψ = [ ∂f/∂x, ∂f/∂y ]. Basic Gauss-Newton iteration:
choose initial z_0 = [x_0; y_0]
for l = 0, 1, 2, ...
    r_l = [b; 0] - [A(y_l); λI] x_l
    d_l = arg min_d ||J_ψ d - r_l||_2
    z_{l+1} = z_l + d_l
end
31 General Gauss-Newton Method. Difficulties with the general Gauss-Newton approach: constructing and solving linear systems with J_ψ can be very expensive, and effective preconditioners may be difficult to find. It requires either specifying the regularization parameter λ a priori or estimating it within a nonlinear iterative scheme. It does not take algorithmic advantage of the fact that the problem is strongly convex in x. It may take small steps due to the nonlinearity induced by y.
32 Separable Nonlinear Least Squares. Variable Projection Method: implicitly eliminate the linear term, then optimize over the nonlinear term. Some general references: Golub and Pereyra, SINUM 1973 (also IP 2003); Kaufman, BIT 1975; Osborne, SINUM 1975 (also ETNA 2007); Ruhe and Wedin, SIREV 1980. How to apply this to inverse problems?
33 Variable Projection Method. Exploit the following properties of our problem: ψ(z) = ψ([x; y]) is linear in x, and y contains relatively few parameters compared to x. Implicitly eliminate the linear parameters x:
x(y) = arg min_x ψ(x, y) = arg min_x || [A(y); λI] x - [b; 0] ||_2^2
Obtain the reduced cost functional ρ(y) ≡ ψ(x(y), y), and use Gauss-Newton to minimize ρ(y).
34 Variable Projection Approach. Consider ρ(y) ≡ ψ(x(y), y) = ||f(x(y), y)||_2^2. The Jacobian is
J_ρ = ∂f/∂y = [ ∂(A(y) x)/∂y ; 0 ] = [ Ĵ_ρ ; 0 ]
Reduced Gauss-Newton iteration:
choose initial y_0
for l = 0, 1, 2, ...
    x_l = arg min_x || [A(y_l); λ_l I] x - [b; 0] ||_2
    r̂_l = b - A(y_l) x_l
    d_l = arg min_d ||Ĵ_ρ d - r̂_l||_2
    y_{l+1} = y_l + d_l
end
35 Variable Projection Approach. In practice, the inner solve and the choice of λ_l are delegated to the hybrid method:
choose initial y_0
for l = 0, 1, 2, ...
    [x_l, λ_l] = HyBR(A(y_l), b)
    r̂_l = b - A(y_l) x_l
    d_l = arg min_d ||Ĵ_ρ d - r̂_l||_2
    y_{l+1} = y_l + d_l
end
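A structural sketch of this outer loop in NumPy. Here solve_inner stands in for HyBR (any regularized solver that also returns its λ), jac returns the reduced Jacobian Ĵ_ρ, and step-length control is omitted:

```python
import numpy as np

def varpro_gauss_newton(A_of_y, solve_inner, jac, b, y0, maxit=20, tol=1e-6):
    """Reduced Gauss-Newton for min_y rho(y); A_of_y(y) must support A @ x."""
    y = np.asarray(y0, dtype=float)
    for _ in range(maxit):
        A = A_of_y(y)
        x, lam = solve_inner(A, b)       # x(y): inner regularized LS solve
        r = b - A @ x                    # reduced residual r_hat
        J = jac(y, x)                    # J_hat: d(A(y) x)/dy, shape (m, p)
        d = np.linalg.lstsq(J, r, rcond=None)[0]   # GN step: min ||J d - r||_2
        y = y + d
        if np.linalg.norm(d) <= tol * max(np.linalg.norm(y), 1.0):
            break
    return y, x, lam
```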
36 Example 1: Blind Deconvolution. A(y) = A(p(y)), where A is a blurring matrix whose entries are given by the PSF p, with entries
p_ij = exp( ( -(i-k)^2 s_2^2 - (j-l)^2 s_1^2 + 2 (i-k)(j-l) ρ^2 ) / ( 2 s_1^2 s_2^2 - 2 ρ^4 ) )
Here (k, l) is the PSF center (the location of the point source), and the vector of unknown parameters is y = [s_1; s_2; ρ].
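A sketch of this parameterized PSF on an n × n grid; the grid size, default center, unit-sum normalization, and sample parameter values are my choices:

```python
import numpy as np

def gaussian_psf(n, y, center=None):
    """Rotated Gaussian PSF p(y), y = (s1, s2, rho); needs s1*s2 > rho^2."""
    s1, s2, rho = y
    k, l = center if center is not None else (n // 2, n // 2)
    i, j = np.mgrid[0:n, 0:n]
    num = -(i - k) ** 2 * s2**2 - (j - l) ** 2 * s1**2 \
          + 2.0 * (i - k) * (j - l) * rho**2
    p = np.exp(num / (2.0 * s1**2 * s2**2 - 2.0 * rho**4))
    return p / p.sum()                  # normalize to unit sum

P = gaussian_psf(64, (4.0, 2.0, 1.0))   # illustrative parameter values
```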
37 Example 1: Blind Deconvolution. We can get an analytical formula for the Jacobian:
Ĵ_ρ = ∂{A(p(y)) x} / ∂y = ∂{A(p(y)) x} / ∂p · ∂p(y)/∂y = A(X) ∂p(y)/∂y,
where x = vec(X). Though in this example, a finite difference approximation of Ĵ_ρ works very well.
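A sketch of that finite-difference approximation. The helper apply_A (mapping a PSF and an image to the blurred image, e.g. an FFT convolution like the operator sketch above) is assumed, and the step size h is a typical choice:

```python
import numpy as np

def fd_jacobian(apply_A, psf_of_y, y, x, h=1e-6):
    """Column q of J_hat is approx (A(p(y + h e_q)) x - A(p(y)) x) / h."""
    y = np.asarray(y, dtype=float)
    f0 = apply_A(psf_of_y(y), x)
    J = np.empty((f0.size, y.size))
    for q in range(y.size):
        yq = y.copy(); yq[q] += h       # perturb one PSF parameter
        J[:, q] = (apply_A(psf_of_y(yq), x) - f0) / h
    return J
```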
38 Example 1: Blind Deconvolution. [Table: Gauss-Newton iteration history; columns: G-N iteration, y, x, HyBR λ, HyBR iterations.]
39 Example 1: Blind Deconvolution. [Figures: blurred image, initial reconstruction, final reconstruction.]
40-41 Example 2: Multi-Frame Blind Deconvolution. Similar setup to the previous problem, except: use 3 different blurred images (frames), so y has 9 parameters (3 for each PSF). Goal: find approximations of the 3 PSFs and the true image.
42 Example 2: Multi-Frame Blind Deconvolution. Convergence results: [Plots: relative error in y and relative error in x versus GN iteration.]
43-44 Example 2: Multi-Frame Blind Deconvolution. [Figures: blurred images, initial reconstruction, final reconstruction.]
45 The regularized variable projection method works well for challenging inverse problems in image processing: it exploits high-level structure, and low-level structure can be exploited in the linear system solves. HyBR works well for the linear system solves: it automatically estimates the regularization parameter and the stopping iteration, and it can incorporate preconditioning. Some work to do: a stopping rule for the Gauss-Newton iteration; incorporating other regularization schemes and constraints (nonnegativity); geometric models for nonlinear distortions and synthetic boundary conditions (Daniel Fan, Emory University).