# The Image Deblurring Problem


> *You cannot depend on your eyes when your imagination is out of focus.* (Mark Twain)

When we use a camera, we want the recorded image to be a faithful representation of the scene that we see, but every image is more or less blurry. Thus, image deblurring is fundamental in making pictures sharp and useful.

A digital image is composed of picture elements called *pixels*. Each pixel is assigned an intensity, meant to characterize the color of a small rectangular segment of the scene. A small image typically has around 256 × 256 = 65,536 pixels, while a high-resolution image often has 5 to 10 million pixels.

Some blurring always arises in the recording of a digital image, because it is unavoidable that scene information spills over to neighboring pixels. For example, the optical system in a camera lens may be out of focus, so that the incoming light is smeared out. The same problem arises, for example, in astronomical imaging, where the incoming light in the telescope has been slightly bent by turbulence in the atmosphere. In these and similar situations, the inevitable result is that we record a blurred image.

In image deblurring, we seek to recover the original, sharp image by using a mathematical model of the blurring process. The key issue is that some information on the lost details is indeed present in the blurred image, but this information is hidden and can only be recovered if we know the details of the blurring process.

Unfortunately there is no hope that we can recover the original image exactly! This is due to various unavoidable errors in the recorded image. The most important errors are fluctuations in the recording process and approximation errors when representing the image with a limited number of digits. The influence of this noise puts a limit on the size of the details that we can hope to recover in the reconstructed image, and the limit depends on both the noise and the blurring process.

POINTER.
Image enhancement is used in the restoration of older movies. For example, the original Star Wars trilogy was enhanced for release on DVD. These methods are not model based and therefore not covered in this book. See [33] for more information.

POINTER. Throughout the book, we provide example images and MATLAB code. This material can be found on the book's website. For readers needing an introduction to MATLAB programming, we suggest the excellent book by Higham and Higham [27].

One of the challenges of image deblurring is to devise efficient and reliable algorithms for recovering as much information as possible from the given data. This chapter provides a brief introduction to the basic image deblurring problem and explains why it is difficult. In the following chapters we give more details about techniques and algorithms for image deblurring.

MATLAB is an excellent environment in which to develop and experiment with filtering methods for image deblurring. The basic MATLAB package contains many functions and tools for this purpose, but in some cases it is more convenient to use routines that are only available from the Signal Processing Toolbox (SPT) and the Image Processing Toolbox (IPT). We will therefore use these toolboxes when convenient. When possible, we provide alternative approaches that require only core MATLAB commands in case the reader does not have access to the toolboxes.

## 1.1 How Images Become Arrays of Numbers

Having a way to represent images as arrays of numbers is crucial to processing images using mathematical techniques. Consider a 9 × 16 array whose entries are integers between 0 and 8. If we enter this array into a MATLAB variable X and display it with the commands imagesc(X), axis image, colormap(gray), then we obtain the picture shown at the left of Figure 1.1. The entries with value 8 are displayed as white, entries equal to zero are black, and values in between are shades of gray.

Color images can be represented using various formats; the RGB format stores images as three components, which represent their intensities on the red, green, and blue scales.
A pure red color is represented by the intensity values (1, 0, 0), while, for example, the values (1, 1, 0) represent yellow and (0, 0, 1) represents blue; other colors can be obtained with different choices of intensities. Hence, to represent a color image, we need three values per pixel. For example, if X is a multidimensional MATLAB array of dimensions

9 × 16 × 3, defined by assigning suitable arrays to X(:,:,1), X(:,:,2), and X(:,:,3) (the red, green, and blue intensities), then we can display this image, in color, with the command imagesc(X), obtaining the second picture shown in Figure 1.1.

Figure 1.1. Images created by displaying arrays of numbers.

This brings us to our first Very Important Point (VIP).

VIP 1. A digital image is a two- or three-dimensional array of numbers representing intensities on a grayscale or color scale.
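The book's display commands are MATLAB's; as a rough NumPy analogue (the block pattern below is invented purely for illustration), a grayscale image is a 2-D array and an RGB image simply adds a third dimension:

```python
import numpy as np

# A tiny grayscale image: a 9-by-16 array whose entries 0..8 map from
# black to white, just as imagesc(X), colormap(gray) does in MATLAB.
gray = np.zeros((9, 16))
gray[2:7, 3:6] = 8                 # a white block on a black background

# An RGB image needs three intensities per pixel, so the array gains a
# third dimension: shape (m, n, 3).
rgb = np.zeros((9, 16, 3))
rgb[2:7, 3:6] = (1.0, 1.0, 0.0)    # a yellow block: red + green, no blue

# With matplotlib, plt.imshow(gray, cmap='gray') and plt.imshow(rgb)
# would play the role of MATLAB's imagesc(X), axis image, colormap(gray).
print(gray.shape, rgb.shape)       # → (9, 16) (9, 16, 3)
```

The only structural difference between the two representations is that third axis of length 3, which is exactly the content of VIP 1.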

Most of this book is concerned with grayscale images. However, the techniques carry over to color images, and in Chapter 7 we extend our notation and models to color images.

## 1.2 A Blurred Picture and a Simple Linear Model

Before we can deblur an image, we must have a mathematical model that relates the given blurred image to the unknown true image. Consider the example shown in Figure 1.2. The image on the left is the true scene, and the image on the right is a blurred version of it. The blurred image is precisely what would be recorded in the camera if the photographer forgot to focus the lens.

Figure 1.2. A sharp image (left) and the corresponding blurred image (right).

Grayscale images, such as the ones in Figure 1.2, are typically recorded by means of a CCD (charge-coupled device), which is an array of tiny detectors, arranged in a rectangular grid, able to record the amount, or intensity, of the light that hits each detector. Thus, as explained above, we can think of a grayscale digital image as a rectangular $m \times n$ array, whose entries represent light intensities captured by the detectors. To fix notation, $X \in \mathbb{R}^{m \times n}$ represents the desired sharp image, while $B \in \mathbb{R}^{m \times n}$ denotes the recorded blurred image.

Let us first consider a simple case where the blurring of the columns in the image is independent of the blurring of the rows. When this is the case, then there exist two matrices $A_c \in \mathbb{R}^{m \times m}$ and $A_r \in \mathbb{R}^{n \times n}$, such that we can express the relation between the sharp and blurred images as

$$A_c X A_r^T = B. \qquad (1.1)$$

The left multiplication with the matrix $A_c$ applies the same vertical blurring operation to all $n$ columns $x_j$ of $X$, because

$$A_c X = A_c \,[\, x_1 \;\; x_2 \;\cdots\; x_n \,] = [\, A_c x_1 \;\; A_c x_2 \;\cdots\; A_c x_n \,].$$

Similarly, the right multiplication with $A_r^T$ applies the same horizontal blurring to all $m$ rows of $X$. Since matrix multiplication is associative, i.e., $(A_c X) A_r^T = A_c (X A_r^T)$, it does

not matter in which order we perform the two blurring operations. The reason for our use of the transpose of the matrix $A_r$ will be clear later, when we return to this blurring model and matrix formulations.

POINTER. Image deblurring is much more than just a useful tool for our vacation pictures. For example, analysis of astronomical images gives clues to the behavior of the universe. At a more mundane level, barcode readers used in supermarkets and by shipping companies must be able to compensate for imperfections in the scanner optics; see Wittman [63] for more information.

## 1.3 A First Attempt at Deblurring

If the image blurring model is of the simple form $A_c X A_r^T = B$, then one might think that the naïve solution

$$X_{\text{naïve}} = A_c^{-1} B A_r^{-T}$$

will yield the desired reconstruction, where $A_r^{-T} = (A_r^T)^{-1} = (A_r^{-1})^T$. Figure 1.3 illustrates that this is probably not such a good idea; the reconstructed image does not appear to have any features of the true image!

Figure 1.3. The naïve reconstruction of the pumpkin image in Figure 1.2, obtained by computing $X_{\text{naïve}} = A_c^{-1} B A_r^{-T}$ via Gaussian elimination on both $A_c$ and $A_r$. Both matrices are ill-conditioned, and the image $X_{\text{naïve}}$ is dominated by the influence from rounding errors as well as errors in the blurred image $B$.

To understand why this naïve approach fails, we must realize that the blurring model in (1.1) is not quite correct, because we have ignored several types of errors. Let us take a closer look at what is represented by the image $B$. First, let $B_{\text{exact}} = A_c X A_r^T$ represent the ideal blurred image, ignoring all kinds of errors. Because the blurred image is collected by a mechanical device, it is inevitable that small random errors (noise) will be present in the recorded data. Moreover, when the image is digitized, it is represented by a finite (and typically small) number of digits.
Thus the recorded blurred image $B$ is really given by

$$B = B_{\text{exact}} + E = A_c X A_r^T + E, \qquad (1.2)$$

where the noise image $E$ (of the same dimensions as $B$) represents the noise and the quantization errors in the recorded image. Consequently the naïve reconstruction is given by

$$X_{\text{naïve}} = A_c^{-1} B A_r^{-T} = A_c^{-1} B_{\text{exact}} A_r^{-T} + A_c^{-1} E A_r^{-T},$$

and therefore

$$X_{\text{naïve}} = X + A_c^{-1} E A_r^{-T}, \qquad (1.3)$$

where the term $A_c^{-1} E A_r^{-T}$, which we can informally call *inverted noise*, represents the contribution to the reconstruction from the additive noise. This inverted noise will dominate the solution if the second term $A_c^{-1} E A_r^{-T}$ in (1.3) has larger elements than the first term $X$. Unfortunately, in many situations, as in Figure 1.3, the inverted noise indeed dominates. Apparently, image deblurring is not as simple as it first appears. We can now state the purpose of our book more precisely, namely, to describe effective deblurring methods that are able to handle correctly the inverted noise.

CHALLENGE 1. The exact and blurred images X and B in the above figure can be constructed in MATLAB by calling

    [B, Ac, Ar, X] = challenge1(m, n, noise);

with m = n = 256 and a small noise level, say noise = 0.01. Try to deblur the image B using

    Xnaive = Ac \ B / Ar';

To display a grayscale image, say, X, use the commands imagesc(X), axis image, colormap(gray). How large can you choose the parameter noise before the inverted noise dominates the deblurred image? Does this value of noise depend on the image size?
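The book's challenge1 function lives on its website; the following self-contained NumPy sketch imitates it with a made-up Gaussian blur and also checks the error bound stated in Challenge 3 (the matrix construction, sizes, blur width, and noise level are all illustrative assumptions, not the book's test data):

```python
import numpy as np

def gaussian_blur_matrix(n, sigma):
    """Illustrative n-by-n blurring matrix: row i is a normalized Gaussian
    centered at pixel i, so multiplying by it blurs a length-n signal."""
    idx = np.arange(n)
    A = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
m = n = 64
X = rng.random((m, n))                    # stand-in for the sharp image
Ac = gaussian_blur_matrix(m, sigma=2.0)   # blurs the columns
Ar = gaussian_blur_matrix(n, sigma=2.0)   # blurs the rows
E = 1e-3 * rng.standard_normal((m, n))    # additive noise, as in (1.2)
B = Ac @ X @ Ar.T + E

# Naive deblurring, X_naive = Ac^{-1} B Ar^{-T}, via two linear solves.
Xnaive = np.linalg.solve(Ar, np.linalg.solve(Ac, B).T).T

rel_err = np.linalg.norm(Xnaive - X) / np.linalg.norm(X)
# The bound of Challenge 3: cond(Ac) cond(Ar) ||E||_F / ||B||_F.
bound = (np.linalg.cond(Ac) * np.linalg.cond(Ar)
         * np.linalg.norm(E) / np.linalg.norm(B))

print(rel_err)            # far larger than 1: the inverted noise dominates
print(rel_err <= bound)
```

With these illustrative matrices the relative error comes out many orders of magnitude above 1, which is exactly the failure mode shown in Figure 1.3, while still lying below the condition-number bound.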

CHALLENGE 2. The above image B as well as the blurring matrices Ac and Ar are given in the file challenge2.mat. Can you deblur this image with the naïve approach, so that you can read the text in it? As you learn more throughout the book, use Challenges 1 and 2 as examples to test your skills and learn more about the presented methods.

CHALLENGE 3. For the simple model $B = A_c X A_r^T + E$ it is easy to show that the relative error in the naïve reconstruction $X_{\text{naïve}} = A_c^{-1} B A_r^{-T}$ satisfies

$$\frac{\| X_{\text{naïve}} - X \|_F}{\| X \|_F} \le \operatorname{cond}(A_c) \operatorname{cond}(A_r) \, \frac{\| E \|_F}{\| B \|_F},$$

where

$$\| X \|_F = \Big( \sum_{i=1}^{m} \sum_{j=1}^{n} x_{ij}^2 \Big)^{1/2}$$

denotes the Frobenius norm of the matrix $X$. The quantity $\operatorname{cond}(A)$, computed by the MATLAB function cond(A), is the condition number of $A$, formally defined by (1.8), measuring the possible magnification of the relative error in $E$ in producing the solution $X_{\text{naïve}}$. For the test problem in Challenge 1 and different values of the image size, use this relation to determine the maximum allowed value of $\| E \|_F$ such that the relative error in the naïve reconstruction is guaranteed to be less than 5%.

## 1.4 Deblurring Using a General Linear Model

Underlying all material in this book is the assumption that the blurring, i.e., the operation of going from the sharp image to the blurred image, is linear. As usual in the physical sciences, this assumption is made because in many situations the blur is indeed linear, or at least well approximated by a linear model. An important consequence of the assumption

is that we have a large number of tools from linear algebra and matrix computations at our disposal. The use of linear algebra in image reconstruction has a long history, and goes back to classical works such as the book by Andrews and Hunt [1].

POINTER. Our basic assumption is that we have a linear blurring process. This means that if $B_1$ and $B_2$ are the blurred images of the exact images $X_1$ and $X_2$, then $B = \alpha B_1 + \beta B_2$ is the blurred image of $X = \alpha X_1 + \beta X_2$. When this is the case, then there exists a large matrix $A$ such that $b = \operatorname{vec}(B)$ and $x = \operatorname{vec}(X)$ are related by the equation $A x = b$. The matrix $A$ represents the blurring that is taking place in the process of going from the exact to the blurred image. The equation $A x = b$ can often be considered as a discretization of an underlying integral equation; the details can be found in [23].

In order to handle a variety of applications, we need a blurring model somewhat more general than that in (1.1). The key to obtaining this general linear model is to rearrange the elements of the images $X$ and $B$ into column vectors by stacking the columns of these images into two long vectors $x$ and $b$, both of length $N = mn$. The mathematical notation for this operator is vec, i.e.,

$$x = \operatorname{vec}(X) = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \in \mathbb{R}^N, \qquad b = \operatorname{vec}(B) = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} \in \mathbb{R}^N.$$

Since the blurring is assumed to be a linear operation, there must exist a large blurring matrix $A \in \mathbb{R}^{N \times N}$ such that $x$ and $b$ are related by the linear model

$$A x = b, \qquad (1.4)$$

and this is our fundamental image blurring model. For now, assume that $A$ is known; we will explain how it can be constructed from the imaging system in Chapter 3, and also discuss the precise structure of the matrix in Chapter 4. For our linear model, the naïve approach to image deblurring is simply to solve the linear algebraic system in (1.4), but from the previous section, we expect failure. Let us now explain why.
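For the separable blur of Section 1.2, the big matrix $A$ in (1.4) can be written down explicitly via the standard identity $\operatorname{vec}(A_c X A_r^T) = (A_r \otimes A_c)\operatorname{vec}(X)$, a Kronecker-product structure the book returns to in Chapter 4. A small NumPy check (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 5
Ac = rng.random((m, m))
Ar = rng.random((n, n))
X = rng.random((m, n))

vec = lambda M: M.flatten(order='F')   # stack the columns, as in the text

# The big blurring matrix for the separable model is a Kronecker product:
# vec(Ac @ X @ Ar.T) == (Ar kron Ac) @ vec(X).
A = np.kron(Ar, Ac)                    # N-by-N with N = m*n = 20
b = A @ vec(X)
assert np.allclose(b, vec(Ac @ X @ Ar.T))
print(A.shape)                         # → (20, 20)
```

Note the column-major (`order='F'`) flattening: the identity holds for column stacking, which is also how MATLAB's `A(:)` behaves.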
We repeat the computation from the previous section, this time using the general formulation in (1.4). Again let $B_{\text{exact}}$ and $E$ be, respectively, the noise-free blurred image and the noise image, and define the corresponding vectors

$$b_{\text{exact}} = \operatorname{vec}(B_{\text{exact}}) = A x, \qquad e = \operatorname{vec}(E).$$

Then the noisy recorded image $B$ is represented by the vector $b = b_{\text{exact}} + e$, and consequently the naïve solution is given by

$$x_{\text{naïve}} = A^{-1} b = A^{-1} b_{\text{exact}} + A^{-1} e = x + A^{-1} e, \qquad (1.5)$$

where the term $A^{-1} e$ is the inverted noise. Equation (1.3) in the previous section is a special case of this equation. The important observation here is that the deblurred image consists of two components: the first component is the exact image, and the second component is the inverted noise. If the deblurred image looks unacceptable, it is because the inverted noise term contaminates the reconstructed image.

POINTER. Good presentations of the SVD can be found in the books by Björck [4], Golub and Van Loan [18], and Stewart [57].

Important insight about the inverted noise term can be gained using the singular value decomposition (SVD), which is the tool-of-the-trade in matrix computations for analyzing linear systems of equations. The SVD of a square matrix $A \in \mathbb{R}^{N \times N}$ is essentially unique and is defined as the decomposition

$$A = U \Sigma V^T,$$

where $U$ and $V$ are orthogonal matrices, satisfying $U^T U = I_N$ and $V^T V = I_N$, and $\Sigma = \operatorname{diag}(\sigma_i)$ is a diagonal matrix whose elements $\sigma_i$ are nonnegative and appear in nonincreasing order,

$$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_N \ge 0.$$

The quantities $\sigma_i$ are called the singular values, and the rank of $A$ is equal to the number of positive singular values. The columns $u_i$ of $U$ are called the left singular vectors, while the columns $v_i$ of $V$ are the right singular vectors. Since $U^T U = I_N$, we see that $u_i^T u_j = 0$ if $i \ne j$, and, similarly, $v_i^T v_j = 0$ if $i \ne j$.

Assuming for the moment that all singular values are strictly positive, it is straightforward to show that the inverse of $A$ is given by $A^{-1} = V \Sigma^{-1} U^T$ (we simply verify that $A^{-1} A = I_N$). Since $\Sigma$ is a diagonal matrix, its inverse $\Sigma^{-1}$ is also diagonal, with entries $1/\sigma_i$ for $i = 1, \ldots, N$. Another representation of $A$ and $A^{-1}$ is also useful to us:

$$A = U \Sigma V^T = [\, u_1 \,\cdots\, u_N \,] \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_N \end{bmatrix} \begin{bmatrix} v_1^T \\ \vdots \\ v_N^T \end{bmatrix} = u_1 \sigma_1 v_1^T + \cdots + u_N \sigma_N v_N^T = \sum_{i=1}^{N} \sigma_i u_i v_i^T.$$

Similarly,

$$A^{-1} = \sum_{i=1}^{N} \frac{1}{\sigma_i}\, v_i u_i^T.$$
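These SVD identities are easy to verify numerically. A NumPy sketch on a small, well-conditioned random matrix (note that numpy.linalg.svd returns $V^T$, not $V$):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
A = rng.random((N, N)) + N * np.eye(N)   # well conditioned, hence invertible

U, s, Vt = np.linalg.svd(A)              # A = U @ diag(s) @ Vt
V = Vt.T

# Orthogonality of U and V, and ordering of the singular values.
assert np.allclose(U.T @ U, np.eye(N))
assert np.allclose(V.T @ V, np.eye(N))
assert np.all(s[:-1] >= s[1:])           # sigma_1 >= ... >= sigma_N

# Outer-product expansions of A and of its inverse.
A_sum = sum(s[i] * np.outer(U[:, i], V[:, i]) for i in range(N))
Ainv_sum = sum((1 / s[i]) * np.outer(V[:, i], U[:, i]) for i in range(N))
assert np.allclose(A, A_sum)
assert np.allclose(np.linalg.inv(A), Ainv_sum)
```

The two `sum(...)` lines are literal transcriptions of the rank-one expansions of $A$ and $A^{-1}$ above.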

Using this relation, it follows immediately that the naïve reconstruction given in (1.5) can be written as

$$x_{\text{naïve}} = A^{-1} b = V \Sigma^{-1} U^T b = \sum_{i=1}^{N} \frac{u_i^T b}{\sigma_i}\, v_i, \qquad (1.6)$$

and the inverted noise contribution to the solution is given by

$$A^{-1} e = V \Sigma^{-1} U^T e = \sum_{i=1}^{N} \frac{u_i^T e}{\sigma_i}\, v_i. \qquad (1.7)$$

In order to understand when this error term dominates the solution, we need to know that the following properties generally hold for image deblurring problems:

- The error components $u_i^T e$ are small and typically of roughly the same order of magnitude for all $i$.
- The singular values decay to a value very close to zero. As a consequence the condition number

  $$\operatorname{cond}(A) = \sigma_1 / \sigma_N \qquad (1.8)$$

  is very large, indicating that the solution is very sensitive to perturbation and rounding errors.
- The singular vectors corresponding to the smaller singular values typically represent higher-frequency information. That is, as $i$ increases, the vectors $u_i$ and $v_i$ tend to have more sign changes.

The consequence of the last property is that the SVD provides us with basis vectors $v_i$ for an expansion where each basis vector represents a certain frequency, approximated by the number of times the entries in the vector change signs. Figure 1.4 shows images of some of the singular vectors $v_i$ for the blur of Figure 1.2. Note that each vector $v_i$ is reshaped into an $m \times n$ array $V_i$ in such a way that we can write the naïve solution as

$$X_{\text{naïve}} = \sum_{i=1}^{N} \frac{u_i^T b}{\sigma_i}\, V_i.$$

All the $V_i$ arrays (except the first) have negative elements and therefore, strictly speaking, they are not images. We see that the spatial frequencies in $V_i$ increase with the index $i$. When we encounter an expansion of the form $\sum_{i=1}^{N} \xi_i v_i$, such as in (1.6) and (1.7), then the $i$th expansion coefficient $\xi_i$ measures the contribution of $v_i$ to the result. And since each vector $v_i$ can be associated with some frequency, the $i$th coefficient measures the amount of information of that frequency in our image.
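The first two properties can be seen numerically. In this sketch (the Gaussian blur matrix and the noise level are illustrative assumptions, not the book's data), the raw coefficients $u_i^T e$ all stay small, while dividing by the decaying $\sigma_i$ blows the trailing coefficients up:

```python
import numpy as np

def gaussian_blur_matrix(n, sigma):
    """Illustrative n-by-n blurring matrix (row i: normalized Gaussian at i)."""
    idx = np.arange(n)
    A = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)

n = 64
A = gaussian_blur_matrix(n, sigma=2.0)
U, s, Vt = np.linalg.svd(A)

rng = np.random.default_rng(2)
e = 1e-3 * rng.standard_normal(n)   # a small noise vector

raw = U.T @ e        # the coefficients u_i^T e: all comparable and tiny
inverted = raw / s   # u_i^T e / sigma_i: magnified by the small sigma_i

print(np.abs(raw).max(), np.abs(inverted).max())
```

The maximum of `raw` stays near the noise level, while the maximum of `inverted` is larger by many orders of magnitude, which is the mechanism behind (1.7).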
Looking at the expression (1.7) for $A^{-1} e$, we see that the quantities $u_i^T e / \sigma_i$ are the expansion coefficients for the basis vectors $v_i$. When these quantities are small in magnitude, the solution has very little contribution from $v_i$, but when we divide by a small singular value such as $\sigma_N$, we greatly magnify the corresponding error component, $u_N^T e$, which in turn contributes a large multiple of the high-frequency information contained in $v_N$ to

the computed solution. This is precisely why a naïve reconstruction, such as the one in Figure 1.3, appears as a random image dominated by high frequencies.

Figure 1.4. A few of the singular vectors for the blur of the pumpkin image in Figure 1.2. The images shown in this figure were obtained by reshaping the $mn \times 1$ singular vectors $v_i$ into $m \times n$ arrays.

Because of this, we might be better off leaving the high-frequency components out altogether, since they are dominated by error. For example, for some choice of $k < N$ we can compute the truncated expansion

$$x_k = \sum_{i=1}^{k} \frac{u_i^T b}{\sigma_i}\, v_i = A_k\, b,$$

in which we have introduced the rank-$k$ matrix

$$A_k = [\, v_1 \,\cdots\, v_k \,] \begin{bmatrix} \sigma_1^{-1} & & \\ & \ddots & \\ & & \sigma_k^{-1} \end{bmatrix} \begin{bmatrix} u_1^T \\ \vdots \\ u_k^T \end{bmatrix} = \sum_{i=1}^{k} \frac{1}{\sigma_i}\, v_i u_i^T.$$

Figure 1.5 shows what happens when we replace $A^{-1} b$ by $x_k$ with $k = 800$; this reconstruction is much better than the naïve solution shown in Figure 1.3. We may wonder if a different value for $k$ will produce a better reconstruction! The truncated SVD expansion for $x_k$ involves the computation of the SVD of the large $N \times N$ matrix $A$, and is therefore computationally feasible only if we can find fast algorithms

to compute the decomposition. Before analyzing such ways to solve our problem, though, it may be helpful to have a brief tutorial on manipulating images in MATLAB, and we present that in the next chapter.

Figure 1.5. The reconstruction $x_k$ obtained for the blur of the pumpkins of Figure 1.2 by using $k = 800$ (instead of the full $k = N$).

VIP 2. We model the blurring of images as a linear process characterized by a blurring matrix $A$ and an observed image $B$, which, in vector form, is $b$. The reason $A^{-1} b$ cannot be used to deblur images is the amplification of high-frequency components of the noise in the data, caused by the inversion of very small singular values of $A$. Practical methods for image deblurring need to avoid this pitfall.

CHALLENGE 4. For the simple model $B = A_c X A_r^T + E$ in Sections 1.2 and 1.3, let us introduce the two rank-$k$ matrices $(A_c)_k$ and $(A_r)_k$, defined similarly to $A_k$. Then for $k < \min(m, n)$ we can define the reconstruction

$$X_k = (A_c)_k \, B \, ((A_r)_k)^T.$$

Use this approach to deblur the image from Challenge 2. Can you find a value of $k$ such that you can read the text?
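The truncated expansion, in the separable form of Challenge 4, can be sketched in NumPy as follows (the blur matrices, the smooth test image, the noise level, and the choice $k = 20$ are all illustrative assumptions, not the book's test data):

```python
import numpy as np

def gaussian_blur_matrix(n, sigma):
    """Illustrative n-by-n blurring matrix (row i: normalized Gaussian at i)."""
    idx = np.arange(n)
    A = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)

def truncated_inverse(A, k):
    """The rank-k matrix A_k = sum_{i<=k} (1/sigma_i) v_i u_i^T."""
    U, s, Vt = np.linalg.svd(A)
    return Vt[:k].T @ np.diag(1 / s[:k]) @ U[:, :k].T

rng = np.random.default_rng(4)
m = n = 64
t = np.linspace(-1, 1, n)
X = np.outer(np.exp(-8 * t**2), np.exp(-8 * t**2))   # smooth test "image"
Ac = gaussian_blur_matrix(m, sigma=1.5)
Ar = gaussian_blur_matrix(n, sigma=1.5)
B = Ac @ X @ Ar.T + 1e-3 * rng.standard_normal((m, n))

# Naive: invert everything, including the tiny singular values.
Xnaive = np.linalg.solve(Ar, np.linalg.solve(Ac, B).T).T
# Truncated: X_k = (Ac)_k B ((Ar)_k)^T, as in Challenge 4.
k = 20
Xk = truncated_inverse(Ac, k) @ B @ truncated_inverse(Ar, k).T

err = lambda Z: np.linalg.norm(Z - X) / np.linalg.norm(X)
print(err(Xnaive), err(Xk))   # truncation beats naive inversion
```

Dropping the small singular values discards the noise-dominated high-frequency terms, so the truncated reconstruction stays close to $X$ while the naive one is swamped by inverted noise, mirroring Figures 1.3 and 1.5.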


### Cofactor Expansion: Cramer s Rule

Cofactor Expansion: Cramer s Rule MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Introduction Today we will focus on developing: an efficient method for calculating

### 4. MATRICES Matrices

4. MATRICES 170 4. Matrices 4.1. Definitions. Definition 4.1.1. A matrix is a rectangular array of numbers. A matrix with m rows and n columns is said to have dimension m n and may be represented as follows:

### Lecture No. # 02 Prologue-Part 2

Advanced Matrix Theory and Linear Algebra for Engineers Prof. R.Vittal Rao Center for Electronics Design and Technology Indian Institute of Science, Bangalore Lecture No. # 02 Prologue-Part 2 In the last

### Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

### Definition A square matrix M is invertible (or nonsingular) if there exists a matrix M 1 such that

0. Inverse Matrix Definition A square matrix M is invertible (or nonsingular) if there exists a matrix M such that M M = I = M M. Inverse of a 2 2 Matrix Let M and N be the matrices: a b d b M =, N = c

### Lecture 5 Least-squares

EE263 Autumn 2007-08 Stephen Boyd Lecture 5 Least-squares least-squares (approximate) solution of overdetermined equations projection and orthogonality principle least-squares estimation BLUE property

### Linear Systems. Singular and Nonsingular Matrices. Find x 1, x 2, x 3 such that the following three equations hold:

Linear Systems Example: Find x, x, x such that the following three equations hold: x + x + x = 4x + x + x = x + x + x = 6 We can write this using matrix-vector notation as 4 {{ A x x x {{ x = 6 {{ b General

### Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

### Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

### 2.1: MATRIX OPERATIONS

.: MATRIX OPERATIONS What are diagonal entries and the main diagonal of a matrix? What is a diagonal matrix? When are matrices equal? Scalar Multiplication 45 Matrix Addition Theorem (pg 0) Let A, B, and

### Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 The Singular Value Decomposition (SVD) Linear Algebra Methods for Data Mining, Spring 2007, University of

### Convolution. 1D Formula: 2D Formula: Example on the web: http://www.jhu.edu/~signals/convolve/

Basic Filters (7) Convolution/correlation/Linear filtering Gaussian filters Smoothing and noise reduction First derivatives of Gaussian Second derivative of Gaussian: Laplacian Oriented Gaussian filters

### Lecture 6. Inverse of Matrix

Lecture 6 Inverse of Matrix Recall that any linear system can be written as a matrix equation In one dimension case, ie, A is 1 1, then can be easily solved as A x b Ax b x b A 1 A b A 1 b provided that

### Image Manipulation in MATLAB Due 11/1 at 5:00 PM

Image Manipulation in MATLAB Due 11/1 at 5:00 PM 1 Introduction Digital images are just matrices of pixels, and any type of matrix operation can be applied to a matrix containing image data. In this project

### Lecture 7: Singular Value Decomposition

0368-3248-01-Algorithms in Data Mining Fall 2013 Lecturer: Edo Liberty Lecture 7: Singular Value Decomposition Warning: This note may contain typos and other inaccuracies which are usually discussed during

### Numerical Linear Algebra Chap. 4: Perturbation and Regularisation

Numerical Linear Algebra Chap. 4: Perturbation and Regularisation Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation TUHH Heinrich Voss Numerical Linear

### The Second Undergraduate Level Course in Linear Algebra

The Second Undergraduate Level Course in Linear Algebra University of Massachusetts Dartmouth Joint Mathematics Meetings New Orleans, January 7, 11 Outline of Talk Linear Algebra Curriculum Study Group

### Similar matrices and Jordan form

Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

### R mp nq. (13.1) a m1 B a mn B

This electronic version is for personal use and may not be duplicated or distributed Chapter 13 Kronecker Products 131 Definition and Examples Definition 131 Let A R m n, B R p q Then the Kronecker product

### MATH 105: Finite Mathematics 2-6: The Inverse of a Matrix

MATH 05: Finite Mathematics 2-6: The Inverse of a Matrix Prof. Jonathan Duncan Walla Walla College Winter Quarter, 2006 Outline Solving a Matrix Equation 2 The Inverse of a Matrix 3 Solving Systems of

### Chapter 8. Matrices II: inverses. 8.1 What is an inverse?

Chapter 8 Matrices II: inverses We have learnt how to add subtract and multiply matrices but we have not defined division. The reason is that in general it cannot always be defined. In this chapter, we

### We know a formula for and some properties of the determinant. Now we see how the determinant can be used.

Cramer s rule, inverse matrix, and volume We know a formula for and some properties of the determinant. Now we see how the determinant can be used. Formula for A We know: a b d b =. c d ad bc c a Can we

### ADVANCED LINEAR ALGEBRA FOR ENGINEERS WITH MATLAB. Sohail A. Dianat. Rochester Institute of Technology, New York, U.S.A. Eli S.

ADVANCED LINEAR ALGEBRA FOR ENGINEERS WITH MATLAB Sohail A. Dianat Rochester Institute of Technology, New York, U.S.A. Eli S. Saber Rochester Institute of Technology, New York, U.S.A. (g) CRC Press Taylor

### Picture Perfect: The Mathematics of JPEG Compression

Picture Perfect: The Mathematics of JPEG Compression May 19, 2011 1 2 3 in 2D Sampling and the DCT in 2D 2D Compression Images Outline A typical color image might be 600 by 800 pixels. Images Outline A

### Lecture 23: The Inverse of a Matrix

Lecture 23: The Inverse of a Matrix Winfried Just, Ohio University March 9, 2016 The definition of the matrix inverse Let A be an n n square matrix. The inverse of A is an n n matrix A 1 such that A 1

### 1 Orthogonal projections and the approximation

Math 1512 Fall 2010 Notes on least squares approximation Given n data points (x 1, y 1 ),..., (x n, y n ), we would like to find the line L, with an equation of the form y = mx + b, which is the best fit

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 4. Gaussian Elimination In this part, our focus will be on the most basic method for solving linear algebraic systems, known as Gaussian Elimination in honor

### Linear Algebraic Equations, SVD, and the Pseudo-Inverse

Linear Algebraic Equations, SVD, and the Pseudo-Inverse Philip N. Sabes October, 21 1 A Little Background 1.1 Singular values and matrix inversion For non-smmetric matrices, the eigenvalues and singular

### Section 1.1. Introduction to R n

The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

### Linear Systems COS 323

Linear Systems COS 323 Last time: Constrained Optimization Linear constrained optimization Linear programming (LP) Simplex method for LP General optimization With equality constraints: Lagrange multipliers

### Camera Resolution Explained

Camera Resolution Explained FEBRUARY 17, 2015 BY NASIM MANSUROV Although the megapixel race has been going on since digital cameras had been invented, the last few years in particular have seen a huge

### The Classical Linear Regression Model

The Classical Linear Regression Model 1 September 2004 A. A brief review of some basic concepts associated with vector random variables Let y denote an n x l vector of random variables, i.e., y = (y 1,

### Analysis of Bayesian Dynamic Linear Models

Analysis of Bayesian Dynamic Linear Models Emily M. Casleton December 17, 2010 1 Introduction The main purpose of this project is to explore the Bayesian analysis of Dynamic Linear Models (DLMs). The main

### Matrix Algebra LECTURE 1. Simultaneous Equations Consider a system of m linear equations in n unknowns: y 1 = a 11 x 1 + a 12 x 2 + +a 1n x n,

LECTURE 1 Matrix Algebra Simultaneous Equations Consider a system of m linear equations in n unknowns: y 1 a 11 x 1 + a 12 x 2 + +a 1n x n, (1) y 2 a 21 x 1 + a 22 x 2 + +a 2n x n, y m a m1 x 1 +a m2 x

### Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they

### The Solution of Linear Simultaneous Equations

Appendix A The Solution of Linear Simultaneous Equations Circuit analysis frequently involves the solution of linear simultaneous equations. Our purpose here is to review the use of determinants to solve

### Matrix Algebra and Applications

Matrix Algebra and Applications Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Matrix Algebra and Applications 1 / 49 EC2040 Topic 2 - Matrices and Matrix Algebra Reading 1 Chapters

### Direct Methods for Solving Linear Systems. Linear Systems of Equations

Direct Methods for Solving Linear Systems Linear Systems of Equations Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University

### Similarity and Diagonalization. Similar Matrices

MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

### Introduction to Matrices for Engineers

Introduction to Matrices for Engineers C.T.J. Dodson, School of Mathematics, Manchester Universit 1 What is a Matrix? A matrix is a rectangular arra of elements, usuall numbers, e.g. 1 0-8 4 0-1 1 0 11

### I. Solving Linear Programs by the Simplex Method

Optimization Methods Draft of August 26, 2005 I. Solving Linear Programs by the Simplex Method Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston,

### Lecture 21: The Inverse of a Matrix

Lecture 21: The Inverse of a Matrix Winfried Just, Ohio University October 16, 2015 Review: Our chemical reaction system Recall our chemical reaction system A + 2B 2C A + B D A + 2C 2D B + D 2C If we write

### 1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### Overview. Essential Questions. Precalculus, Quarter 3, Unit 3.4 Arithmetic Operations With Matrices

Arithmetic Operations With Matrices Overview Number of instruction days: 6 8 (1 day = 53 minutes) Content to Be Learned Use matrices to represent and manipulate data. Perform arithmetic operations with

### Mehtap Ergüven Abstract of Ph.D. Dissertation for the degree of PhD of Engineering in Informatics

INTERNATIONAL BLACK SEA UNIVERSITY COMPUTER TECHNOLOGIES AND ENGINEERING FACULTY ELABORATION OF AN ALGORITHM OF DETECTING TESTS DIMENSIONALITY Mehtap Ergüven Abstract of Ph.D. Dissertation for the degree

### Component Ordering in Independent Component Analysis Based on Data Power

Component Ordering in Independent Component Analysis Based on Data Power Anne Hendrikse Raymond Veldhuis University of Twente University of Twente Fac. EEMCS, Signals and Systems Group Fac. EEMCS, Signals

### 9 Matrices, determinants, inverse matrix, Cramer s Rule

AAC - Business Mathematics I Lecture #9, December 15, 2007 Katarína Kálovcová 9 Matrices, determinants, inverse matrix, Cramer s Rule Basic properties of matrices: Example: Addition properties: Associative:

### Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Pearson Education, Inc.

2 Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Theorem 8: Let A be a square matrix. Then the following statements are equivalent. That is, for a given A, the statements are either all true

### Equations, Inequalities & Partial Fractions

Contents Equations, Inequalities & Partial Fractions.1 Solving Linear Equations 2.2 Solving Quadratic Equations 1. Solving Polynomial Equations 1.4 Solving Simultaneous Linear Equations 42.5 Solving Inequalities