NMR Measurement of T1-T2 Spectra with Partial Measurements using Compressive Sensing


1 NMR Measurement of T1-T2 Spectra with Partial Measurements using Compressive Sensing
Alex Cloninger, Norbert Wiener Center, Department of Mathematics, University of Maryland, College Park
2 Outline 1 Basics of Nuclear Magnetic Resonance 2 3 Using Compressive Sensing to Speed Up Data Collection 4
4 Basics of Nuclear Magnetic Resonance
We will consider multidimensional correlations as measured by NMR, specifically the T1-T2 relaxation properties. Relaxation measures how fast magnetic spins forget their orientation and return to equilibrium:
T1 is the decay constant for the z-component of the nuclear spin.
T2 is the decay constant for the xy-component, perpendicular to the magnetic field.
Analysis of multidimensional correlations requires a multidimensional inverse Laplace transform. Due to the ill-conditioning of the 2D ILT, a large number of measurements is required. The problem is that acquiring a T1-T2 map is incredibly slow.
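As a toy illustration of the two decay constants (the time scale and T1, T2 values below are made up), the two relaxation processes can be written directly:

```python
import numpy as np

# Illustrative decay constants (seconds); values are made up.
T1, T2 = 1.2, 0.3
t = np.linspace(0.0, 5.0, 500)

# Inversion-recovery of the z-component: starts at -1, relaxes to +1.
Mz = 1.0 - 2.0 * np.exp(-t / T1)

# Decay of the transverse (xy) component: starts at 1, relaxes to 0.
Mxy = np.exp(-t / T2)

print(round(Mz[0], 3), round(Mxy[0], 3))  # -1.0 1.0
```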
5 Spin-Echo Pulse Figure: Animation of Spin-Echo Pulse
6 T1-T2 Correlations
7 Math Behind NMR
Echo measurements are related to T1-T2 correlations via a Laplace transform:
  M(τ_1, τ_2) = ∫∫ (1 − 2 e^{−τ_1/T_1}) e^{−τ_2/T_2} F(T_1, T_2) dT_1 dT_2 + E(τ_1, τ_2).
We'll consider the more general 2D Fredholm integral
  M(τ_1, τ_2) = ∫∫ k_1(τ_1, T_1) k_2(τ_2, T_2) F(T_1, T_2) dT_1 dT_2 + E(τ_1, τ_2),
where E(τ_1, τ_2) ∼ N(0, ɛ). Discretize to M = K_1 F K_2^T + E.
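A sketch of this discretization with made-up acquisition grids (all sizes, ranges, and the single-peak density below are illustrative, not from the talk):

```python
import numpy as np

# Hypothetical acquisition grids: linear delays, log-spaced relaxation times.
tau1 = np.linspace(0.01, 3.0, 50)    # inversion-recovery delays (s)
tau2 = np.linspace(0.001, 1.0, 60)   # echo times (s)
T1 = np.logspace(-2, 1, 30)          # T1 grid (s)
T2 = np.logspace(-3, 0, 40)          # T2 grid (s)

# Kernels: k1 encodes T1 recovery, k2 encodes T2 decay.
K1 = 1.0 - 2.0 * np.exp(-tau1[:, None] / T1[None, :])   # shape (50, 30)
K2 = np.exp(-tau2[:, None] / T2[None, :])               # shape (60, 40)

# Synthetic T1-T2 density with one peak, plus Gaussian noise E.
F = np.zeros((30, 40))
F[15, 20] = 1.0
rng = np.random.default_rng(0)
M = K1 @ F @ K2.T + 1e-4 * rng.standard_normal((50, 60))
print(M.shape)  # (50, 60)
```

For a single-spike F, the noiseless data is just the outer product of one column of each kernel, which makes the model easy to sanity-check.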
8 Problems with the Inverse Laplace Transform
k_1(τ_1, T_1) and k_2(τ_2, T_2) are smooth and continuous, which means K_1 and K_2 are ill-conditioned. This makes inversion difficult.
10 Algorithm for Approximation of F(T_1, T_2)
Definition (Objective Function for Recovery): we wish to find the minimizer of
  min_{F ≥ 0} ||M − K_1 F K_2^T||^2 + α ||F||^2.
Because the inversion is unstable, the α||F||^2 term smooths the solution. This is called Tikhonov regularization with parameter α. It is possible to determine a choice of α that minimizes bias, but it is computationally unwieldy.
11 Outline of Pre-existing Algorithm
1. Reduce the dimension of the problem in order to make the minimization manageable.
2. Holding α fixed, solve the Tikhonov regularization problem.
3. Having solved the minimization problem, determine a more optimal α to use in the regularization problem.
4. Loop between Steps 2 and 3 until convergence upon the optimal α.
12 Step 1: Dimension Reduction
Assume M is an N_1 × N_2 data matrix. To reduce the dimension of the problem, take the truncated SVDs
  K_1 = U_1 Σ_1 V_1^T,  K_2 = U_2 Σ_2 V_2^T,
where Σ_1 ∈ R^{s_1 × s_1} and Σ_2 ∈ R^{s_2 × s_2}. Choose s_1, s_2 to account for 99% of the energy. Because of the fast decay of the kernels, s_1 ≪ N_1 and s_2 ≪ N_2.
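A sketch of the truncation rule (the 99% energy threshold is from the slide; the grids and the helper name `truncate_kernel` are mine):

```python
import numpy as np

def truncate_kernel(K, energy=0.99):
    """Truncated SVD of a kernel matrix, keeping just enough singular
    values to capture the requested fraction of the squared energy."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r], np.diag(s[:r]), Vt[:r, :]

# Laplace-type kernels decay fast, so the kept rank is tiny compared to N.
tau = np.linspace(0.001, 1.0, 200)
T2 = np.logspace(-3, 0, 100)
K2 = np.exp(-tau[:, None] / T2[None, :])
U2, S2, V2t = truncate_kernel(K2)
print(U2.shape[1])   # far fewer than 200 components
```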
13 Step 1: Dimension Reduction
The objective function changes as
  min_{F ≥ 0} ||U_1^T (M − K_1 F K_2^T) U_2||^2 + α ||F||^2
  = min_{F ≥ 0} ||U_1^T M U_2 − Σ_1 V_1^T F V_2 Σ_2||^2 + α ||F||^2
  = min_{F ≥ 0} ||M̃ − K̃_1 F K̃_2^T||^2 + α ||F||^2,
where K̃_i = Σ_i V_i^T and M̃ = U_1^T M U_2 ∈ R^{s_1 × s_2}. Note that the minimization now depends on the data only through M̃.
14 Step 2: Tikhonov Regularization
We now need to solve the minimization with the compressed data M̃:
  min_{F ≥ 0} ||M̃ − K̃_1 F K̃_2^T||^2 + α ||F||^2.
Reform the problem into an unconstrained optimization over c ∈ R^{s_1 s_2} by writing F = max(0, (K̃_1 ⊗ K̃_2)^T c), and solve the new unconstrained problem using Newton's method.
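As a minimal sketch of the fixed-α solve (sizes are hypothetical, and the F ≥ 0 constraint that the talk handles via Newton's method is dropped here), the compressed Tikhonov problem can be solved in vectorized form using vec(K̃_1 F K̃_2^T) = (K̃_1 ⊗ K̃_2) vec(F) for row-major vec:

```python
import numpy as np

def tikhonov_step(K1t, K2t, Mt, alpha):
    """Solve min_F ||Mt - K1t F K2t^T||_F^2 + alpha ||F||_F^2 for fixed
    alpha, ignoring the nonnegativity constraint (simplified sketch)."""
    K = np.kron(K1t, K2t)        # kernels are compressed, so this stays small
    m = Mt.reshape(-1)           # row-major vec matches np.kron's ordering
    f = np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ m)
    return f.reshape(K1t.shape[1], K2t.shape[1])

# Tiny demo with random stand-ins for the compressed kernels.
rng = np.random.default_rng(0)
K1t, K2t = rng.standard_normal((4, 10)), rng.standard_normal((5, 12))
F_true = np.abs(rng.standard_normal((10, 12)))
Mt = K1t @ F_true @ K2t.T
F_hat = tikhonov_step(K1t, K2t, Mt, alpha=1e-8)
```

Because the compressed problem is underdetermined, F_hat fits the data without matching F_true entrywise; the regularizer picks the small-norm solution.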
15 Step 3: Choosing the Optimal Alpha
There are multiple ways to choose the optimal α:
1. Choose the new α to be α_new = √(s_1 s_2) / ||c||.
2. After determining F_α for fixed α, calculate the fit error
  χ(α) = ||M − K_1 F_α K_2^T||_F / √(N_1 N_2 − 1),
which is the standard deviation of the noise in the reconstruction, and choose α such that d log χ / d log α = 0.1.
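A sketch of the second selection rule (the 0.1 slope target is from the slide; the function names are mine, and the inner solve is the simplified unconstrained Tikhonov step rather than the talk's Newton method):

```python
import numpy as np

def solve_fixed_alpha(K, m, alpha):
    # Simplified Tikhonov solve; the nonnegativity constraint is dropped.
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ m)

def pick_alpha(K, m, n_data, alphas, target_slope=0.1):
    """Scan a grid of alphas, compute the fit error chi(alpha), and pick
    the alpha whose local slope d log(chi)/d log(alpha) is closest to
    the target slope."""
    chis = []
    for a in alphas:
        r = K @ solve_fixed_alpha(K, m, a) - m
        chis.append(np.linalg.norm(r) / np.sqrt(n_data - 1))
    slopes = np.gradient(np.log(chis), np.log(alphas))
    return alphas[int(np.argmin(np.abs(slopes - target_slope)))]

# Demo on a small noisy least-squares problem.
rng = np.random.default_rng(1)
K = rng.standard_normal((40, 25))
f_true = np.abs(rng.standard_normal(25))
m = K @ f_true + 0.05 * rng.standard_normal(40)
grid = np.logspace(-6, 2, 40)
alpha = pick_alpha(K, m, n_data=40, alphas=grid)
```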
17 Intuition Behind Compressive Sensing
We collected N_1 N_2 data points in M, then compressed them to the s_1 × s_2 matrix M̃. In practice, s_1 s_2 ≈ 1% of N_1 N_2.
Key Question: why collect large amounts of data only to throw away over 99% of it? At the end of the day, only M̃ matters in the minimization problem. If we had a way of measuring M̃ directly, the speedup would be massive.
18 Basic Ideas
Since post-sensing compression is inefficient, why not compress while sensing to boost efficiency? The key is that observations in traditional sensing are of the form M_{i,j} = ⟨M, δ_{i,j}⟩. Question: which functions should replace δ_{i,j} in order to minimize the number of samples needed for reconstruction? Answer: they should not match image structures; they should mimic the behavior of random noise.
19 Incoherent Measurements of M̃
Remember that M̃ = U_1^T M U_2, so
  M̃_{i,j} = u_i^T M v_j = ⟨M, u_i v_j^T⟩,
where u_i is the i-th column of U_1 and v_j is the j-th column of U_2. By the structure of U_1 and U_2, the vectors u_i and v_j are very incoherent; incoherent basically means the energy is spread out over all elements rather than concentrated in a few. Because of this, it may be possible to observe a small number of entries and recover M̃. Even if the number of measurements is greater than the size of M̃, it could still be much less than N_1 N_2.
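A quick numerical check (with random stand-ins for U_1, U_2, and M; all sizes are illustrative) that a single entry of M̃ equals one rank-one inner product against the raw data:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 6))                     # stand-in for raw data
U1 = np.linalg.svd(rng.standard_normal((8, 8)))[0]  # random orthogonal matrix
U2 = np.linalg.svd(rng.standard_normal((6, 6)))[0]
Mt = U1.T @ M @ U2                                  # compressed data

# One measurement: the (i, j) entry of Mt is <M, u_i v_j^T>.
i, j = 3, 4
u_i, v_j = U1[:, i], U2[:, j]
measurement = u_i @ M @ v_j
print(np.isclose(measurement, Mt[i, j]))  # True
```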
20 Random Sensing
Consider observing a small number of elements of M̃ at random. We will call the set of observed elements Ω. Fewer measurements directly reduce the time spent collecting data. The problem is using these measurements to reconstruct M̃.
21 Singular Values and Rank of M̃
Measurements of M̃ are underdetermined and noisy, so we need a way to bring in more a priori knowledge of M̃. Because K_1 and K_2 are ill-conditioned, the singular values of M̃ decay quickly, and M̃ can be closely approximated by only the first r ≪ min(s_1, s_2) singular values. It is possible to recover M̃ as the solution to
  arg min_X rank(X)  such that  u_i^T X v_j = M̃_{i,j}, (i, j) ∈ Ω.
22 Problems and Reformulation
Two problems:
1. Rank minimization is computationally infeasible; we relax it using the nuclear norm.
Definition (Nuclear Norm): let σ_i(M) be the i-th largest singular value of M. If rank(M) = r, then
  ||M||_* := Σ_{i=1}^{r} σ_i(M).
2. The measurements aren't perfect (there is noise).
Definition (Sampling Operator): for a set of measurements {φ_i}_{i ∈ J} and Ω ⊆ J, the sampling operator is
  R_Ω : C^{s_1 × s_2} → C^m,  (R_Ω(X))_j = ⟨φ_j, X⟩, j ∈ Ω.
23 Problems and Reformulation
Recovery Algorithm: let y = R_Ω(M̃) + e, where ||e||_2 < ɛ. We wish to recover M̃ by solving
  arg min_{X ∈ R^{s_1 × s_2}} ||X||_*  such that  ||R_Ω(X) − y||_2 < ɛ.   (P*)
We need to establish that (P*) with these measurements will recover M̃.
Definition: a Parseval tight frame for a d-dimensional Hilbert space H is a collection of elements {φ_j}_{j ∈ J} ⊂ H such that
  Σ_{j ∈ J} |⟨f, φ_j⟩|^2 = ||f||^2 for all f ∈ H.   (1)
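As a generic sketch of nuclear-norm recovery in the spirit of (P*), the following solves an entrywise-sampled toy problem by singular value thresholding (proximal gradient on a penalized form); the solver, parameters, and problem sizes are my choices, not the talk's exact algorithm:

```python
import numpy as np

def svt_complete(Y, mask, tau=0.05, step=1.0, iters=500):
    """Recover a low-rank matrix from observed entries by proximal
    gradient on 0.5 ||P(X) - Y||_F^2 + tau ||X||_* (SVT sketch)."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (X - Y)   # gradient of the data-fidelity term
        U, s, Vt = np.linalg.svd(X - step * grad, full_matrices=False)
        X = U @ np.diag(np.maximum(s - step * tau, 0.0)) @ Vt
    return X

# Demo: rank-2 matrix, roughly 60% of entries observed at random.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random(A.shape) < 0.6
X_hat = svt_complete(mask * A, mask)
```

The soft-threshold on singular values is exactly the proximal operator of the nuclear norm, which is what makes the relaxation of rank minimization tractable.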
24 Reconstruction and Noise Bounds
Because u_i and v_j are columns of U_1 and U_2 (whose columns are orthonormal), these measurements form a tight frame.
Definition: a Fourier-type Parseval tight frame with incoherence ν is a Parseval tight frame {φ_j}_{j ∈ J} on C^{d × d} with operator norm satisfying
  ||φ_j||^2 ≤ ν d / |J|,  j ∈ J.   (2)
This definition was introduced by David Gross for orthonormal bases. It can be thought of as the singular values being bounded in the L^∞ norm.
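The Parseval identity (1) holds exactly for the rank-one system {u_i v_j^T} built from orthogonal U_1, U_2, which can be checked numerically (random matrices with assumed sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
U1 = np.linalg.svd(rng.standard_normal((5, 5)))[0]  # random orthogonal
U2 = np.linalg.svd(rng.standard_normal((4, 4)))[0]

X = rng.standard_normal((5, 4))
coeffs = U1.T @ X @ U2   # all inner products <X, u_i v_j^T> at once
print(np.isclose(np.sum(coeffs**2), np.linalg.norm(X)**2))  # True
```

This works because orthogonal changes of basis preserve the Frobenius norm, so the squared frame coefficients sum exactly to ||X||_F^2.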
25 Theorem (Cloninger)
Let the measurements in (P*) be a Fourier-type Parseval tight frame of size |J|, and denote the true matrix of interest by M_0. If the number of measurements m satisfies
  m ≥ C ν n r log^5(n) log|J|,
where ν measures the incoherence of the measurements and n = max(s_1, s_2), then the result of (P*), denoted M̂, will satisfy
  ||M_0 − M̂||_F ≤ C_1 ||M_0 − M_{0,r}||_* / √r + C_2 √(|J|/m) ɛ,
where M_{0,r} is the best rank-r approximation of M_0.
26 Noise in Practice
Keep in mind, ɛ measures the noise between the ideal matrix M_0 and the compressed data M̃:
  ||M_0 − M̃||_F ≤ ɛ.
C_2 is a very small constant, and in practice N_1 N_2 / m ≥ 5.
27 Proof of Theorem
Definition: let A : R^{s_1 × s_2} → R^m be a linear operator of measurements. A satisfies the restricted isometry property (RIP) of order r if there exists some small δ_r such that
  (1 − δ_r) ||X||_F ≤ ||A(X)||_2 ≤ (1 + δ_r) ||X||_F
for all matrices X of rank at most r.
Theorem (Candès, Fazel): let X_0 be an arbitrary matrix in C^{m × n} and assume δ_{5r} < 1/10. If the measurements satisfy the RIP, then the X̂ obtained from solving (P*) obeys
  ||X̂ − X_0||_F ≤ C_0 ||X_0 − X_{0,r}||_* / √r + C_1 ɛ,
where C_0, C_1 are small constants depending only on the isometry constant.
28 Proof of Theorem
Lemma: let R_Ω be defined as before and fix some 0 < δ < 1. Let m satisfy
  m ≥ C ν r n log^5(n) log|J|,   (3)
where C depends only on δ like C = O(1/δ^2). Then, with high probability, √(|J|/m) R_Ω satisfies the RIP of rank r with isometry constant δ. Furthermore, the probability of failure is exponentially small in δ^2 C.
Since √(|J|/m) R_Ω satisfies the RIP, we can reformulate (P*) as
  arg min_X ||X||_*  such that  ||√(|J|/m) R_Ω(X) − √(|J|/m) y||_2 < √(|J|/m) ɛ.
29 Outline of Proof
We can restate the RIP via
  ɛ_r(A) = sup_{X ∈ U_2} |⟨X, (A^*A − I)X⟩| ≤ 2δ + δ^2,
where U_2 = {X ∈ C^{s_1 × s_2} : ||X||_F ≤ 1, ||X||_* ≤ √r ||X||_F}.
Notation:
  Norm: ||M||_{(r)} = sup_{X ∈ U_2} |⟨X, MX⟩|
  Simplified terms: A = √(|J|/m) R_Ω
A symmetrization argument then bounds E ɛ_r(A) = E ||A^*A − I||_{(r)} by the expected ||·||_{(r)}-norm of the Rademacher sum (|J|/m) Σ_{i ∈ Ω} ɛ_i φ_i ⊗ φ_i.
30 Outline of Proof
Lemma: let {V_i}_{i=1}^m ⊂ C^{s_1 × s_2} have uniformly bounded norm ||V_i|| ≤ K. Let n = max(s_1, s_2) and let {ɛ_i}_{i=1}^m be i.i.d. uniform ±1 random variables. Then
  E_ɛ || Σ_{i=1}^m ɛ_i V_i ⊗ V_i ||_{(r)} ≤ C_1 || Σ_{i=1}^m V_i ⊗ V_i ||_{(r)}^{1/2},
where C_1 = C_0 √r K log^{5/2}(n) log^{1/2}(m) and C_0 is a universal constant.
For our purposes, V_i = √(|J|/n) φ_i. Then
  E ɛ_r(A) ≤ 2C_1 √(n/m) (E ||A^*A||_{(r)})^{1/2} ≤ 2C_1 √(n/m) (E ɛ_r(A) + 1)^{1/2}.
31 Outline of Proof
Fix some λ ≥ 1 and choose
  m ≥ C λ ν r n log^5(n) log|J| ≥ λ n (2C_1)^2.
This makes E ɛ_r(A) ≤ 1/√λ + 1/λ. We can now use a result by Talagrand.
Theorem: let {Y_i}_{i=1}^m be independent symmetric random variables on some Banach space such that ||Y_i|| ≤ R, and let Y = ||Σ_{i=1}^m Y_i||. Then for any integers l ≤ q and any t > 0,
  Pr(Y ≥ 8q E[Y] + 2Rl + t E[Y]) ≤ (K/q)^l + 2 e^{−t^2/(256 q)},   (4)
where K is a universal constant.
Use the theorem to establish Pr(ɛ_r(A) ≥ ɛ) ≤ e^{−C ɛ^2 λ}.
32 Revised Algorithm
1. Randomly subsample the data matrix M at indices Ω.
2. Reconstruct M̃ by solving
  arg min_X ||X||_*  such that  ||R_Ω(X) − y||_2 < ɛ.
3. Holding α fixed, solve the Tikhonov regularization problem.
4. Having solved the minimization problem, determine a more optimal α to use in the regularization problem.
5. Loop between Steps 3 and 4 until convergence upon the optimal α.
34 Recovery from Small Number of Entries
35 Error Analysis
36 Simple T1-T2 Map with 30 dB SNR Figure: (TL) T1-T2 Map, (TR) Original Algorithm, (BL) 30% Measurements, (BR) 10% Measurements
37 Simple T1-T2 Map with 15 dB SNR Figure: (TL) T1-T2 Map, (TR) Original Algorithm, (BL) 30% Measurements, (BR) 10% Measurements
38 Correlated T1-T2 Map with 30 dB SNR Figure: (TL) T1-T2 Map, (TR) Original Algorithm, (BL) 30% Measurements, (BR) 10% Measurements
39 Two Peak T1-T2 Map with 35 dB SNR Figure: (TL) Original Algorithm, (TR) 20% Measurements, (BL) 1D ILT of T1, (BR) 1D ILT of T2
40 Two Peak T1-T2 Map with 35 dB SNR Figure: (TL) T1-T2 Map, (TR) Original Algorithm, (BL) 30% Measurements, (BR) 20% Measurements
41 Two Peak Skewing
Keeping only the largest singular values causes fattening of the peaks. This is necessary for the computational efficiency of any 2D ILT algorithm. It can be shown by taking the 1D ILT of M, of U_1 U_1^T M U_2 U_2^T, and of the reconstructions of M using CS.
42 Preliminary Real Data Figure: (L) Original Algorithm, (R) 20% Measurements
43 Conclusion
We can recover the measurement matrix M̃ almost perfectly. While small noise is added in the compressive sensing reconstruction, it is dwarfed by algorithmic errors. The next step is to implement the method on the machine and run thorough testing.
Linear Codes Linear Codes In the V[n,q] setting, an important class of codes are the linear codes, these codes are the ones whose code words form a subvector space of V[n,q]. If the subspace of V[n,q]
More informationReference: Introduction to Partial Differential Equations by G. Folland, 1995, Chap. 3.
5 Potential Theory Reference: Introduction to Partial Differential Equations by G. Folland, 995, Chap. 3. 5. Problems of Interest. In what follows, we consider Ω an open, bounded subset of R n with C 2
More informationIntroduction to Matrix Algebra I
Appendix A Introduction to Matrix Algebra I Today we will begin the course with a discussion of matrix algebra. Why are we studying this? We will use matrix algebra to derive the linear regression model
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationStable Signal Recovery from Incomplete and Inaccurate Measurements
Stable Signal Recovery from Incomplete and Inaccurate Measurements EMMANUEL J. CANDÈS California Institute of Technology JUSTIN K. ROMBERG California Institute of Technology AND TERENCE TAO University
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
More informationBindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8
Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e
More informationx1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.
Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability
More informationFIELDSMITACS Conference. on the Mathematics of Medical Imaging. Photoacoustic and Thermoacoustic Tomography with a variable sound speed
FIELDSMITACS Conference on the Mathematics of Medical Imaging Photoacoustic and Thermoacoustic Tomography with a variable sound speed Gunther Uhlmann UC Irvine & University of Washington Toronto, Canada,
More informationThe van Hoeij Algorithm for Factoring Polynomials
The van Hoeij Algorithm for Factoring Polynomials Jürgen Klüners Abstract In this survey we report about a new algorithm for factoring polynomials due to Mark van Hoeij. The main idea is that the combinatorial
More informationInner products on R n, and more
Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +
More informationInfluences in lowdegree polynomials
Influences in lowdegree polynomials Artūrs Bačkurs December 12, 2012 1 Introduction In 3] it is conjectured that every bounded real polynomial has a highly influential variable The conjecture is known
More information( ) which must be a vector
MATH 37 Linear Transformations from Rn to Rm Dr. Neal, WKU Let T : R n R m be a function which maps vectors from R n to R m. Then T is called a linear transformation if the following two properties are
More information(a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular.
Theorem.7.: (Properties of Triangular Matrices) (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular. (b) The product
More informationIntroduction to Online Learning Theory
Introduction to Online Learning Theory Wojciech Kot lowski Institute of Computing Science, Poznań University of Technology IDSS, 04.06.2013 1 / 53 Outline 1 Example: Online (Stochastic) Gradient Descent
More informationThe Dantzig selector: statistical estimation when p is much larger than n
The Dantzig selector: statistical estimation when p is much larger than n Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University
More informationPETER G. CASAZZA AND GITTA KUTYNIOK
A GENERALIZATION OF GRAM SCHMIDT ORTHOGONALIZATION GENERATING ALL PARSEVAL FRAMES PETER G. CASAZZA AND GITTA KUTYNIOK Abstract. Given an arbitrary finite sequence of vectors in a finite dimensional Hilbert
More informationEigenvalues and eigenvectors of a matrix
Eigenvalues and eigenvectors of a matrix Definition: If A is an n n matrix and there exists a real number λ and a nonzero column vector V such that AV = λv then λ is called an eigenvalue of A and V is
More informationWhen is missing data recoverable?
When is missing data recoverable? Yin Zhang CAAM Technical Report TR0615 Department of Computational and Applied Mathematics Rice University, Houston, TX 77005 October, 2006 Abstract Suppose a nonrandom
More information16.3 Fredholm Operators
Lectures 16 and 17 16.3 Fredholm Operators A nice way to think about compact operators is to show that set of compact operators is the closure of the set of finite rank operator in operator norm. In this
More information