Sparse recovery and compressed sensing in inverse problems
1 Sparse recovery and compressed sensing in inverse problems. Gerd Teschke (joint work with Evelyn Herrholz), Institute for Computational Mathematics in Science and Technology, Neubrandenburg University of Applied Sciences, Germany, and Konrad-Zuse-Institute Berlin (ZIB), Germany. June 7, 2010
2 Outline: 1 Motivation, 2 Sparse recovery for inverse problems, 3 Compressively sensed data and recovery, 4 Ill-posed sensing model and sparse recovery, 5 Numerical experiment, 6 Short summary
3 Motivation
4 Motivation
5 Motivation. References: E. Candès, The restricted isometry property and its implications for compressed sensing, Académie des sciences; E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory, 51(12); Y. Eldar, Compressed sensing of analog signals in shift invariant space, IEEE Trans. on Signal Processing, 57(8); I. Daubechies, M. Fornasier and I. Loris, Accelerated projected gradient method for linear inverse problems with sparsity constraints, J. Fourier Anal. Appl.; G. Teschke and C. Borries, Accelerated projected steepest descent method for nonlinear inverse problems with sparsity constraints, Inverse Problems, 26; E. Herrholz and G. Teschke, A compressed sensing approach for inverse problems, preprint, 2010.
6-10 Motivation: Solve Ax = y or F(x) = y (A, F : X → Y, with X, Y Hilbert spaces). Often only noisy data y^δ with ‖y − y^δ‖ ≤ δ are available. The inverse problem is ill-posed. Often: incomplete data, or data not sampled at an adequate rate. Some a-priori knowledge is required: the solution has a sparse representation / is compressible.
11 Sparse recovery for inverse problems
12-15 Sparse recovery for inverse problems. Tikhonov approach for linear/nonlinear problems: min_x ‖F(x) − y^δ‖² + 2α‖x‖_{ℓ_p}^p. Linear case [Daubechies, Defrise, De Mol 2004]: x^{n+1} = S_α(x^n + γ A*(y − A x^n)). Nonlinear case [Ramlau, T. 2005, 2007]: x^{n+1} = S_α(x^n + γ F'(x^{n+1})*(y − F(x^n))). Many alternatives and improvements (incomplete list): steepest descent, domain decomposition, semi-smooth Newton methods, projection methods [Bredies, Daubechies, Fornasier, Lorenz, Loris, ...]; adaptive iteration [Ramlau, T., Zhariy and Dahlke, Fornasier, Raasch, ...].
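To make the linear-case iteration concrete, here is a minimal NumPy sketch of iterated soft-thresholding; the matrix, data, regularization weight and iteration count are illustrative assumptions, not taken from the talk, and the shrinkage threshold is scaled with the step size as in the usual proximal-gradient formulation.

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft-shrinkage S_t(x) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, alpha, n_iter=500):
    """Iterate x_{n+1} = S_{gamma*alpha}(x_n + gamma * A^T (y - A x_n))."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2      # step size ensuring convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + gamma * A.T @ (y - A @ x), gamma * alpha)
    return x

# illustrative example: recover a sparse vector from noisy linear data
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[5, 37, 120]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.flatnonzero(np.abs(ista(A, y, alpha=0.05)) > 0.1))
```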
16-17 Sparse recovery for inverse problems. Sparsity through projection + acceleration. Problem: the iteration x^{n+1} = S_α(x^n + γ A*(y − A x^n)) tends to overshooting (dynamics discussed by Daubechies, Fornasier, Loris 2008). Alternative (Daubechies et al. 2008; Borries, T. for the nonlinear case): min_{x ∈ B_R} ‖Ax − y‖², where B_R := {x ∈ ℓ_2 : ‖x‖_{ℓ_1} ≤ R}, solved by x^{n+1} = P_R(x^n + γ_n A*(y − A x^n)).
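A sketch of the projected variant follows. The projection P_R onto the ℓ1-ball B_R is computed with the standard sort-and-shrink construction; the fixed step below replaces the adaptive γ_n of the accelerated methods, and all concrete sizes are assumptions for illustration. In this constrained formulation the radius R plays the role of the penalty weight and is typically chosen close to the ℓ1-norm of the sought solution.

```python
import numpy as np

def project_l1_ball(x, R):
    """Euclidean projection of x onto {z : ||z||_1 <= R}."""
    if np.abs(x).sum() <= R:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]                     # magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, x.size + 1) > css - R)[0][-1]
    tau = (css[k] - R) / (k + 1.0)                   # data-dependent shrinkage level
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def projected_gradient(A, y, R, n_iter=300):
    """x_{n+1} = P_R(x_n + gamma * A^T (y - A x_n)), fixed step for simplicity."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x + gamma * A.T @ (y - A @ x), R)
    return x
```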
18-20 Sparse recovery for inverse problems. Vector-valued setup and joint sparsity formulation: min_{x ∈ C_R^{p,q}} ‖y^δ − Ax‖², where C_R^{p,q} = {x ∈ (ℓ_2(Λ))^m : Ψ_{p,q}(x) := Σ_{l=1}^m (Σ_{λ∈Λ} |x_{l,λ}|^p)^{q/p} ≤ R}. Special case p = 2, q = 1: P_R(x) = (S_µ({x_{1,λ}}_{λ∈Λ}), ..., S_µ({x_{m,λ}}_{λ∈Λ})) with S_µ({x_{l,λ}}_{λ∈Λ}) = ({x_{l,λ}}_{λ∈Λ} / ‖{x_{l,λ}}_{λ∈Λ}‖_{ℓ_2(Λ)}) · max(‖{x_{l,λ}}_{λ∈Λ}‖_{ℓ_2(Λ)} − µ, 0).
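For p = 2, q = 1 the projection acts channel-wise by shrinking the ℓ2-norm of each coefficient sequence. A small sketch, with the coefficients stored as a matrix (rows = channels l, columns = indices λ) and the shrinkage level µ determined by bisection so that Ψ_{2,1} meets the radius R; the bisection is one simple choice, not necessarily the one used in the talk.

```python
import numpy as np

def group_shrink(X, mu):
    """S_mu applied to each channel: shrink the l2-norm of every row of X by mu."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(norms - mu, 0.0) / np.maximum(norms, 1e-15)
    return scale * X

def psi_21(X):
    """Psi_{2,1}(X) = sum over channels of the l2-norm over lambda."""
    return np.linalg.norm(X, axis=1).sum()

def project_C21(X, R, tol=1e-8):
    """Projection onto C^{2,1}_R: channel-wise shrinkage with mu chosen so that Psi_{2,1} = R."""
    if psi_21(X) <= R:
        return X.copy()
    lo, hi = 0.0, np.linalg.norm(X, axis=1).max()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        lo, hi = (mu, hi) if psi_21(group_shrink(X, mu)) > R else (lo, mu)
    return group_shrink(X, 0.5 * (lo + hi))
```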
21 Compressively sensed data and recovery
22-27 Compressively sensed data and recovery. Compressed Sensing idea: Suppose there is a signal x, and suppose we measure only a few linear samples y = Ã x. Can we reconstruct x? Yes, if x is sparse in a given incoherent basis or frame, i.e. x = B d, so that y = Ã B d. Here Ã is a p × m matrix with p ≪ m. If d is k-sparse, then p ≥ 2k data points are required (stability: p ≥ c k log(m/p)). The matrix A = ÃB has to fulfill a restricted isometry property.
28 Compressively sensed data and recovery. Definition (restricted isometry property, RIP). For each integer k = 1, 2, ..., define the isometry constant δ_k of a sensing matrix A as the smallest number such that (1 − δ_k)‖d‖²_{ℓ_2} ≤ ‖Ad‖²_{ℓ_2} ≤ (1 + δ_k)‖d‖²_{ℓ_2} holds for all k-sparse vectors d. A vector is said to be k-sparse if it has at most k non-vanishing entries.
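Computing δ_k exactly is combinatorial, but a Monte-Carlo experiment gives a feel for the definition: sample random k-column submatrices and record the worst deviation of their squared singular values from 1. The Gaussian matrix and the trial count below are illustrative assumptions, not part of the talk.

```python
import numpy as np

def estimate_rip_constant(A, k, n_trials=2000, seed=0):
    """Monte-Carlo lower bound on delta_k via extremal singular values of k-column submatrices."""
    rng = np.random.default_rng(seed)
    m = A.shape[1]
    delta = 0.0
    for _ in range(n_trials):
        cols = rng.choice(m, size=k, replace=False)
        s = np.linalg.svd(A[:, cols], compute_uv=False)
        delta = max(delta, s[0] ** 2 - 1.0, 1.0 - s[-1] ** 2)
    return delta

p, m, k = 64, 256, 4
A = np.random.default_rng(1).standard_normal((p, m)) / np.sqrt(p)  # columns have unit norm in expectation
print(estimate_rip_constant(A, k))
```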
29-30 Compressively sensed data and recovery. Optimization problem: min_d ‖d‖_{ℓ_1} subject to y = Ad, or ‖y^δ − Ad‖ ≤ δ (*). (E. Candès 2008) Let A satisfy the RIP with δ_{2k} < √2 − 1 and let d_k denote the best k-term approximation of d. Then the solution d* of (*) obeys ‖d* − d‖_{ℓ_2} ≤ C_0 k^{−1/2} ‖d_k − d‖_{ℓ_1}, or, in case of noisy data y^δ with ‖y^δ − y‖_{ℓ_2} ≤ δ, ‖d* − d‖_{ℓ_2} ≤ C_0 k^{−1/2} ‖d_k − d‖_{ℓ_1} + C_1 δ.
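The equality-constrained problem (*) can be rewritten as a linear program by splitting d = d⁺ − d⁻ with nonnegative parts; below is a hedged sketch using scipy.optimize.linprog. The problem sizes and the sparse test vector are made-up illustrations.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||d||_1 s.t. A d = y, via min 1^T(d_plus + d_minus) s.t. A(d_plus - d_minus) = y, d_plus, d_minus >= 0."""
    p, m = A.shape
    c = np.ones(2 * m)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * m), method="highs")
    return res.x[:m] - res.x[m:]

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
d_true = np.zeros(100)
d_true[[7, 42, 77]] = [1.0, -2.0, 0.5]
print(np.round(basis_pursuit(A, A @ d_true)[[7, 42, 77]], 3))   # ideally recovers 1.0, -2.0, 0.5
```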
31 Ill-posed sensing model and sparse recovery
32-33 Ill-posed sensing model and sparse recovery. Indirect measurement Kx: y = Ã K B d. Consequences: K may be ill-conditioned, and the RIP is usually lost. Question: How to overcome this problem and how to adjust the finite-dimensional CS setting to inverse problems?
34-35 Ill-posed sensing model and sparse recovery (setup by Y. Eldar, 2009). Let X be a Hilbert space and X_m ⊂ X the reconstruction space X_m = {x ∈ X : x = Σ_{l=1}^m Σ_{λ∈Λ} d_{l,λ} a_{l,λ}, d ∈ (ℓ_2(Λ))^m}. Sparse structure with support set I ⊂ {1, ..., m}: X_k = {x ∈ X : x = Σ_{l∈I, |I|=k} Σ_{λ∈Λ} d_{l,λ} a_{l,λ}, d ∈ (ℓ_2(Λ))^m}.
36 Ill-posed sensing model and sparse recovery. Analysis and synthesis operators: T_φ : X → ℓ_2 via x ↦ {⟨x, φ_λ⟩}_{λ∈Λ}, and T_φ* : ℓ_2 → X via x ↦ Σ_{λ∈Λ} x_λ φ_λ.
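In a finite-dimensional discretization the two operators become a matrix of inner products and its transpose; the random frame below only illustrates the definitions and is not one of the frames used later in the talk.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_frame = 8, 12
Phi = rng.standard_normal((n_frame, n))      # rows are the frame elements phi_lambda

def analysis(x):
    """T_phi x = {<x, phi_lambda>}_lambda."""
    return Phi @ x

def synthesis(c):
    """T_phi^* c = sum_lambda c_lambda phi_lambda."""
    return Phi.T @ c

x = rng.standard_normal(n)
# T_phi^* T_phi is the frame operator; for a tight frame it would be a multiple of the identity
print(np.allclose(synthesis(analysis(x)), Phi.T @ Phi @ x))
```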
37-40 Ill-posed sensing model and sparse recovery. Representation of the solution: x = T_a* d. Data model: y = T_v K T_a* d. Compressed sampling: (s_{1,λ}, ..., s_{p,λ})^T = A (v_{1,λ}, ..., v_{m,λ})^T for all λ ∈ Λ. Compressed data model: y = T_s K T_a* d.
41-45 Ill-posed sensing model and sparse recovery. Compressed data model: y = T_s K T_a* d. Assume ⟨v_{l,λ}, K a_{l',λ'}⟩ = κ_{l,λ} δ_{l,l'} δ_{λ,λ'}; then for all λ ∈ Λ: y_λ = A D_λ d_λ with D_λ = diag(κ_{1,λ}, ..., κ_{m,λ}), i.e. y = T_s K T_a* d = A D d. Problem: AD does not satisfy the RIP.
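Under the diagonality assumption the compressed data decouple over λ; the following sketch assembles y_λ = A D_λ d_λ for an illustrative set of κ_{l,λ} and a random sensing matrix. All sizes and values are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(4)
m, p, n_lambda = 9, 4, 32                       # channels, compressed samples per lambda, translates

A = rng.standard_normal((p, m)) / np.sqrt(p)    # sensing matrix applied to the analysis frame
kappa = 0.5 + rng.random((m, n_lambda))         # illustrative kappa_{l,lambda} > 0

d = np.zeros((m, n_lambda))                     # coefficients, jointly sparse in the channel index l
d[1, 5], d[4, 3], d[4, 11] = 3.0, 1.0, 2.0

# for every lambda: y_lambda = A D_lambda d_lambda, with D_lambda = diag(kappa_{1,lambda}, ..., kappa_{m,lambda})
Y = np.empty((p, n_lambda))
for lam in range(n_lambda):
    Y[:, lam] = A @ np.diag(kappa[:, lam]) @ d[:, lam]
print(Y.shape)                                  # (p, n_lambda): p compressed samples per lambda
```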
46-47 Ill-posed sensing model and sparse recovery. For all λ ∈ Λ consider the optimization problem min_{d_λ ∈ B_R} ‖y_λ^δ − A D_λ d_λ‖²_{ℓ_2} + α‖d_λ‖²_{ℓ_2}, where B_R = {d_λ ∈ ℓ_2 : ‖d_λ‖_{ℓ_1} ≤ R}. The corresponding model operator L² := D_λ A* A D_λ + αI leads to a stabilized RIP: (κ²_min(1 − δ_k) + α)‖x‖² ≤ ‖Lx‖² ≤ (κ²_max(1 + δ_k) + α)‖x‖².
48 Ill-posed sensing model and sparse recovery. Theorem (Herrholz, T. 2010). Define d̄_λ as the B_R-best approximation to the true solution d_λ, assume R was chosen such that d_λ ∈ B_R, and assume 0 ≤ δ_{2k} < ((1 + √2)κ²_min − κ²_max + √2 α) / ((1 + √2)κ²_min + κ²_max). Then the minimizer d*_λ satisfies ‖d*_λ − d_λ‖_{ℓ_2} ≤ C_0 k^{−1/2} ‖d_λ^k − d_λ‖_{ℓ_1} + C_1 δ + C_2 ‖L(d̄_λ − d_λ)‖_{ℓ_2} + C_3 αR.
49-50 Ill-posed sensing model and sparse recovery. Treatment of all λ simultaneously by a joint sparsity formulation: min_{d ∈ C_R^{2,1}} ‖y^δ − A D d‖²_{(ℓ_2(Λ))^p} + α‖d‖²_{(ℓ_2(Λ))^m}, where C_R^{2,1} = {d ∈ (ℓ_2(Λ))^m : Ψ_{2,1}(d) ≤ R}.
51 Ill-posed sensing model and sparse recovery. Theorem (Herrholz, T. 2010). Let d̄ be the B_R-best approximation to the true solution d, with R as before and 0 ≤ δ_{2k} < ((1 + √2)κ²_min − κ²_max + √2 α) / ((1 + √2)κ²_min + κ²_max). Then the minimizer d* satisfies ‖d* − d‖_{(ℓ_2(Λ))^m} ≤ C_0 k^{−1/2} Ψ_{2,1}(d^k − d) + C_1 δ + C_2 ‖L(d̄ − d)‖_{(ℓ_2(Λ))^m} + C_3 αR.
52-56 Ill-posed sensing model and sparse recovery. Discussion on the balancing of δ_{2k} and α. Given K and the sensing matrix A: ((1 + δ_{2k})κ²_max − (1 + √2)(1 − δ_{2k})κ²_min) / √2 < α. If δ_{2k} < 1, a stabilization becomes necessary if ((1 + δ_{2k})/(1 − δ_{2k})) · κ²_max/κ²_min > 1 + √2. If δ_{2k} ≥ 1, the lower bound is always positive. Specializing the case δ_{2k} < 1 to κ²_max = κ²_min = 1: stabilization becomes necessary if δ_{2k} > √2 − 1.
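The balancing conditions are plain arithmetic in κ²_min, κ²_max, δ_{2k} and α; a small helper makes them easy to evaluate. The test values below are arbitrary, and the final line merely reproduces the √2 − 1 specialization above.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def alpha_lower_bound(delta_2k, kappa_min2, kappa_max2):
    """((1 + delta_2k) kappa_max^2 - (1 + sqrt2)(1 - delta_2k) kappa_min^2) / sqrt2 < alpha."""
    return ((1 + delta_2k) * kappa_max2 - (1 + SQRT2) * (1 - delta_2k) * kappa_min2) / SQRT2

def delta_2k_bound(alpha, kappa_min2, kappa_max2):
    """Admissible delta_2k < ((1 + sqrt2) kappa_min^2 - kappa_max^2 + sqrt2 alpha) / ((1 + sqrt2) kappa_min^2 + kappa_max^2)."""
    return ((1 + SQRT2) * kappa_min2 - kappa_max2 + SQRT2 * alpha) / ((1 + SQRT2) * kappa_min2 + kappa_max2)

print(alpha_lower_bound(0.2, kappa_min2=0.5, kappa_max2=2.0))    # positive value => stabilization needed
print(delta_2k_bound(0.0, 1.0, 1.0), SQRT2 - 1)                  # both approximately 0.4142
```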
57-61 Ill-posed sensing model and sparse recovery. Given K, stabilize first and suppose A can be chosen accordingly: (κ²_max − (1 + √2)κ²_min) / √2 < α. No stabilization necessary (α = 0) if 1 ≤ κ²_max/κ²_min < 1 + √2; then δ_{2k} < (1 + √2 − κ²_max/κ²_min) / (1 + √2 + κ²_max/κ²_min). Stabilization necessary (α > 0) if κ²_max/κ²_min ≥ 1 + √2; then δ_{2k} < (1 + √2 − κ²_max/κ²_min + √2 α/κ²_min) / (1 + √2 + κ²_max/κ²_min).
62 Numerical experiment (MATLAB)
63 Numerical experiment. Setup: a_{l,λ}: sinc wavelets, l ∈ {−3, −2, ..., 4, 5}, i.e. m = 9; K: convolution operator, giving diagonal matrices D_λ; x is 2-sparse (two active channels), with d_{1,21} = 3, d_{4,20} = 1, d_{4,25} = 2; data y_λ^δ = A D_λ d_λ + δ.
64 Numerical experiment. K corresponds to diagonal entries κ_{l,λ}: [figure]
65 Numerical experiment. Minimization of min_{d ∈ C_R^{2,1}} ‖y^δ − A D d‖²_{(ℓ_2(Λ))^p} + α‖d‖²_{(ℓ_2(Λ))^m} yields: [figure: reconstruction results]
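For reproducibility, here is a small Python/NumPy toy reconstruction in the spirit of this experiment (the original was done in MATLAB and its exact data are not available here): m = 9 channels, two active channels, diagonal operators D_λ with decaying entries, noisy compressed data, and a projected-gradient iteration over C^{2,1}_R using the channel-wise shrinkage sketched earlier. The κ values, noise level, radius, α and step size are all assumptions.

```python
import numpy as np

def project_C21(X, R, tol=1e-8):
    """Projection onto {X : sum_l ||X_l||_2 <= R} by channel-wise shrinkage, mu found by bisection."""
    norms = np.linalg.norm(X, axis=1)
    if norms.sum() <= R:
        return X.copy()
    lo, hi = 0.0, norms.max()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        lo, hi = (mu, hi) if np.maximum(norms - mu, 0).sum() > R else (lo, mu)
    scale = np.maximum(norms - 0.5 * (lo + hi), 0) / np.maximum(norms, 1e-15)
    return scale[:, None] * X

rng = np.random.default_rng(5)
m, p, n_lambda, alpha = 9, 4, 40, 1e-3

A = rng.standard_normal((p, m)) / np.sqrt(p)                             # sensing matrix
kappa = np.exp(-0.1 * np.arange(m))[:, None] * np.ones((m, n_lambda))    # decaying kappa_{l,lambda}
AD = np.stack([A @ np.diag(kappa[:, lam]) for lam in range(n_lambda)])   # (n_lambda, p, m)

d_true = np.zeros((m, n_lambda))                                         # two active channels (joint sparsity)
d_true[1, 21], d_true[4, 20], d_true[4, 25] = 3.0, 1.0, 2.0
Y = np.stack([AD[lam] @ d_true[:, lam] for lam in range(n_lambda)], axis=1)
Y += 0.01 * rng.standard_normal(Y.shape)                                 # noisy data y^delta

R = np.linalg.norm(d_true, axis=1).sum()                                 # oracle radius Psi_{2,1}(d_true)
L = max(np.linalg.norm(AD[lam], 2) ** 2 for lam in range(n_lambda)) + alpha
d = np.zeros((m, n_lambda))
for _ in range(2000):      # projected gradient on ||Y - AD d||^2 + alpha ||d||^2 over C^{2,1}_R
    resid = np.stack([AD[lam] @ d[:, lam] for lam in range(n_lambda)], axis=1) - Y
    grad = np.stack([AD[lam].T @ resid[:, lam] for lam in range(n_lambda)], axis=1) + alpha * d
    d = project_C21(d - grad / L, R)

print(np.flatnonzero(np.linalg.norm(d, axis=1) > 0.1))                   # active channels, ideally [1 4]
```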
66 Numerical experiment: algorithmic aspects.
67 Short summary. Algorithms for sparse recovery: projected steepest descent, step length control, vector-valued setting, joint sparsity. Suggestion for a setup for incomplete/compressed data: semi-infinite dimensional. In the compressed data model: balance between the sensing properties of A and the stabilization by α; the choice of α is restricted (typically α close to 0 to ensure CS recovery). Future work: apply the infinite-dimensional CS model (A. Hansen, 2010); combine the compressed data model with adaptive operator evaluation.
68 Short summary. Workshop on CS, Sparsity and Inverse Problems, September 6-7, 2010, at TU Braunschweig, Germany.