Sparse recovery and compressed sensing in inverse problems




Gerd Teschke (7. Juni 2010) 1/68 Sparse recovery and compressed sensing in inverse problems Gerd Teschke (joint work with Evelyn Herrholz) Institute for Computational Mathematics in Science and Technology Neubrandenburg University of Applied Sciences, Germany Konrad-Zuse-Institute Berlin (ZIB), Germany June 7, 2010

Gerd Teschke (7. Juni 2010) 2/68
1 Motivation
2 Sparse recovery for inverse problems
3 Compressively sensed data and recovery
4 Ill-posed sensing model and sparse recovery
5 Numerical experiment
6 Short summary

Gerd Teschke (7. Juni 2010) 3/68 Motivation Motivation

Gerd Teschke (7. Juni 2010) 4/68 Motivation

Gerd Teschke (7. Juni 2010) 5/68 Motivation References
E. Candès, The restricted isometry property and its implications for compressed sensing, Académie des sciences, 2008.
E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory, 51(12), 4203-4215, 2005.
Y. Eldar, Compressed sensing of analog signals in shift-invariant spaces, IEEE Trans. Signal Processing, 57(8), 2009.
I. Daubechies, M. Fornasier and I. Loris, Accelerated projected gradient method for linear inverse problems with sparsity constraints, J. Fourier Anal. Appl., 2008.
G. Teschke and C. Borries, Accelerated projected steepest descent method for nonlinear inverse problems with sparsity constraints, Inverse Problems, 26, 025007, 2010.
E. Herrholz and G. Teschke, A compressed sensing approach for inverse problems, preprint, 2010.

Gerd Teschke (7. Juni 2010) 6-10/68 Motivation
Solve $Ax = y$ or $F(x) = y$ ($A, F : X \to Y$; $X, Y$ Hilbert spaces)
Often noisy data $y^\delta$ with $\|y^\delta - y\| \le \delta$
The inverse problem is ill-posed
Often: incomplete data or no data sampled at an adequate rate
Some a-priori knowledge required: the solution has a sparse representation / is compressible

Gerd Teschke (7. Juni 2010) 11/68 Sparse recovery for inverse problems Sparse recovery for inverse problems

Gerd Teschke (7. Juni 2010) 12-15/68 Sparse recovery for inverse problems
Tikhonov approach for linear/nonlinear problems: $\min_x \|F(x) - y^\delta\|^2 + 2\alpha \|x\|_{\ell_p}^p$
Linear case [Daubechies, Defrise, DeMol 2004]: $x^{n+1} = S_\alpha(x^n + \gamma A^*(y - Ax^n))$
Nonlinear case [Ramlau, T. 2005, 2007]: $x^{n+1} = S_\alpha(x^n + \gamma F'(x^{n+1})^*(y - F(x^n)))$
Many alternatives and improvements (incomplete list...): steepest descent, domain decomposition, semi-smooth Newton methods, projection methods [Bredies, Daubechies, Fornasier, Lorenz, Loris, ...]; adaptive iteration [Ramlau, T., Zhariy and Dahlke, Fornasier, Raasch, ...]
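The linear-case iteration is just soft thresholding wrapped around a Landweber step, so it is easy to prototype. Below is a minimal Python/NumPy sketch (the talk's own experiments use MATLAB); the matrix A, the data and all parameter values are made-up stand-ins for illustration.

```python
import numpy as np

def soft_threshold(x, mu):
    """Componentwise soft-thresholding S_mu(x) = sign(x) * max(|x| - mu, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def thresholded_landweber(A, y, alpha, gamma=None, n_iter=500):
    """Iteration from the slide: x^{n+1} = S_alpha(x^n + gamma * A^T (y - A x^n))."""
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # keeps gamma * ||A||^2 <= 1
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + gamma * A.T @ (y - A @ x), alpha)
    return x

# Toy data: sparse x_true, a random (made-up) forward matrix A, noisy observations.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[[3, 77, 150]] = [3.0, -1.0, 2.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_rec = thresholded_landweber(A, y, alpha=0.02)
print("recovered support:", np.flatnonzero(np.abs(x_rec) > 1e-2))
```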

Gerd Teschke (7. Juni 2010) 16-17/68 Sparse recovery for inverse problems
Sparsity through projection + acceleration
Problem: $x^{n+1} = S_\alpha(x^n + \gamma A^*(y - Ax^n))$ overshoots (dynamics discussed by Daubechies, Fornasier, Loris 2008)
Alternative (Daubechies et al. 2008; Borries, T. 2009 for the nonlinear case):
$\min_{x \in B_R} \|Ax - y\|^2$ where $B_R := \{x \in \ell_2 : \|x\|_1 \le R\}$,
$x^{n+1} = P_R(x^n + \gamma_n A^*(y - Ax^n))$.
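The projection $P_R$ onto the $\ell_1$-ball $B_R$ can be computed exactly by a sort-based routine, which makes the projected iteration straightforward to try out. A small sketch under the same assumptions as above, with a fixed step size instead of the adaptive $\gamma_n$ used in the cited accelerated methods:

```python
import numpy as np

def project_l1_ball(x, R):
    """Euclidean projection of x onto B_R = {z : ||z||_1 <= R} (sort-based algorithm)."""
    if np.abs(x).sum() <= R:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]
    css = np.cumsum(u)
    rho = np.flatnonzero(u * np.arange(1, x.size + 1) > css - R)[-1]
    theta = (css[rho] - R) / (rho + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def projected_gradient(A, y, R, n_iter=300):
    """x^{n+1} = P_R(x^n + gamma_n A^T (y - A x^n)); fixed step here,
    whereas the cited works choose gamma_n adaptively (the 'acceleration')."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x + gamma * A.T @ (y - A @ x), R)
    return x
```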

Gerd Teschke (7. Juni 2010) 18-20/68 Sparse recovery for inverse problems
Vector-valued setup and joint sparsity formulation:
$\min_{x \in C^{p,q}_R} \|y^\delta - Ax\|^2$ where $C^{p,q}_R = \{x \in (\ell_2(\Lambda))^m : \Psi_{p,q}(x) := \sum_{l=1}^m \big(\sum_{\lambda \in \Lambda} |x_{l,\lambda}|^p\big)^{q/p} \le R\}$
Special case $p = 2$, $q = 1$: $P_R(x) = (S_\mu(\{x_{1,\lambda}\}_{\lambda \in \Lambda}), \ldots, S_\mu(\{x_{m,\lambda}\}_{\lambda \in \Lambda}))$
with $S_\mu(\{x_{l,\lambda}\}_{\lambda \in \Lambda}) = \max(\|\{x_{l,\lambda}\}_{\lambda \in \Lambda}\|_{\ell_2(\Lambda)} - \mu, 0)\,\dfrac{\{x_{l,\lambda}\}_{\lambda \in \Lambda}}{\|\{x_{l,\lambda}\}_{\lambda \in \Lambda}\|_{\ell_2(\Lambda)}}$
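For $p = 2$, $q = 1$, $S_\mu$ is a channel-wise (block) soft-thresholding of the $\ell_2$ norms, and $P_R$ can be realized by choosing $\mu$ so that $\Psi_{2,1}$ of the thresholded vector equals $R$. A sketch of one common way to do this (bisection on $\mu$; the function names are illustrative, not taken from the referenced papers):

```python
import numpy as np

def block_soft_threshold(X, mu):
    """S_mu applied channel-wise: row l of X holds {x_{l,lambda}}_lambda and is
    shrunk in its l2 norm, exactly as in the formula above."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    safe = np.where(norms > 0, norms, 1.0)
    return np.maximum(norms - mu, 0.0) / safe * X

def project_psi21_ball(X, R, n_bisect=60):
    """P_R for p = 2, q = 1: pick mu by bisection so that the thresholded matrix
    satisfies Psi_{2,1}(X) = sum_l ||x_l||_2 <= R."""
    row_norms = np.linalg.norm(X, axis=1)
    if row_norms.sum() <= R:
        return X.copy()
    lo, hi = 0.0, row_norms.max()
    for _ in range(n_bisect):
        mu = 0.5 * (lo + hi)
        if np.linalg.norm(block_soft_threshold(X, mu), axis=1).sum() > R:
            lo = mu
        else:
            hi = mu
    return block_soft_threshold(X, hi)
```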

Gerd Teschke (7. Juni 2010) 21/68 Compressively sensed data and recovery Compressively sensed data

Gerd Teschke (7. Juni 2010) 22-27/68 Compressively sensed data and recovery
Compressed Sensing Idea: [schematic of $y = \tilde{A}x$ and $x = Bd$]
Suppose there is a signal $x$, and suppose we measure $y$, just a few linear samples. Can we reconstruct $x$?
Yes! If $x$ is sparse in a given incoherent basis or frame, i.e. $x = Bd$
$\tilde{A}$ is a $p \times m$ matrix with $p \ll m$
If $d$ is $k$-sparse, then $p \ge 2k$ data points are required (stability: $p \ge c\,k \log(m/p)$)
$A = \tilde{A}B$ has to fulfill a restricted isometry property

Gerd Teschke (7. Juni 2010) 28/68 Compressively sensed data and recovery
Definition (restricted isometry property, RIP). For each integer $k = 1, 2, \ldots$, define the isometry constant $\delta_k$ of a sensing matrix $A$ as the smallest number such that $(1 - \delta_k)\|d\|_{\ell_2}^2 \le \|Ad\|_{\ell_2}^2 \le (1 + \delta_k)\|d\|_{\ell_2}^2$ holds for all $k$-sparse vectors $d$. A vector is said to be $k$-sparse if it has at most $k$ non-vanishing entries.
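Computing $\delta_k$ exactly is a combinatorial problem, but a Monte Carlo scan over random $k$-column submatrices gives a quick lower bound and a feel for the definition. A small illustrative sketch with a random Gaussian sensing matrix (all sizes are made up):

```python
import numpy as np

def rip_lower_bound(A, k, n_trials=2000, seed=0):
    """Monte Carlo *lower bound* on delta_k: sample random k-column submatrices A_S
    and record the worst deviation of the eigenvalues of A_S^T A_S from 1.
    (The exact delta_k would require enumerating every support.)"""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(n_trials):
        S = rng.choice(A.shape[1], size=k, replace=False)
        eig = np.linalg.eigvalsh(A[:, S].T @ A[:, S])
        worst = max(worst, abs(eig[-1] - 1.0), abs(1.0 - eig[0]))
    return worst

# Gaussian matrices with variance-1/p entries satisfy the RIP with high probability.
A = np.random.default_rng(1).standard_normal((64, 256)) / np.sqrt(64)
print("delta_3 is at least", round(rip_lower_bound(A, k=3), 3))
```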

Gerd Teschke (7. Juni 2010) 29-30/68 Compressively sensed data and recovery
Optimization problem: $\min_d \|d\|_{\ell_1}$ subject to $y = Ad$ or $\|y^\delta - Ad\| \le \delta$ (*)
(E. Candès 2008) Let $A$ satisfy the RIP with $\delta_{2k} < \sqrt{2} - 1$ and let $d^k$ denote the best $k$-term approximation. Then the solution $d^*$ to (*) obeys $\|d^* - d\|_{\ell_2} \le C_0 k^{-1/2}\|d^k - d\|_{\ell_1}$, or, in the case of noisy data $y^\delta$ with $\|y^\delta - y\|_{\ell_2} \le \delta$, $\|d^* - d\|_{\ell_2} \le C_0 k^{-1/2}\|d^k - d\|_{\ell_1} + C_1 \delta$.
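The problem (*) is a convex program and can be handed to any LP/SOCP solver. A sketch assuming the cvxpy package is available (an external tool, not part of the talk); the matrix and data are synthetic stand-ins:

```python
import numpy as np
import cvxpy as cp

def l1_minimization(A, y, delta=0.0):
    """min ||d||_1  subject to  y = A d  (or ||y - A d||_2 <= delta for noisy data)."""
    d = cp.Variable(A.shape[1])
    constraint = [cp.norm(y - A @ d, 2) <= delta] if delta > 0 else [A @ d == y]
    cp.Problem(cp.Minimize(cp.norm1(d)), constraint).solve()
    return d.value

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
d_true = np.zeros(100)
d_true[[5, 42, 77]] = [2.0, -1.5, 1.0]
d_hat = l1_minimization(A, A @ d_true)          # exact data, delta = 0
print("max reconstruction error:", np.max(np.abs(d_hat - d_true)))
```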

Gerd Teschke (7. Juni 2010) 31/68 Ill-posed sensing model and sparse recovery Ill-posed sensing model and sparse recovery

Gerd Teschke (7. Juni 2010) 32-33/68 Ill-posed sensing model and sparse recovery
Indirect measurement $Kx$: $y = \tilde{A} K B d$
Consequences: $K$ may be ill-conditioned; the RIP is usually lost.
Question: How to overcome this problem and how to adjust the finite-dimensional CS setting to inverse problems?

Gerd Teschke (7. Juni 2010) 34-35/68 Ill-posed sensing model and sparse recovery
(Setup by Y. Eldar, 2009) Let $X$ be a Hilbert space and $X_m \subset X$ the reconstruction space
$X_m = \{x \in X : x = \sum_{l=1}^m \sum_{\lambda \in \Lambda} d_{l,\lambda}\, a_{l,\lambda},\ d \in (\ell_2(\Lambda))^m\}$
Sparse structure: support set $I \subset \{1, \ldots, m\}$,
$X_k = \{x \in X : x = \sum_{l \in I,\,|I| = k} \sum_{\lambda \in \Lambda} d_{l,\lambda}\, a_{l,\lambda},\ d \in (\ell_2(\Lambda))^m\}$

Gerd Teschke (7. Juni 2010) 36/68 Ill-posed sensing model and sparse recovery
Analysis and synthesis operators:
$T_\phi : X \to \ell_2$ via $x \mapsto \hat{x} = \{\langle x, \phi_\lambda\rangle\}_{\lambda \in \Lambda}$,
$T^*_\phi : \ell_2 \to X$ via $\hat{x} \mapsto \sum_{\lambda \in \Lambda} \hat{x}_\lambda \phi_\lambda$.
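For an orthonormal system the two operators are adjoint to each other and compose to the identity. A toy sketch with a random orthonormal basis standing in for $\{\phi_\lambda\}$ (purely illustrative, finite-dimensional):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
# Columns of Phi play the role of the system {phi_lambda}; here an orthonormal basis.
Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))

def analysis(x, Phi):
    """T_phi : X -> l2,  x -> {<x, phi_lambda>}_lambda."""
    return Phi.T @ x

def synthesis(c, Phi):
    """T*_phi : l2 -> X,  c -> sum_lambda c_lambda phi_lambda."""
    return Phi @ c

x = rng.standard_normal(n)
# For an orthonormal basis, synthesis inverts analysis: T*_phi T_phi = identity.
print(np.allclose(synthesis(analysis(x, Phi), Phi), x))
```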

Gerd Teschke (7. Juni 2010) 37-40/68 Ill-posed sensing model and sparse recovery
Representation of the solution: $x = T^*_a d$
Data model: $y = T_v K T^*_a d$
Compressed sampling: $(s_{1,\lambda}, \ldots, s_{p,\lambda})^T = A\,(v_{1,\lambda}, \ldots, v_{m,\lambda})^T$ for all $\lambda \in \Lambda$
Compressed data model: $y = T_s K T^*_a d$

Gerd Teschke (7. Juni 2010) 41-45/68 Ill-posed sensing model and sparse recovery
Compressed data model: $y = T_s K T^*_a d = ADd$
Assume $\langle v_{l,\lambda}, K a_{l',\lambda'}\rangle = \kappa_{l,\lambda}\,\delta_{l,l'}\,\delta_{\lambda,\lambda'}$,
then for all $\lambda \in \Lambda$: $y_\lambda = A D_\lambda d_\lambda$ with $D_\lambda = \mathrm{diag}(\kappa_{1,\lambda}, \ldots, \kappa_{m,\lambda})$
Problem: $AD$ does not satisfy the RIP
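A tiny numerical illustration of why the RIP is lost: with decaying $\kappa_{l,\lambda}$ the columns of $AD_\lambda$ have very different norms, so $AD_\lambda$ cannot act as a near-isometry on sparse vectors. All numbers below are made-up stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
p, m = 6, 9                                   # p x m sensing matrix A, p < m
A = rng.standard_normal((p, m)) / np.sqrt(p)  # stand-in for a CS matrix with small delta_k

# One fixed lambda: D_lambda collects the values kappa_{l,lambda} on its diagonal.
kappa = np.linspace(1.0, 0.05, m)             # decaying kappas mimic an ill-conditioned K
D_lam = np.diag(kappa)

d_lam = np.zeros(m)
d_lam[[1, 4]] = [3.0, -2.0]                   # sparse coefficient block for this lambda
y_lam = A @ D_lam @ d_lam                     # y_lambda = A D_lambda d_lambda

# The columns of A D_lambda inherit the decay of the kappas, destroying the near-isometry.
print("column norms of A D_lambda:", np.linalg.norm(A @ D_lam, axis=0).round(3))
```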

Gerd Teschke (7. Juni 2010) 46-47/68 Ill-posed sensing model and sparse recovery
For all $\lambda \in \Lambda$ consider the optimization problem
$\min_{d_\lambda \in B_R} \|y^\delta_\lambda - A D_\lambda d_\lambda\|_{\ell_2}^2 + \alpha \|d_\lambda\|_{\ell_2}^2$, where $B_R = \{d_\lambda \in \ell_2 : \|d_\lambda\|_{\ell_1} \le R\}$
Corresponding model operator $L^2 := D^*_\lambda A^* A D_\lambda + \alpha I$, leading to a stabilized RIP:
$(\kappa^2_{\min}(1 - \delta_k) + \alpha)\|x\|^2 \le \|Lx\|^2 \le (\kappa^2_{\max}(1 + \delta_k) + \alpha)\|x\|^2$
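Since $\|Lx\|^2 = \|AD_\lambda x\|^2 + \alpha\|x\|^2$, the stabilized two-sided bound can be checked numerically once $\delta_k$ of $A$ is known. A small sanity-check sketch; the example is tiny enough that $\delta_k$ can be computed exactly by enumeration, and all parameter values are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
p, m, k, alpha = 6, 9, 2, 0.5
A = rng.standard_normal((p, m)) / np.sqrt(p)
kappa = np.linspace(1.0, 0.1, m)
D = np.diag(kappa)
kmin2, kmax2 = kappa.min() ** 2, kappa.max() ** 2

# Exact delta_k of A for this small example (enumerate every k-column submatrix).
delta_k = 0.0
for S in combinations(range(m), k):
    eig = np.linalg.eigvalsh(A[:, list(S)].T @ A[:, list(S)])
    delta_k = max(delta_k, abs(eig[-1] - 1.0), abs(1.0 - eig[0]))

# ||L x||^2 = ||A D x||^2 + alpha ||x||^2, so the stabilized bounds can be tested directly.
for _ in range(1000):
    S = rng.choice(m, size=k, replace=False)
    x = np.zeros(m)
    x[S] = rng.standard_normal(k)
    Lx2 = np.linalg.norm(A @ D @ x) ** 2 + alpha * np.linalg.norm(x) ** 2
    nx2 = np.linalg.norm(x) ** 2
    lower = (kmin2 * (1 - delta_k) + alpha) * nx2
    upper = (kmax2 * (1 + delta_k) + alpha) * nx2
    assert lower - 1e-9 <= Lx2 <= upper + 1e-9
print("stabilized RIP bounds verified on all sampled k-sparse vectors")
```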

Gerd Teschke (7. Juni 2010) 48/68 Ill-posed sensing model and sparse recovery
Theorem (Herrholz, T. 2010). Define $d_\lambda$ as the $B_R$-best approximation to the true solution. Assume $R$ was chosen such that the solution $d_\lambda \in B_R$ and that
$0 \le \delta_{2k} < \dfrac{(1 + \sqrt{2})\kappa^2_{\min} - \kappa^2_{\max} + 2\alpha}{(1 + \sqrt{2})\kappa^2_{\min} + \kappa^2_{\max}}$.
Then the minimizer $d^*_\lambda$ satisfies
$\|d^*_\lambda - d_\lambda\|_{\ell_2} \le C_0 k^{-1/2}\|d^k_\lambda - d_\lambda\|_{\ell_1} + C_1 \delta + C_2 \|L(d^*_\lambda - d_\lambda)\|_{\ell_2} + C_3\,\alpha R$.

Gerd Teschke (7. Juni 2010) 49-50/68 Ill-posed sensing model and sparse recovery
Treatment of all $\lambda$ simultaneously by a joint sparsity formulation:
$\min_{d \in C^{2,1}_R} \|y^\delta - ADd\|^2_{(\ell_2(\Lambda))^p} + \alpha \|d\|^2_{(\ell_2(\Lambda))^m}$
where $C^{2,1}_R = \{d \in (\ell_2(\Lambda))^m : \Psi_{2,1}(d) \le R\}$

Gerd Teschke (7. Juni 2010) 51/68 Ill-posed sensing model and sparse recovery
Theorem (Herrholz, T. 2010). Let $d$ be the $B_R$-best approximation to the true solution, $R$ as before, and
$0 \le \delta_{2k} < \dfrac{(1 + \sqrt{2})\kappa^2_{\min} - \kappa^2_{\max} + 2\alpha}{(1 + \sqrt{2})\kappa^2_{\min} + \kappa^2_{\max}}$.
Then the minimizer $d^*$ satisfies
$\|d^* - d\|_{(\ell_2(\Lambda))^m} \le C_0 k^{-1/2}\,\Psi_{2,1}(d^k - d) + C_1 \delta + C_2 \|L(d^* - d)\|_{(\ell_2(\Lambda))^m} + C_3\,\alpha R$.

Gerd Teschke (7. Juni 2010) 52-56/68 Ill-posed sensing model and sparse recovery
Discussion of the balancing of $\delta_{2k}$ and $\alpha$:
Given $K$ and sensing matrix $A$: $\dfrac{(1 + \delta_{2k})\kappa^2_{\max} - (1 + \sqrt{2})(1 - \delta_{2k})\kappa^2_{\min}}{2} < \alpha$
If $\delta_{2k} < 1$, a stabilization becomes necessary if $\dfrac{1 + \delta_{2k}}{1 - \delta_{2k}} \cdot \dfrac{\kappa^2_{\max}}{\kappa^2_{\min}} > 1 + \sqrt{2}$.
If $\delta_{2k} \ge 1$, the lower bound is always positive.
Specializing the case $\delta_{2k} < 1$ to $\kappa^2_{\max} = \kappa^2_{\min} = 1$: $\delta_{2k} > \sqrt{2} - 1$
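Rearranged, the condition gives an explicit lower bound on $\alpha$ once $\delta_{2k}$, $\kappa_{\min}$ and $\kappa_{\max}$ are known. A tiny helper reflecting that formula (illustrative only; the inputs are assumed values):

```python
import numpy as np

def required_alpha(delta_2k, kappa_min, kappa_max):
    """Lower bound on alpha from the balancing condition above:
    ((1 + delta_2k) * kappa_max^2 - (1 + sqrt(2)) * (1 - delta_2k) * kappa_min^2) / 2 < alpha.
    A non-positive value means no stabilization is needed for this delta_2k."""
    s = 1.0 + np.sqrt(2.0)
    return 0.5 * ((1.0 + delta_2k) * kappa_max ** 2 - s * (1.0 - delta_2k) * kappa_min ** 2)

# kappa_max = kappa_min = 1: the bound crosses zero exactly at delta_2k = sqrt(2) - 1,
# i.e. stabilization only becomes necessary beyond the classical CS threshold.
print(required_alpha(np.sqrt(2.0) - 1.0, 1.0, 1.0))   # ~ 0
print(required_alpha(0.3, 0.2, 1.0))                  # ill-conditioned K: alpha must exceed this
```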

Gerd Teschke (7. Juni 2010) 57-61/68 Ill-posed sensing model and sparse recovery
Given $K$, stabilize first and suppose $A$ can be chosen accordingly: $\dfrac{\kappa^2_{\max} - (1 + \sqrt{2})\kappa^2_{\min}}{2} < \alpha$.
No stabilization necessary ($\alpha = 0$): $1 \le \kappa^2_{\max}/\kappa^2_{\min} < 1 + \sqrt{2}$,
$\delta_{2k} < \dfrac{1 + \sqrt{2} - \kappa^2_{\max}/\kappa^2_{\min}}{1 + \sqrt{2} + \kappa^2_{\max}/\kappa^2_{\min}}$
Stabilization necessary ($\alpha > 0$): $1 + \sqrt{2} \le \kappa^2_{\max}/\kappa^2_{\min}$,
$\delta_{2k} < \dfrac{1 + \sqrt{2} - \kappa^2_{\max}/\kappa^2_{\min} + 2\alpha/\kappa^2_{\min}}{1 + \sqrt{2} + \kappa^2_{\max}/\kappa^2_{\min}}$
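Conversely, for a given $K$ (i.e. the ratio $\kappa^2_{\max}/\kappa^2_{\min}$) and a chosen $\alpha$, the admissible range of $\delta_{2k}$ follows from the two formulas above. A small sketch with assumed input values:

```python
import numpy as np

def admissible_delta_2k(kappa_min, kappa_max, alpha=0.0):
    """Upper bound on delta_2k from the two regimes above; alpha = 0 corresponds to the
    'no stabilization necessary' case, alpha > 0 enlarges the admissible range."""
    s = 1.0 + np.sqrt(2.0)
    r = kappa_max ** 2 / kappa_min ** 2
    return (s - r + 2.0 * alpha / kappa_min ** 2) / (s + r)

print(admissible_delta_2k(1.0, 1.0))              # sqrt(2) - 1: the classical bound
print(admissible_delta_2k(0.5, 1.0))              # ratio 4 > 1 + sqrt(2): negative, needs alpha
print(admissible_delta_2k(0.5, 1.0, alpha=0.6))   # stabilized: positive again
```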

Gerd Teschke (7. Juni 2010) 62/68 Numerical experiment Numerical experiment (MATLAB)

Gerd Teschke (7. Juni 2010) 63/68 Numerical experiment
$a_{l,\lambda}$: sinc wavelets, $l \in \{-3, -2, \ldots, 4, 5\}$, i.e. $m = 9$
$K$: convolution operator; $D_\lambda$
$x$ is 2-sparse (two active channels, $l = 1$ and $l = 4$): $d_{1,21} = 3$, $d_{4,20} = 1$, $d_{4,25} = 2$
$y^\delta_\lambda = A D_\lambda d_\lambda + \delta$
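A toy reconstruction in the spirit of this experiment, with random stand-ins for the sinc-wavelet system, the convolution operator and the noise level (none of the actual MATLAB code or data from the talk is reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
p, m = 5, 9                                   # m = 9 channels as on the slide, p compressed samples
A = rng.standard_normal((p, m)) / np.sqrt(p)  # stand-in sensing matrix
kappa = 0.3 + rng.random(m)                   # stand-in diagonal entries kappa_{l,lambda} of K
D = np.diag(kappa)

d_true = np.zeros(m)
d_true[[1, 4]] = [3.0, -2.0]                  # two active channels, mimicking l = 1 and l = 4
y = A @ D @ d_true + 0.01 * rng.standard_normal(p)

# Projected-gradient solution of min ||y - A D d||^2 + alpha ||d||^2 over the l1-ball B_R
# (same ingredients as the earlier sketches, inlined so this block runs on its own).
M, alpha, R = A @ D, 0.05, np.abs(d_true).sum()
gamma = 1.0 / (np.linalg.norm(M, 2) ** 2 + alpha)
d = np.zeros(m)
for _ in range(3000):
    z = d - gamma * (M.T @ (M @ d - y) + alpha * d)
    if np.abs(z).sum() > R:                   # projection onto the l1 ball of radius R
        u = np.sort(np.abs(z))[::-1]
        css = np.cumsum(u)
        rho = np.flatnonzero(u * np.arange(1, m + 1) > css - R)[-1]
        z = np.sign(z) * np.maximum(np.abs(z) - (css[rho] - R) / (rho + 1.0), 0.0)
    d = z
print("true d:     ", d_true.round(2))
print("recovered d:", d.round(2))
```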

Gerd Teschke (7. Juni 2010) 64/68 Numerical experiment $K$ corresponds to diagonal entries $\kappa_{l,\lambda}$: [figure]

Gerd Teschke (7. Juni 2010) 65/68 Numerical experiment
Minimization of $\|y^\delta - ADd\|^2_{(\ell_2(\Lambda))^p} + \alpha \|d\|^2_{(\ell_2(\Lambda))^m}$ over $d \in C^{2,1}_R$ yields: [figure]

Gerd Teschke (7. Juni 2010) 66/68 Numerical experiment Algorithmic aspects:

Gerd Teschke (7. Juni 2010) 67/68 Short summary
Summary:
Algorithms for sparse recovery: projected steepest descent, step-length control, vector-valued setting, joint sparsity
Suggestion for a setup for incomplete/compressed data: semi-infinite dimensional
In the compressed data model: balance between the sensing properties of $A$ and stabilization by $\alpha$; choice of $\alpha$ restricted (typically $\alpha > 0$ to ensure CS recovery)
Future work: apply the infinite CS model (A. Hansen, 2010); combine the compressed data model with adaptive operator evaluation

Gerd Teschke (7. Juni 2010) 68/68 Short summary Workshop on CS, Sparsity and Inverse Problems September 6-7, 2010 at TU Braunschweig, Germany http://www.dfg-spp1324.de/nuhagtools/event/make.php?event=cssip10