A Negative Result Concerning Explicit Matrices With The Restricted Isometry Property



Venkat Chandar

March 1, 2008

Abstract

In this note, we prove that matrices whose entries are all 0 or 1 cannot achieve good performance with respect to the Restricted Isometry Property (RIP). Most currently known deterministic constructions of matrices satisfying the RIP fall into this category, and hence these constructions suffer inherent limitations. In particular, we show that DeVore's construction of matrices satisfying the RIP is close to optimal once we add the constraint that all entries of the matrix are 0 or 1.

1 Introduction

Compressive sensing [CRT04, CRT06, Don06] has been put forward as a new paradigm for data acquisition. Compressive sensing is based on the observation that if a vector $x \in \mathbb{R}^n$ is sparse, in the sense that only $k \ll n$ entries of $x$ are nonzero, then we can recover $x$ exactly using far fewer than $n$ linear measurements. Formally, we represent the measurements by an $m \times n$ measurement matrix $A$. If $x$ is the input signal, we measure $Ax \in \mathbb{R}^m$, and our goal is to recover the original vector $x$ from $Ax$. Of course, if $A$ is invertible then this is simple; the goal is to make $m$ substantially smaller than $n$. The price we pay is that we will not be able to recover all $x \in \mathbb{R}^n$. Instead, we will only be able to recover $k$-sparse vectors, i.e., vectors with at most $k$ nonzero entries. One of the remarkable results of compressive sensing is that there exist matrices $A$ with only $m = O(k \log(n/k))$ rows such that for all $k$-sparse $x$, we can recover $x$ exactly from $Ax$. In [CT05], the restricted isometry property (RIP) was introduced as a useful tool for proving this result. In this paper, we will use the following definition of the RIP.

Definition 1. Let $A$ be an $m \times n$ matrix. Then, $A$ satisfies $(k, D)$-RIP if there exists $c > 0$ such that for all $k$-sparse $x \in \mathbb{R}^n$,
$$c \|Ax\|_2 \le \|x\|_2 \le cD \|Ax\|_2.$$
Here $\|\cdot\|_2$ denotes the $L_2$ norm, and $c$ is a scaling constant that we include for convenience in what follows.
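For small instances, Definition 1 can be checked directly: the smallest admissible $D$ equals the ratio of the largest to the smallest singular value over all $m \times k$ column-submatrices of $A$, with $c$ the reciprocal of the largest such singular value. The Python sketch below illustrates this; it assumes NumPy, the helper name best_rip_D is ours, and the enumeration over supports is only feasible for small $n$ and $k$.

```python
# Brute-force check of Definition 1 on a small matrix: the smallest D for which
# A satisfies (k, D)-RIP equals max_S sigma_max(A_S) / min_S sigma_min(A_S),
# where S ranges over all column supports of size k (illustration only).
from itertools import combinations

import numpy as np


def best_rip_D(A, k):
    """Return the smallest D such that A satisfies (k, D)-RIP for some c > 0."""
    sig_max, sig_min = 0.0, np.inf
    for S in combinations(range(A.shape[1]), k):
        s = np.linalg.svd(A[:, list(S)], compute_uv=False)  # singular values, descending
        sig_max, sig_min = max(sig_max, s[0]), min(sig_min, s[-1])
    if sig_min == 0.0:
        return np.inf  # some k columns are linearly dependent: no finite D works
    return sig_max / sig_min  # achieved with c = 1 / sig_max


# Example: a small random 0,1-matrix.
rng = np.random.default_rng(0)
A = (rng.random((6, 10)) < 0.5).astype(float)
print(best_rip_D(A, k=2))
```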

The RIP is useful because it can be shown that if a matrix $A$ satisfies the RIP, then linear programming can be used to recover a $k$-sparse signal $x$ from $Ax$. We note that there are other properties besides the RIP that can be used to prove that a matrix can be used for compressive sensing (cf. [KT07]). However, one nice aspect of the RIP is that if a matrix satisfies the RIP, it can be shown that recovery is possible even if there is some noise in the measurements.

Given that matrices satisfying the RIP are good for compressive sensing, it is natural to ask how to construct such matrices, and what type of performance we can achieve, e.g., how few rows can $A$ have. It is well known that there exist matrices satisfying the RIP with $O(k \log(n/k))$ rows, and this is within a constant factor of a lower bound which states that $\Omega(k \log(n/k))$ rows are necessary. However, the proof that such matrices exist uses the probabilistic method, and hence an explicit construction of such a matrix remains elusive. The currently known explicit constructions ([M06], [CM06], [DeV07]) use many more rows than random matrices. Specifically, these constructions require $\Omega(k^2)$ rows. One would hope that these constructions could be improved to get explicit matrices with close to $O(k \log(n/k))$ rows. For example, the results of [GIKS07, BI08] show that if we modify the definition of the RIP by using an $L_p$ norm with $p$ close to 1 instead of the $L_2$ norm, then explicit matrices can be constructed that satisfy the modified RIP. Specifically, there exist (non-explicit) matrices with only $O(k \log(n/k))$ rows which satisfy the modified RIP. In addition, there are explicit matrices with $O(k \, 2^{(\log \log n)^{O(1)}})$ rows that satisfy the modified RIP. In this note, we show that for $L_2$, a fundamentally different approach is needed. In particular, the previously mentioned explicit constructions all use matrices whose entries are 0 or 1. We show that 0,1-matrices require substantially more than $O(k \log(n/k))$ rows to satisfy the RIP.

2 0,1-Matrices Are Bad With Respect to the RIP

In this section we prove our main result, which shows that 0,1-matrices cannot achieve good performance with respect to the RIP. First, we make a couple of definitions that will be used in what follows. Given a 0,1-matrix $A$, let $f$ be the minimum fraction of 1's in any column of $A$, i.e., $fm$ is the minimum number of ones in a column of $A$. Also, let $r$ be the maximum number of 1's in any row of $A$. By permuting the rows and columns of $A$, we may assume that $A_{1,i} = 1$ for $1 \le i \le r$, i.e., that the first row of $A$ has a 1 in the first $r$ entries. In the following proofs, we assume that this reordering has already been done.

The following theorem shows that 0,1-matrices require substantially more rows than optimal RIP matrices.

Theorem 1. Let $A$ be an $m \times n$ 0,1-matrix that satisfies $(k, D)$-RIP. Then,
$$m \ge \min\left\{\frac{k^2}{D^4}, \frac{n}{D^2}\right\}.$$
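To get a sense of scale (the following numbers are purely illustrative), take $D = \sqrt{2}$ and $n = 10^6$. Any 0,1-matrix satisfying $(1000, \sqrt{2})$-RIP must then have $m \ge \min\{1000^2/4,\; 10^6/2\} = 250{,}000$ rows, whereas matrices achieving the optimal $O(k \log(n/k))$ bound would need only on the order of $10^4$ rows for these parameters (up to the implied constant).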

Before we prove Theorem 1, we need the following lemma.

Lemma 1. Let $A$ be an $m \times n$ 0,1-matrix that satisfies $(k, D)$-RIP. Then, $f \le D^2/k$.

Proof. To start, we square the inequalities defining $(k, D)$-RIP to obtain
$$c^2 \|Ax\|_2^2 \le \|x\|_2^2 \le c^2 D^2 \|Ax\|_2^2.$$
We can bound $c$ by considering the 1-sparse vector $x = e_i$, where $e_i$ is a standard basis vector with a 1 in a coordinate corresponding to a column of $A$ with $fm$ 1's. Then, the inequalities above give
$$c^2 fm \le 1 \le c^2 D^2 fm.$$
We will only need the second inequality, which we rewrite as
$$\frac{1}{c^2} \le D^2 fm. \qquad (1)$$
Now, consider the vector $x$ with 1's in the first $k$ positions and 0's elsewhere, i.e., $x = \sum_{i=1}^k e_i$. Let $w(i)$ denote the number of 1's in row $i$ and in the first $k$ columns of $A$. From the definition of $f$, the first $k$ columns of $A$ contain at least $fmk$ 1's, so $\sum_{i=1}^m w(i) \ge fmk$. Applying the Cauchy-Schwarz inequality, we obtain
$$\|Ax\|_2^2 = \sum_{i=1}^m w(i)^2 \ge \frac{\left(\sum_{i=1}^m w(i)\right)^2}{m} \ge (fk)^2 m.$$
But $\|Ax\|_2^2 \le \|x\|_2^2/c^2 \le D^2 fmk$, so putting the two inequalities together gives $(fk)^2 m \le D^2 fmk$. Cancelling out $fmk$ from both sides gives $f \le D^2/k$.
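For intuition (the following examples are illustrative), Lemma 1 says that a 0,1-matrix satisfying the RIP cannot have dense columns: if every column of $A$ contained at least $m/2$ ones, then $f \ge 1/2$ and the lemma would force $D^2 \ge k/2$, so the distortion must grow with the sparsity level $k$. At the other extreme, the identity matrix $A = I_n$ satisfies $(k, 1)$-RIP with $c = 1$ and has $f = 1/n$, so the bound $f \le D^2/k$ holds comfortably since $k \le n$.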

Now, we prove Theorem 1.

Proof. Lemma 1 gives us a bound on $f$ that will be useful when $r > k$. We can obtain a second bound on $f$ from the obvious inequality
$$fmn \le (\text{number of 1's in } A) \le rm.$$
Thus, $f \le r/n$. As we will see, this bound is useful when $r \le k$.

For notational convenience, let $s = \min\{r, k\}$. Consider the vector $x$ with 1's in the first $s$ positions and 0's elsewhere, i.e., $x = \sum_{i=1}^s e_i$. For this choice of $x$, we see that $\|Ax\|_2^2 \ge s^2$ because the first entry of $Ax$ is $s$. Note that because $s \le k$, the inequalities defining $(k, D)$-RIP apply to $x$. Thus, we can apply inequality (1) to get
$$s^2 \le \|Ax\|_2^2 \le \frac{\|x\|_2^2}{c^2} \le D^2 fms.$$
We now plug in our two bounds on $f$. If $r > k$, then $s = k$, and we use the bound from Lemma 1. This gives
$$k^2 \le D^2 \cdot \frac{D^2}{k} \cdot mk,$$
so $m \ge k^2/D^4$. Similarly, if $r \le k$, then $s = r$, so using our second bound on $f$ gives
$$r^2 \le D^2 \cdot \frac{r}{n} \cdot mr.$$
Thus, in this case $m \ge n/D^2$. Putting the two bounds together, $m \ge \min\{k^2/D^4, n/D^2\}$, completing the proof.
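As a sanity check, the inequality in Theorem 1 can be verified numerically on small examples. The Python sketch below makes the same assumptions as the earlier one (NumPy, brute-force search over all $k$-column submatrices to obtain the best $D$; the helper name check_theorem_1 is ours): it computes $f$, $r$, and the best $D$ for a small random 0,1-matrix and confirms $m \ge \min\{k^2/D^4, n/D^2\}$.

```python
# Sanity check of Theorem 1 on a small 0,1-matrix: compute f, r, the best D
# (brute force over all k-column submatrices), and compare m against
# min{k^2/D^4, n/D^2}. Illustration only; assumes NumPy.
from itertools import combinations

import numpy as np


def check_theorem_1(A, k):
    m, n = A.shape
    f = A.sum(axis=0).min() / m   # minimum fraction of 1's in any column
    r = int(A.sum(axis=1).max())  # maximum number of 1's in any row
    sig_max, sig_min = 0.0, np.inf
    for S in combinations(range(n), k):
        s = np.linalg.svd(A[:, list(S)], compute_uv=False)
        sig_max, sig_min = max(sig_max, s[0]), min(sig_min, s[-1])
    D = np.inf if sig_min == 0.0 else sig_max / sig_min
    bound = min(k**2 / D**4, n / D**2)
    print(f"f = {f:.3f}, r = {r}, D = {D:.3f}, m = {m}, lower bound = {bound:.3f}")
    assert m >= bound - 1e-9      # Theorem 1 (trivially satisfied if D is infinite)


rng = np.random.default_rng(1)
check_theorem_1((rng.random((8, 12)) < 0.4).astype(float), k=2)
```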

3 Conclusion

We have shown that 0,1-matrices cannot be optimal with respect to the RIP. Note that our lower bounds on $m$ are essentially tight for a large range of parameters. Specifically, assume that $D = O(1)$ and that $k = \Omega(n^\alpha)$ for some constant $0 < \alpha < 0.5$. Then, the matrices constructed in [DeV07] achieve $m = O((k/\alpha)^2)$, so these matrices are within a constant factor of our lower bound.

References

[BI08] R. Berinde and P. Indyk. Sparse recovery using sparse random matrices. MIT CSAIL Technical Report, 2008. Available at http://hdl.handle.net/1721.1/40089.

[CRT04] E. Candès, J. Romberg, and T. Tao. Exact signal reconstruction from highly incomplete frequency information. Submitted for publication, June 2004.

[CRT06] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489-509, February 2006.

[CT05] E. Candès and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203-4215, December 2005.

[CM06] G. Cormode and S. Muthukrishnan. Combinatorial algorithms for compressed sensing. In Paola Flocchini and Leszek Gasieniec, editors, Structural Information and Communication Complexity, 13th International Colloquium, SIROCCO 2006, Chester, UK, July 2-5, 2006, Proceedings, volume 4056 of Lecture Notes in Computer Science, pages 280-294. Springer, 2006.

[DeV07] R. DeVore. Deterministic constructions of compressed sensing matrices. Preprint, 2007.

[Don06] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.

[GIKS07] A. Gilbert, P. Indyk, H. Karloff, and M. Strauss. Combining geometry and combinatorics: a unified approach to sparse signal recovery. Manuscript, 2007.

[KT07] B. Kashin and V. Temlyakov. A remark on compressed sensing. Preprint, 2007.

[M06] S. Muthukrishnan. Some algorithmic problems and results in compressed sensing. In Allerton Conference, 2006.