Sparse Signal Recovery: Theory, Applications and Algorithms
Bhaskar Rao
Department of Electrical and Computer Engineering, University of California, San Diego
Collaborators: I. Gorodnitsky, S. Cotter, D. Wipf, J. Palmer, K. Kreutz-Delgado
Acknowledgement: Travel support provided by Office of Naval Research Global under Grant Number: N. The research was supported by several grants from the National Science Foundation, most recently CCF.
Outline
1. Talk Objective
2. Sparse Signal Recovery Problem
3. Applications
4. Computational Algorithms
5. Performance Evaluation
6. Conclusion
Importance of Problem
Organized, with Prof. Bresler, a Special Session at the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing: SPEC-DSP, Signal Processing with the Sparseness Constraint. Session papers:
- Signal Processing with the Sparseness Constraint: B. Rao (University of California, San Diego, USA)
- Application of Basis Pursuit in Spectrum Estimation: S. Chen (IBM, USA); D. Donoho (Stanford University, USA)
- Parsimony and Wavelet Method for Denoising: H. Krim (MIT, USA); J. Pesquet (University Paris Sud, France); I. Schick (GTE Internetworking and Harvard Univ., USA)
- Parsimonious Side Propagation: P. Bradley, O. Mangasarian (University of Wisconsin-Madison, USA)
- Fast Optimal and Suboptimal Algorithms for Sparse Solutions to Linear Inverse Problems: G. Harikumar (Tellabs Research, USA); C. Couvreur, Y. Bresler (University of Illinois, Urbana-Champaign, USA)
- Measures and Algorithms for Best Basis Selection: K. Kreutz-Delgado, B. Rao (University of California, San Diego, USA)
- Sparse Inverse Solution Methods for Signal and Image Processing Applications: B. Jeffs (Brigham Young University, USA)
- Image Denoising Using Multiple Compaction Domains: P. Ishwar, K. Ratakonda, P. Moulin, N. Ahuja (University of Illinois, Urbana-Champaign, USA)
Talk Goals
- Sparse signal recovery is an interesting area with many potential applications.
- Tools developed for solving the sparse signal recovery problem are useful for signal processing practitioners to know.
Outline (next section: Sparse Signal Recovery Problem)
Problem Description
Model: t = Φw + ε
- t is the N × 1 measurement vector.
- Φ is the N × M dictionary matrix, M >> N.
- w is the M × 1 desired vector, which is sparse with K non-zero entries.
- ε is the additive noise, modeled as additive white Gaussian.
Problem Statement
Noise-free case: given a target signal t and a dictionary Φ, find the weights w that solve
    min_w Σ_{i=1}^M I(w_i ≠ 0) subject to t = Φw,
where I(·) is the indicator function.
Noisy case: given a target signal t and a dictionary Φ, find the weights w that solve
    min_w Σ_{i=1}^M I(w_i ≠ 0) subject to ||t − Φw||_2^2 ≤ β.
Complexity
- Searching over all possible subsets would mean a search over a total of C(M, K) subsets; the problem is NP-hard. With M = 30, N = 20, and K = 10 there are C(30, 10) = 30,045,015 subsets (very complex).
- A branch-and-bound algorithm can be used to find the optimal solution. The space of subsets searched is pruned, but the search may still be very complex.
- The indicator function is not continuous and so is not amenable to standard optimization tools.
- Challenge: find low-complexity methods with acceptable performance. (A brute-force baseline is sketched below.)
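To make the combinatorial cost concrete, here is a minimal brute-force ℓ0 solver for the noise-free case, a sketch assuming the setup above (function and variable names are illustrative):

```python
import numpy as np
from itertools import combinations

def exhaustive_l0(Phi, t, K, tol=1e-9):
    """Try every size-K support: exact but visits all C(M, K) subsets."""
    N, M = Phi.shape
    for support in combinations(range(M), K):
        cols = list(support)
        w_s, *_ = np.linalg.lstsq(Phi[:, cols], t, rcond=None)
        if np.linalg.norm(t - Phi[:, cols] @ w_s) < tol:  # exact fit found
            w = np.zeros(M)
            w[cols] = w_s
            return w
    return None  # no exact K-sparse representation exists
```

With M = 30 and K = 10 this loop already visits all 30,045,015 supports, which is why the low-complexity methods discussed later matter.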
Outline (next section: Applications)
Applications
- Signal representation (Mallat, Coifman, Wickerhauser, Donoho, ...)
- EEG/MEG (Leahy, Gorodnitsky, Ioannides, ...)
- Functional approximation and neural networks (Chen, Natarajan, Cun, Hassibi, ...)
- Bandlimited extrapolation and spectral estimation (Papoulis, Lee, Cabrera, Parks, ...)
- Speech coding (Ozawa, Ono, Kroon, Atal, ...)
- Sparse channel equalization (Fevrier, Greenstein, Proakis, ...)
- Compressive sampling (Donoho, Candes, Tao, ...)
DFT Example
N is chosen to be 64 in this example.
Measurement: t[n] = cos(ω₀ n), n = 0, 1, 2, ..., N − 1, with ω₀ = 2π ...
Dictionary elements: φ_k = [1, e^{jω_k}, e^{j2ω_k}, ..., e^{j(N−1)ω_k}]^T, ω_k = 2πk/M.
Consider M = 64, 128, 256 and 512.
NOTE: The frequency components of t are included in the dictionary Φ for M = 128, 256, and 512.
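A possible construction of this example in numpy; the grid index k0 below is a hypothetical choice that puts ω₀ on the grid for M = 128, 256, 512 but not for M = 64, consistent with the note above:

```python
import numpy as np

N = 64
M = 128                    # also try 64, 256, 512
k0 = 13                    # illustrative index: w0 = 2*pi*k0/128 is off the M=64 grid
t = np.cos(2 * np.pi * k0 / 128 * np.arange(N))

# Overcomplete DFT dictionary: phi_k[n] = exp(j*2*pi*k*n/M), k = 0, ..., M-1
n = np.arange(N)[:, None]
k = np.arange(M)[None, :]
Phi = np.exp(1j * 2 * np.pi * n * k / M)   # N x M matrix whose columns are phi_k
```

For M = 64 (the standard DFT) the cosine's frequency falls between grid points, so its energy smears across many bins; for the larger M the representation can be 2-sparse (the two conjugate exponentials), but the dictionary is overcomplete, so a sparse solver is needed to find it.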
FFT Results (figure: FFT results of t with different M: 64, 128, 256, 512).
Magnetoencephalography (MEG)
Given measured magnetic fields outside of the head, the goal is to locate the responsible current sources inside the head. (Figure: sensor and source layout; legend: MEG measurement point, candidate source location.)
MEG Example
At any given time, typically only a few sources are active (SPARSE). (Figure legend: MEG measurement point, candidate source location, active source location.)
MEG Formulation
Forming the overcomplete dictionary Φ:
- The number of rows equals the number of sensors; the number of columns equals the number of possible source locations.
- Φ_ij = the magnetic field measured at sensor i produced by a unit current at location j.
- We can compute Φ using a boundary-element brain model and Maxwell's equations.
Many different combinations of current sources can produce the same observed magnetic field t. By finding the sparsest signal representation/basis, we find the smallest number of sources capable of producing the observed field. Such a representation is of neurophysiological significance.
Compressive Sampling
(Figure: transform coding versus compressed sensing. In transform coding the full signal b is acquired and then compressed through the transform A into the sparse coefficients w; in compressed sensing, compressed measurements t are acquired directly, and computation recovers t → w → b.)
Outline (next section: Computational Algorithms)
Potential Approaches
The problem is NP-hard, so alternate strategies are needed:
- Greedy search techniques: Matching Pursuit, Orthogonal Matching Pursuit.
- Minimizing diversity measures: the indicator function is not continuous, so define surrogate cost functions that are more tractable and whose minimization leads to sparse solutions, e.g. ℓ1 minimization.
- Bayesian methods: make appropriate statistical assumptions on the solution and apply estimation techniques to identify the desired sparse solution.
Greedy Search Methods: Matching Pursuit
- Select the column of Φ that is most aligned with the current residual (initially t).
- Remove its contribution from the residual.
- Stop when the residual becomes small enough or when some sparsity threshold is exceeded.
Some variations: Matching Pursuit [Mallat & Zhang], Orthogonal Matching Pursuit [Pati et al.], Order Recursive Matching Pursuit (ORMP). (A sketch of OMP follows.)
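A minimal numpy sketch of Orthogonal Matching Pursuit under the notation above (names are illustrative, not the authors' code):

```python
import numpy as np

def omp(Phi, t, K, tol=1e-6):
    """Orthogonal Matching Pursuit: greedily build a K-sparse solution."""
    N, M = Phi.shape
    residual = t.astype(float).copy()
    support, w = [], np.zeros(M)
    for _ in range(K):
        corr = np.abs(Phi.T @ residual)   # alignment with the current residual
        corr[support] = 0                 # do not reselect chosen columns
        support.append(int(np.argmax(corr)))
        # Least-squares refit on the support: the "orthogonal" step
        w_s, *_ = np.linalg.lstsq(Phi[:, support], t, rcond=None)
        residual = t - Phi[:, support] @ w_s
        if np.linalg.norm(residual) < tol:
            break
    w[support] = w_s
    return w
```

Plain Matching Pursuit skips the least-squares refit and only subtracts the selected column's projection, which is cheaper per step but may select the same column more than once.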
Inverse Techniques
For the system of equations Φw = t, the solution set is characterized by {w_s : w_s = Φ⁺t + v, v ∈ N(Φ)}, where N(Φ) denotes the null space of Φ and Φ⁺ = Φ^T(ΦΦ^T)^{−1}.
Minimum-norm solution: the minimum ℓ2-norm solution w_mn = Φ⁺t is a popular solution.
Noisy case: the regularized ℓ2-norm solution is often employed and is given by w_reg = Φ^T(ΦΦ^T + λI)^{−1}t.
Problem: the solution is not sparse.
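Both solutions are two lines of numpy; running them on a random instance shows the energy spread across all M coefficients rather than concentrating on a few (a sketch; lam is an illustrative regularization value):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, lam = 20, 40, 1e-2
Phi = rng.standard_normal((N, M))
t = rng.standard_normal(N)

w_mn  = Phi.T @ np.linalg.solve(Phi @ Phi.T, t)                   # minimum l2-norm
w_reg = Phi.T @ np.linalg.solve(Phi @ Phi.T + lam * np.eye(N), t) # regularized
print(np.sum(np.abs(w_mn) > 1e-6))   # typically M nonzeros: dense, not sparse
```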
Diversity Measures
- Functionals whose minimization leads to sparse solutions.
- Many examples are found in the fields of economics, social science and information theory.
- These functionals are concave, which leads to difficult optimization problems.
Examples of Diversity Measures
ℓ_p diversity measure (p ≤ 1):
    E^(p)(w) = Σ_{l=1}^M |w_l|^p, p ≤ 1
Gaussian entropy:
    H_G(w) = Σ_{l=1}^M ln |w_l|^2
Shannon entropy:
    H_S(w) = −Σ_{l=1}^M w̃_l ln w̃_l, where w̃_l = |w_l|^2 / ||w||^2
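These three measures transcribe directly to code; the small eps guards are an implementation assumption to avoid log(0):

```python
import numpy as np

def lp_diversity(w, p=0.5):
    """E^(p)(w) = sum_l |w_l|^p, p <= 1."""
    return np.sum(np.abs(w) ** p)

def gaussian_entropy(w, eps=1e-12):
    """H_G(w) = sum_l ln |w_l|^2."""
    return np.sum(np.log(np.abs(w) ** 2 + eps))

def shannon_entropy(w, eps=1e-12):
    """H_S(w) = -sum_l wt_l ln wt_l, with wt_l = |w_l|^2 / ||w||^2."""
    wt = np.abs(w) ** 2 / np.sum(np.abs(w) ** 2)
    return -np.sum(wt * np.log(wt + eps))
```

Evaluating any of these on a dense vector versus a sparse one of the same norm shows the sparse vector scoring lower, which is what makes them useful surrogate costs.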
Diversity Minimization
Noiseless case:
    min_w E^(p)(w) = Σ_{l=1}^M |w_l|^p subject to Φw = t
Noisy case:
    min_w ||t − Φw||^2 + λ Σ_{l=1}^M |w_l|^p
p = 1 is a very popular choice because of the convex nature of the optimization problem (Basis Pursuit and Lasso).
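For p = 1, the noisy-case objective can be minimized by iterative soft-thresholding (ISTA); this is one standard solver among many, not necessarily the one used in the talk, and all names are illustrative:

```python
import numpy as np

def ista(Phi, t, lam, n_iter=500):
    """Minimize ||t - Phi w||^2 + lam * ||w||_1 by soft-thresholded gradient steps."""
    L = np.linalg.norm(Phi, 2) ** 2            # squared spectral norm of Phi
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = w + Phi.T @ (t - Phi @ w) / L      # gradient step on the quadratic term
        w = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)  # shrink
    return w
```

The soft-threshold is exactly the proximal operator of the ℓ1 penalty, which is where the exact zeros (sparsity) come from.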
Bayesian Methods
Maximum a posteriori (MAP) approach:
- Assume a sparsity-inducing prior on the latent variable w.
- Develop an appropriate MAP estimation algorithm.
Empirical Bayes:
- Assume a parameterized prior on the latent variable w (hyperparameters).
- Marginalize over the latent variable w and estimate the hyperparameters.
- Determine the posterior distribution of w and obtain a point estimate as the mean, mode or median of this density.
Generalized Gaussian Distribution
Density function:
    f(x) = p / (2σ Γ(1/p)) · exp(−(|x|/σ)^p)
p < 2: super-Gaussian; p > 2: sub-Gaussian. (Figure: densities for p = 0.5, 1, 2, 4.)
Student t Distribution
Density function:
    f(x) = Γ((ν+1)/2) / (√(πν) Γ(ν/2)) · (1 + x²/ν)^{−(ν+1)/2}
(Figure: densities for ν = 1, 2, 5 and the Gaussian limit.)
MAP Using a Super-Gaussian Prior
Assuming a Gaussian likelihood model for f(t|w), we can find the MAP weight estimates:
    w_MAP = argmax_w log f(w|t)
          = argmax_w [log f(t|w) + log f(w)]
          = argmin_w ||Φw − t||^2 + λ Σ_{l=1}^M |w_l|^p
This is essentially a regularized LS framework. The interesting range for p is p ≤ 1.
MAP Estimate: FOCal Underdetermined System Solver (FOCUSS)
The approach involves solving a sequence of regularized weighted minimum-norm problems:
    q_{k+1} = argmin_q ||Φ_{k+1} q − t||^2 + λ||q||^2,
where Φ_{k+1} = Φ M_{k+1} and M_{k+1} = diag(|w_{k,l}|^{1−p/2}); then w_{k+1} = M_{k+1} q_{k+1}.
p = 0 is the ℓ0 minimization and p = 1 is ℓ1 minimization.
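A compact sketch of the iteration above; the initialization, λ, and the small eps floor are illustrative choices rather than the talk's exact settings:

```python
import numpy as np

def focuss(Phi, t, p=0.8, lam=1e-3, n_iter=30, eps=1e-8):
    """Reweighted minimum-norm iterations for the l_p diversity measure."""
    N = Phi.shape[0]
    # Minimum-norm initialization (a suitable choice, per the summary slide)
    w = Phi.T @ np.linalg.solve(Phi @ Phi.T + lam * np.eye(N), t)
    for _ in range(n_iter):
        m = np.abs(w) ** (1 - p / 2) + eps     # M_{k+1} = diag(|w_{k,l}|^{1-p/2})
        Phi_k = Phi * m                        # Phi M_{k+1}: scale the columns
        # Regularized weighted minimum-norm solution for q_{k+1}
        q = Phi_k.T @ np.linalg.solve(Phi_k @ Phi_k.T + lam * np.eye(N), t)
        w = m * q                              # w_{k+1} = M_{k+1} q_{k+1}
    return w
```

As the iterations proceed, small coefficients shrink their column weights toward zero, which is how the fixed points become sparse.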
FOCUSS Summary
- For p < 1, the solution is dependent on the initial condition. Prior knowledge can be incorporated; the minimum-norm solution is a suitable choice, and one can retry with several initial conditions.
- Computationally more complex than Matching Pursuit algorithms.
- The sparsity-versus-tolerance tradeoff is more involved.
- The factor p allows a trade-off between the speed of convergence and the sparsity obtained.
Convergence Errors vs. Structural Errors
(Figure: two sketches of the cost −log p(w|t) over w, one illustrating a convergence error and one a structural error.)
w* = solution we have converged to; w₀ = maximally sparse solution.
Shortcomings of These Methods
p = 1: Basis Pursuit/Lasso often suffer from structural errors. Therefore, regardless of initialization, we may never find the best solution.
p < 1: The FOCUSS class of algorithms suffers from numerous suboptimal local minima and therefore from convergence errors. In the low-noise limit, the number of local minima K satisfies
    C(M−1, N) + 1 ≤ K ≤ C(M, N).
At most local minima, the number of nonzero coefficients equals N, the number of rows in the dictionary.
Empirical Bayesian Method
Main steps:
- Parameterized prior f(w; γ).
- Marginalize: f(t; γ) = ∫ f(t, w; γ) dw = ∫ f(t|w) f(w; γ) dw.
- Estimate the hyperparameters γ̂.
- Determine the posterior density of the latent variable, f(w | t; γ̂).
- Obtain a point estimate of w.
Example: Sparse Bayesian Learning (SBL, by Tipping). (A sketch follows.)
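A minimal sketch of SBL with a Gaussian prior w ~ N(0, diag(γ)) and EM updates of γ; this is one standard variant with an assumed fixed noise variance, and the names and tolerances are illustrative:

```python
import numpy as np

def sbl(Phi, t, sigma2=1e-3, n_iter=100, prune=1e-8):
    """Sparse Bayesian Learning: EM updates of per-coefficient variances gamma."""
    N, M = Phi.shape
    gamma = np.ones(M)
    mu = np.zeros(M)
    for _ in range(n_iter):
        G = np.diag(gamma)
        Sigma_t = sigma2 * np.eye(N) + Phi @ G @ Phi.T   # marginal covariance of t
        K = G @ Phi.T @ np.linalg.inv(Sigma_t)
        mu = K @ t                                        # posterior mean of w
        # Diagonal of the posterior covariance of w
        sig_diag = gamma - np.sum(K * (Phi @ G).T, axis=1)
        gamma = mu**2 + sig_diag                          # EM update of gamma
        gamma[gamma < prune] = 0.0                        # pruned gammas force exact zeros
    return mu
```

Many γ_l are driven to zero during the iterations, so the corresponding posterior means vanish and the returned point estimate is sparse.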
Outline (next section: Performance Evaluation)
DFT Example: DFT Results Using FOCUSS (figure).
Empirical Tests
Random overcomplete bases Φ and sparse weight vectors w₀ were generated and used to create target signals t, i.e., t = Φw₀ + ε. SBL (Empirical Bayes) was compared with Basis Pursuit and FOCUSS (with various p values) in the task of recovering w₀.
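A harness in the spirit of these tests, reusing the sketches above (omp, ista, focuss, sbl); the dimensions follow Experiment I below, and the support-recovery check is an illustrative success criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 20, 40, 7                    # dimensions from Experiment I
Phi = rng.standard_normal((N, M))
Phi /= np.linalg.norm(Phi, axis=0)     # unit-norm dictionary columns
w0 = np.zeros(M)
support = rng.choice(M, size=K, replace=False)
w0[support] = rng.standard_normal(K)
t = Phi @ w0                           # noiseless target (add noise for Exp. II)

w_hat = sbl(Phi, t)                    # or omp(Phi, t, K), ista(...), focuss(...)
ok = set(np.flatnonzero(np.abs(w_hat) > 1e-6)) == set(support)
print("support recovered:", ok)
```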
Experiment I: Comparison with Noiseless Data
Randomized Φ (20 rows by 40 columns). Diversity of the true w₀ is 7. Results are from 1000 independent trials.
NOTE: An error occurs whenever an algorithm converges to a solution w not equal to w₀.
Experiment II: Comparison with Noisy Data
Randomized Φ (20 rows by 40 columns). 20 dB AWGN. Diversity of the true w₀ is 7. Results are from independent trials.
NOTE: We no longer distinguish convergence errors from structural errors.
MEG Example
- Data based on the CTF MEG system at UCSF with 275 scalp sensors.
- Forward model based on 40,000 points (vertices of a triangular grid) and 3 different scale factors.
- Dimensions of the lead field matrix (dictionary): 275 by 120,000; overcompleteness ratio approximately 436.
- Up to 40 unknown dipole components were randomly placed throughout the sample space.
- SBL was able to resolve roughly twice as many dipoles as the next best method (Ramirez 2005).
Outline (next section: Conclusion)
Summary
- Discussed the role of sparseness in linear inverse problems.
- Discussed applications of sparsity.
- Discussed methods for computing sparse solutions: Matching Pursuit algorithms; MAP methods (FOCUSS algorithm and ℓ1 minimization); Empirical Bayes (Sparse Bayesian Learning, SBL).
- The expectation is that there will be continued growth in the application domain as well as in algorithm development.