Signal Processing Tools for Big Data Analytics
G. B. Giannakis, K. Slavakis, and G. Mateos
Tutorial presented at EUSIPCO 2015, the 23rd European Signal Processing Conference (the flagship conference of EURASIP), Acropolis Convention Centre, Nice, France, August 31 - September 4, 2015.
Acknowledgments: NSF Grants EARS-1343248, EAGER-1343860, CyberSEES-1442686, EAGER-1500713; MURI Grant No. AFOSR FA9550-10-1-0567.
Big Data: A growing torrent
Source: McKinsey Global Institute, "Big Data: The next frontier for innovation, competition, and productivity," May 2011.

Big Data: Capturing its value
Source: McKinsey Global Institute, "Big Data: The next frontier for innovation, competition, and productivity," May 2011.
Big Data and NetSci analytics
• Online social media; robot and sensor networks; the Internet; biological networks; clean energy and grid analytics; square kilometer array telescope
• Desiderata: Process, analyze, and learn from large pools of network data
Challenges
• Big: Sheer volume of data
  - Decentralized and parallel processing
  - Security and privacy measures
• Modern massive datasets involve many attributes
  - Parsimonious models to ease interpretability
  - Enhanced predictive performance
• Fast: Real-time streaming data
  - Online processing
  - Quick-rough answer vs. slow-accurate answer?
• Messy: Outliers and misses
  - Robust imputation algorithms
• Good news: Ample research opportunities arise!
K. Slavakis, G. B. Giannakis, and G. Mateos, "Modeling and optimization for big data analytics," IEEE Signal Processing Magazine, vol. 31, no. 5, pp. 18-31, Sep. 2014.
Opportunities
Theoretical and statistical foundations of Big Data analytics
• Big tensor data models and factorizations
• High-dimensional statistical SP
• Network data visualization
• Pursuit of low-dimensional structure
• Analysis of multi-relational data
• Common principles across networks
• Resource tradeoffs
Algorithms and implementation platforms to learn from massive datasets
• Randomized algorithms
• Scalable online, decentralized optimization
• Convergence and performance guarantees
• Information processing over graphs; graph SP
• Novel architectures for large-scale data analytics
• Robustness to outliers and missing data
Roadmap
• Context and motivation
• Critical Big Data tasks
  - Encompassing and parsimonious data modeling
  - Dimensionality reduction, data visualization
  - Data cleansing, anomaly detection, and inference
• Optimization algorithms for Big Data
• Randomized learning
• Scalable computing platforms for Big Data
• Conclusions and future research directions
Encompassing model
• Observed data $\mathbf{Y}$ comprise a low-rank background $\mathbf{X}$; patterns, innovations, (co-)clusters, and outliers captured by a dictionary $\mathbf{D}$ times a sparse matrix $\mathbf{A}$; and noise $\mathbf{V}$:
  $\mathbf{Y} \approx \mathcal{P}_{\Omega}(\mathbf{X} + \mathbf{D}\mathbf{A} + \mathbf{V})$
• Subset of observations $\Omega$ and projection operator $\mathcal{P}_{\Omega}$ allow for misses
• Large-scale data and/or high-dimensional attributes
• Any of $\mathbf{X}$, $\mathbf{D}$, $\mathbf{A}$ can be unknown
Subsumed paradigms
• Structure-leveraging criterion (with or without misses): an LS fit regularized by the nuclear norm $\|\mathbf{X}\|_* = \sum_i \sigma_i(\mathbf{X})$ ($\sigma_i$: singular values of $\mathbf{X}$) and the $\ell_1$-norm $\|\mathbf{A}\|_1$
• Special cases:
  - $\mathbf{D}$ known: Compressive sampling (CS) [Candes-Tao 05]
  - Dictionary learning (DL) [Olshausen-Field 97]
  - Non-negative matrix factorization (NMF) [Lee-Seung 99]
  - Principal component pursuit (PCP) [Candes et al 11]
  - Principal component analysis (PCA) [Pearson 1901]
PCA formulations
• Training data $\{\mathbf{y}_t\}_{t=1}^{T} \subset \mathbb{R}^D$, assumed centered
• Minimum reconstruction error: find an orthonormal $\mathbf{U} \in \mathbb{R}^{D \times d}$ minimizing $\sum_{t=1}^{T} \|\mathbf{y}_t - \hat{\mathbf{y}}_t\|_2^2$
  - Compression: $\mathbf{x}_t = \mathbf{U}^\top \mathbf{y}_t$
  - Reconstruction: $\hat{\mathbf{y}}_t = \mathbf{U}\mathbf{x}_t$
• Component analysis model: $\mathbf{y}_t = \mathbf{U}\mathbf{x}_t + \mathbf{e}_t$
• Solution: $\mathbf{U}$ collects the $d$ principal eigenvectors of the sample covariance (equivalently, the top-$d$ left singular vectors of the data matrix)
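To make the solution concrete, here is a minimal numpy sketch of PCA via the SVD; the toy data and the choice d = 1 are illustrative, not from the tutorial.

```python
import numpy as np

def pca(Y, d):
    """PCA of data matrix Y (D x T): return d principal directions,
    scores, and the rank-d reconstruction minimizing the LS error."""
    mu = Y.mean(axis=1, keepdims=True)           # center the data
    U, s, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    Ud = U[:, :d]                                # top-d principal directions
    scores = Ud.T @ (Y - mu)                     # low-dimensional representation
    Y_hat = mu + Ud @ scores                     # minimum-error rank-d reconstruction
    return Ud, scores, Y_hat

# Toy check: 3-D data lying near a 1-D subspace
rng = np.random.default_rng(0)
Y = np.outer([1.0, 2.0, -1.0], rng.standard_normal(500)) \
    + 0.05 * rng.standard_normal((3, 500))
Ud, scores, Y_hat = pca(Y, d=1)
print(np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y))   # small residual
```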
Dual and kernel PCA
• SVD of the data matrix: $\mathbf{Y} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$; the Gram matrix of inner products $\mathbf{Y}^\top\mathbf{Y} = \mathbf{V}\boldsymbol{\Sigma}^2\mathbf{V}^\top$ yields the principal components without forming the $D \times D$ covariance (dual PCA)
• Q: What if the approximating low-dimensional space is not a hyperplane?
  - A1: Stretch it to become linear: kernel PCA, e.g., [Scholkopf-Smola 01]; maps the data to a high-dimensional feature space and leverages dual PCA there
  - A2: General (non)linear models, e.g., union of hyperplanes, or locally linear models (tangential hyperplanes)
B. Schölkopf and A. J. Smola, Learning with Kernels, Cambridge, MIT Press, 2001.
Identification of network communities
• Kernel PCA instrumental for partitioning of large graphs (spectral clustering)
  - Relies on the graph Laplacian to capture nodal correlations
• Examples: Facebook egonet (744 nodes, 30,023 edges); arXiv collaboration network, General Relativity (4,158 nodes, 13,422 edges)
• Random sketching and validation reduces complexity
P. A. Traganitis, K. Slavakis, and G. B. Giannakis, "Spectral clustering of large-scale communities via random sketching and validation," in Proc. CISS, Baltimore, Maryland, Mar. 18-20, 2015.
Multi-dimensional scaling
• Given dissimilarities or distances $\delta_{ij}$, identify low-dimensional vectors $\{\mathbf{x}_i\}$ that preserve them
• LS or Kruskal-Shepard MDS: $\min_{\{\mathbf{x}_i\}} \sum_{i<j} (\|\mathbf{x}_i - \mathbf{x}_j\|_2 - \delta_{ij})^2$
• Classical MDS: fit inner products instead of distances; solution via dual PCA of the double-centered squared-distance matrix
• Distance-preserving downscaling of the data dimension
I. Borg and P. J. Groenen, Modern Multidimensional Scaling: Theory and Applications, NY, Springer, 2005.
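A minimal sketch of classical MDS, assuming the input dissimilarities are exact Euclidean distances (in which case the embedding is recovered up to rotation and translation); the toy data are hypothetical.

```python
import numpy as np

def classical_mds(Delta, d):
    """Classical MDS: given a T x T matrix of pairwise Euclidean distances,
    recover d-dimensional coordinates (up to rotation/translation)."""
    T = Delta.shape[0]
    J = np.eye(T) - np.ones((T, T)) / T        # centering matrix
    B = -0.5 * J @ (Delta ** 2) @ J            # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:d]              # top-d eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy check: distances from random 2-D points are reproduced
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Xr = classical_mds(D, d=2)
Dr = np.linalg.norm(Xr[:, None, :] - Xr[None, :, :], axis=-1)
print(np.max(np.abs(D - Dr)))  # ~ 0 (exact for Euclidean distances)
```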
Local linear embedding
• For each $\mathbf{y}_t$ find a neighborhood, e.g., its k-nearest neighbors
• Weight matrix $\mathbf{W}$ captures local affine relations; sparse, with rows summing to one [Elhamifar-Vidal 11]
• Identify low-dimensional vectors preserving the local geometry [Saul-Roweis 03]
• Solution: the embedding coordinates are the minor eigenvectors of $(\mathbf{I} - \mathbf{W})^\top(\mathbf{I} - \mathbf{W})$, excluding the all-ones eigenvector
L. K. Saul and S. T. Roweis, "Think globally, fit locally: Unsupervised learning of low dimensional manifolds," J. Machine Learning Research, vol. 4, pp. 119-155, 2003.
Application to graph visualization
• Undirected graph $G = (V, E)$: nodes or vertices; edges are communicating pairs of nodes; the neighborhood of node $i$ collects all nodes that communicate with it
• Given centrality metrics (e.g., node degree), identify weights and geometry via centrality-constrained (CC) LLE, then visualize
• Only local correlations or dissimilarities need be known
B. Baingana and G. B. Giannakis, "Embedding graphs under centrality constraints for network visualization," 2014. [Online]. Available: arXiv:1401.4408
Visualizing the Gnutella network
• Gnutella: peer-to-peer file-sharing network
• CC-LLE with centrality captured by node degree; snapshots of 08/04/2012 and 08/24/2012, with per-snapshot processing times reported in seconds
B. Baingana and G. B. Giannakis, "Embedding graphs under centrality constraints for network visualization," 2014. [Online]. Available: arXiv:1401.4408
Dictionary learning
• Solve for dictionary $\mathbf{D}$ and sparse $\mathbf{A}$: $\min_{\mathbf{D}, \mathbf{A}} \tfrac{1}{2}\|\mathbf{Y} - \mathbf{D}\mathbf{A}\|_F^2 + \lambda\|\mathbf{A}\|_1$ s.t. bounded dictionary columns
  - $\mathbf{A}$-update: Lasso task (sparse coding)
  - $\mathbf{D}$-update: constrained LS task
• Alternating minimization; both subproblems are convex
• Special case of block coordinate descent methods (BCDMs) [Tseng 01]
• Under certain conditions, converges to a stationary point of the cost
B. A. Olshausen and D. J. Field, "Sparse coding with an overcomplete basis set: A strategy employed by V1?" Vis. Res., vol. 37, no. 23, pp. 3311-3325, 1997.
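The alternating minimization can be sketched in a few lines of numpy; ISTA is used here for the Lasso step, and the norm-constrained dictionary update is approximated by an LS solve followed by column normalization. Parameters, iteration counts, and the initialization are illustrative assumptions.

```python
import numpy as np

def soft(Z, tau):
    """Soft-thresholding: prox of tau*||.||_1."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def dict_learn(Y, Q, lam, iters=30, ista_steps=50):
    """Alternating minimization for min_{D,A} 0.5||Y - DA||_F^2 + lam*||A||_1,
    with (approximately) unit-norm dictionary columns."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Y.shape[0], Q))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((Q, Y.shape[1]))
    for _ in range(iters):
        L = np.linalg.norm(D, 2) ** 2             # Lipschitz const. of the gradient
        for _ in range(ista_steps):               # sparse coding (Lasso) via ISTA
            A = soft(A - (D.T @ (D @ A - Y)) / L, lam / L)
        D = Y @ A.T @ np.linalg.pinv(A @ A.T)     # LS dictionary update ...
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # ... then normalize columns
    return D, A
```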
Joint DL-LLE paradigm
• Criterion: DL fit + LLE fit + sparsity regularization
• The dictionary morphs data to a smooth basis; reduces noise and complexity
• DL-LLE offers a local-affine-geometry-preserving, data-driven nonlinear embedding; robust to misses; (de-)compression; inpainting (PSNR reported in dB)
K. Slavakis, G. B. Giannakis, and G. Leus, "Robust sparse embedding and reconstruction via dictionary learning," in Proc. CISS, Baltimore, USA, 2013.
Online dictionary learning
• Data arrive sequentially
• Inpainting of a 12-Mpixel damaged image:
  - S1. Learn the dictionary from clean patches
  - S2. Remove text by sparse coding over the learned dictionary
J. Mairal, F. Bach, J. Ponce, and G. Sapiro, "Online learning for matrix factorization and sparse coding," J. Machine Learning Research, vol. 11, pp. 19-60, Mar. 2010.
Union of subspaces and subspace clustering
• Parsimonious data model: data drawn from a union of low-dimensional subspaces
• Given $\{\mathbf{y}_t\}$, subspace clustering (SC) identifies: the number of clusters, the subspace bases, and the data-cluster associations
• SC as DL with block-sparse $\mathbf{A}$
• If there is a single subspace, then SC reduces to PCA
R. Vidal, "Subspace clustering," IEEE Signal Processing Magazine, vol. 28, no. 2, pp. 52-68, March 2011.
Modeling outliers
• Outlier variables $\mathbf{o}_t$ s.t. $\mathbf{o}_t \neq \mathbf{0}$ if $\mathbf{y}_t$ is an outlier, $\mathbf{o}_t = \mathbf{0}$ otherwise
  - Nominal data obey the postulated model; outliers something else
  - Linear regression counterparts: [Fuchs 99], [Wright-Ma 10], [Giannakis et al 11]
  - Both the nominal unknowns and $\{\mathbf{o}_t\}$ unknown: under-determined
• But $\{\mathbf{o}_t\}$ is typically sparse!
Robustifying PCA
• Natural (sparsity-leveraging) estimator (P1): augment the PCA fit with outlier variables and an $\ell_1$-type penalty
  - Tuning parameter $\lambda$ controls sparsity in the number of outliers
• Q: Does (P1) yield robust estimates? A: Yes; the Huber estimator is a special case
  - Formally justifies the outlier-aware model and its estimator
  - Ties sparse regression with robust statistics
G. Mateos and G. B. Giannakis, "Robust PCA as bilinear decomposition with outlier sparsity regularization," IEEE Transactions on Signal Processing, pp. 5176-5190, Oct. 2012.
Prior art
• Robust covariance matrix estimators [Campbell 80], [Huber 81]
  - M-type estimators in computer vision [Xu-Yuille 95], [De la Torre-Black 03]
• Rank minimization with the nuclear norm, e.g., [Recht-Fazel-Parrilo 10]
  - Matrix decomposition [Candes et al 10], [Chandrasekaran et al 11]
• Principal component pursuit (PCP): Observed = Low rank + Sparse, i.e., $\mathbf{Y} = \mathbf{X} + \mathbf{S}$
• Broadening the model: singing voice separation [Huang et al 12]; face recognition [Wright-Ma 10]
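A compact sketch of PCP via an augmented-Lagrangian splitting: singular-value thresholding for the low-rank part and soft thresholding for the sparse part. The default λ = 1/√max(m, n) and the μ heuristic follow common practice [Candes et al 11]; both are tunable assumptions, and the fixed-μ loop is a simplification of published solvers.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Soft thresholding: prox of tau*||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def pcp(Y, lam=None, mu=None, iters=200):
    """Principal Component Pursuit:
    min ||X||_* + lam*||S||_1  s.t.  Y = X + S."""
    m, n = Y.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(Y).sum()
    X = np.zeros_like(Y); S = np.zeros_like(Y); W = np.zeros_like(Y)
    for _ in range(iters):
        X = svt(Y - S + W / mu, 1.0 / mu)     # low-rank update
        S = soft(Y - X + W / mu, lam / mu)    # sparse update
        W = W + mu * (Y - X - S)              # dual ascent on the constraint
    return X, S
```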
Video surveillance
• Background modeling from video feeds [De la Torre-Black 01]
• Panels: data, PCA, robust PCA, outliers
Data: http://www.cs.cmu.edu/~ftorre/
Big Five personality factors
• Measure five broad dimensions of personality traits [Costa-McCrae 92]
• Eugene-Springfield BFI sample (WEIRD)
• Big Five Inventory (BFI)
  - Short questionnaire (44 items)
  - Rate 1-5, e.g., "I see myself as someone who is talkative / is full of energy"
• Outlier ranking
Handbook of Personality: Theory and Research, O. P. John, R. W. Robins, and L. A. Pervin, Eds. New York, NY: Guilford Press, 2008. Data courtesy of Prof. L. Goldberg, provided by Prof. N. Waller.
Robust unveiling of communities
• Robust kernel PCA for identification of cohesive subgroups
• Network: NCAA football teams (vertices), Fall '00 games (edges)
• Partitioned graph with outliers; row/column-permuted adjacency matrix; ARI = 0.8967
  - Identified exactly: Big 10, Big 12, MWC, SEC, ...; outliers: independent teams
Data: http://www-personal.umich.edu/~mejn/netdata/
Load curve data cleansing and imputation
• Load curve: electric power consumption recorded periodically
  - Reliable data: key to realize the smart grid vision [Hauser 09]
  - Missing data: faulty meters, communication errors, few PMUs
  - Outliers: unscheduled maintenance, strikes, sport events [Chen et al 10]
• Model: low-rank nominal load profiles plus sparse outliers across buses and time
• Approach: load cleansing and imputation via distributed R-PCA
G. Mateos and G. B. Giannakis, "Load curve data cleansing and imputation via sparsity and low rank," IEEE Transactions on Smart Grid, pp. 2347-2355, Dec. 2013.
NorthWrite data
• Power consumption of schools, a government building, and a grocery store ('05-'10); cleansing and imputation panels
  - Outliers: building operational transition (shoulder) periods
  - Prediction error: 6% for 30% missing data (8% for 50%)
Data: courtesy of NorthWrite Energy Group.
Anomalies in social networks
• Approach: given graph data, decompose the egonet feature matrix using PCP
• [Figure: singular values σ_i(Y) of the arXiv collaboration network (General Relativity) decay rapidly, revealing the low-rank component; example graphs labeled 3-regular and 6-regular.]
• Payoff: unveil anomalous nodes and features
• Outlook: change detection; macro analysis to identify outlying graphs
Modeling Internet traffic anomalies
• Anomalies: changes in origin-destination (OD) flows [Lakhina et al 04]
  - Failures, congestion, DoS attacks, intrusions, flooding
• Graph G(N, L) with N nodes, L links, and F flows (F >> L); OD flow $z_{f,t}$
• Packet counts per link $l$ and time slot $t$: $y_{l,t} = \sum_f r_{l,f}\,(z_{f,t} + a_{f,t}) + v_{l,t}$, with routing coefficients $r_{l,f} \in \{0,1\}$ and anomalies $a_{f,t}$
• Matrix model across $T$ time slots: $\mathbf{Y} = \mathbf{R}(\mathbf{Z} + \mathbf{A}) + \mathbf{V}$
M. Mardani, G. Mateos, and G. B. Giannakis, "Recovery of low-rank plus compressed sparse matrices with application to unveiling traffic anomalies," IEEE Transactions on Information Theory, pp. 5186-5205, Aug. 2013.
Low-rank plus sparse matrices
• $\mathbf{Z}$ (and $\mathbf{X} := \mathbf{R}\mathbf{Z}$) low rank, e.g., [Zhang et al 05]; $\mathbf{A}$ is sparse across time and flows
• Estimator (P1): $\min_{\mathbf{X}, \mathbf{A}} \tfrac{1}{2}\|\mathbf{Y} - \mathbf{X} - \mathbf{R}\mathbf{A}\|_F^2 + \lambda_* \|\mathbf{X}\|_* + \lambda_1 \|\mathbf{A}\|_1$
Data: http://math.bu.edu/people/kolaczyk/datasets.html
Internet2 data
• Real network data, Dec. 8-28, 2003
• [Figure: ROC curves of detection vs. false-alarm probability; the proposed method outperforms [Lakhina04] and [Zhang05] at ranks 1-3; operating point P_fa = 0.03, P_d = 0.92. True and estimated anomaly volumes across flows and time.]
  - Improved performance by leveraging sparsity and low rank
  - Succinct depiction of the network health state across flows and time
Data: http://www.cs.bu.edu/~crovella/links.html
Online estimator
• Construct an estimated map of anomalies in real time
  - Streaming data model: one vector of link counts arrives per time slot
• Approach: regularized exponentially-weighted LS formulation
• [Figure: tracking cleansed link traffic and real-time unveiling of anomalies on Internet2 links, e.g., ATLA--HSTN, CHIN--ATLA, DNVR--KSCY, WASH--STTL; true and estimated anomaly amplitudes coincide.]
M. Mardani, G. Mateos, and G. B. Giannakis, "Dynamic anomalography: Tracking network anomalies via sparsity and low rank," IEEE Journal of Selected Topics in Signal Processing, pp. 50-66, Feb. 2013.
Low-rank tensor completion
• Data cube, e.g., sub-sampled MRI frames $\{\mathbf{Y}_t\}$
• PARAFAC decomposition per slab $t$ [Harshman 70]: $\mathbf{Y}_t \approx \sum_{r=1}^{R} \gamma_{t,r}\, \mathbf{a}_r \mathbf{b}_r^\top = \mathbf{A}\,\mathrm{diag}(\boldsymbol{\gamma}_t)\,\mathbf{B}^\top$
• The tensor subspace comprises $R$ rank-one matrices
• Goal: given streaming $\mathbf{Y}_t$, learn the subspace matrices $(\mathbf{A}, \mathbf{B})$ recursively, and impute possible misses of $\mathbf{Y}_t$
Online tensor subspace learning
• Image-domain low tensor rank
• Tikhonov regularization of the factors promotes low rank
• Proposition [Bazerque-GG 13]: the Frobenius-norm regularization of the factors matches a low-rank-promoting penalty on the tensor
• Stochastic alternating minimization; parallelizable across bases
• Real-time reconstruction (one FFT per iteration)
M. Mardani, G. Mateos, and G. B. Giannakis, "Subspace learning and imputation for streaming big data matrices and tensors," IEEE Trans. on Signal Processing, 2015.
Dynamic cardiac MRI test
• In vivo dataset: 256 k-space frames of size 200 x 256
• Panels: ground-truth frame; sampling trajectory; reconstructions with R = 100 (90% misses) and R = 150 (75% misses)
• Potential for accelerating MRI at high spatio-temporal resolution
• Low-rank plus sparse can also capture motion effects
M. Mardani and G. B. Giannakis, "Accelerating dynamic MRI via tensor subspace learning," Proc. of ISMRM 23rd Annual Meeting and Exhibition, Toronto, Canada, May 30 - June 5, 2015.
Roadmap
• Context and motivation
• Critical Big Data tasks
• Optimization algorithms for Big Data
  - Decentralized (in-network) operation
  - Parallel processing
  - Streaming analytics
• Randomized learning
• Scalable computing platforms for Big Data
• Conclusions and future research directions
Decentralized processing paradigms
• Goal: learning over networks. Why? Decentralized data, privacy
• Architectures: fusion center (FC), incremental, in-network
• Limitations of FC-based architectures
  - Lack of robustness (isolated point of failure, non-ideal links)
  - High Tx power and routing overhead (as the geographical area grows)
  - Less suitable for real-time applications
• Limitations of incremental processing
  - Non-robust to node failures
  - (Re-)routing? Hamiltonian routes are NP-hard to establish
In-network decentralized processing
• Network anomaly detection: spatially-distributed link count data across agents 1, ..., N
• In-network processing model: local processing and single-hop communications
• Given local link counts per agent, unveil anomalies in a decentralized fashion
  - Challenge: the nuclear norm is not separable across rows (links/agents)
Separable rank regularization
• Neat identity [Srebro 05]: $\|\mathbf{X}\|_* = \min_{\{\mathbf{P}, \mathbf{Q}:\, \mathbf{X} = \mathbf{P}\mathbf{Q}^\top\}} \tfrac{1}{2}(\|\mathbf{P}\|_F^2 + \|\mathbf{Q}\|_F^2)$
• Substituting $\mathbf{X} = \mathbf{P}\mathbf{Q}^\top$ yields a nonconvex but separable formulation (P2), equivalent to the convex (P1)
• Proposition: if $\{\bar{\mathbf{P}}, \bar{\mathbf{Q}}\}$ is a stationary point of (P2) whose residual satisfies a bounded-spectral-norm condition, then $\bar{\mathbf{X}} = \bar{\mathbf{P}}\bar{\mathbf{Q}}^\top$ is a global optimum of (P1)
  - Key for parallel [Recht-Re 12], decentralized, and online rank minimization [Mardani et al 12]
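The identity is easy to sanity-check numerically: the balanced factorization built from the SVD, P = UΣ^{1/2} and Q = VΣ^{1/2}, attains the minimum. A short demonstration on a random low-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 6))  # rank <= 5

U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = U * np.sqrt(s)          # U @ diag(sqrt(s)): columns scaled
Q = Vt.T * np.sqrt(s)       # V @ diag(sqrt(s))

nuc = s.sum()                                        # ||X||_*
split = 0.5 * ((P**2).sum() + (Q**2).sum())          # (||P||_F^2 + ||Q||_F^2)/2
print(np.allclose(P @ Q.T, X), np.isclose(nuc, split))  # True True
```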
Decentralized algorithm
• Alternating-direction method of multipliers (ADMM) solver for (P2)
  - Method: [Glowinski-Marrocco 75], [Gabay-Mercier 76]
  - Learning over networks: [Schizas-Ribeiro-Giannakis 07]
• Consensus-based optimization attains centralized performance
• Potential for scalable computing
M. Mardani, G. Mateos, and G. B. Giannakis, "Decentralized sparsity-regularized rank minimization: Algorithms and applications," IEEE Transactions on Signal Processing, pp. 5374-5388, Nov. 2013.
Alternating direction method of multipliers
• Canonical problem: $\min_{\mathbf{x}, \mathbf{z}} f(\mathbf{x}) + g(\mathbf{z})$ s.t. $\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z} = \mathbf{c}$
  - Two sets of variables, separable cost, affine constraints
• Augmented Lagrangian: $L_\rho(\mathbf{x}, \mathbf{z}, \mathbf{y}) = f(\mathbf{x}) + g(\mathbf{z}) + \mathbf{y}^\top(\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z} - \mathbf{c}) + \tfrac{\rho}{2}\|\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z} - \mathbf{c}\|_2^2$
• ADMM: $\mathbf{x}^{k+1} = \arg\min_{\mathbf{x}} L_\rho(\mathbf{x}, \mathbf{z}^k, \mathbf{y}^k)$; $\mathbf{z}^{k+1} = \arg\min_{\mathbf{z}} L_\rho(\mathbf{x}^{k+1}, \mathbf{z}, \mathbf{y}^k)$; $\mathbf{y}^{k+1} = \mathbf{y}^k + \rho(\mathbf{A}\mathbf{x}^{k+1} + \mathbf{B}\mathbf{z}^{k+1} - \mathbf{c})$
  - One Gauss-Seidel pass over the primal variables + dual ascent
D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, 1997.
Scaled form and variants
• Complete the squares in $L_\rho$ and define the scaled dual variable $\mathbf{u} = \mathbf{y}/\rho$
• ADMM (scaled): $\mathbf{x}^{k+1} = \arg\min_{\mathbf{x}} f(\mathbf{x}) + \tfrac{\rho}{2}\|\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z}^k - \mathbf{c} + \mathbf{u}^k\|_2^2$; likewise for $\mathbf{z}^{k+1}$; $\mathbf{u}^{k+1} = \mathbf{u}^k + \mathbf{A}\mathbf{x}^{k+1} + \mathbf{B}\mathbf{z}^{k+1} - \mathbf{c}$
  - Proximal splitting: for $\mathbf{A} = \mathbf{I}$, $\mathbf{B} = -\mathbf{I}$, $\mathbf{c} = \mathbf{0}$, the updates become proximal operators of $f$ and $g$; Ex: Lasso
• Variants
  - More than two sets of variables
  - Multiple Gauss-Seidel passes, or inexact primal updates
  - Strictly convex $f$: simplified updates
S. Boyd et al., "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, 2011.
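A minimal scaled-form ADMM sketch for the Lasso special case (splitting x - z = 0, so A = I, B = -I, c = 0); the penalty ρ and the toy problem are arbitrary choices.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Scaled-form ADMM for min 0.5||Ax - b||^2 + lam*||z||_1 s.t. x - z = 0."""
    n = A.shape[1]
    Atb = A.T @ b
    Lc = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # x-update: ridge-type solve (quadratic plus proximal term)
        x = np.linalg.solve(Lc.T, np.linalg.solve(Lc, Atb + rho * (z - u)))
        # z-update: soft thresholding = prox of the l1 norm
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + x - z                                    # scaled dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = A[:, :5] @ np.ones(5) + 0.1 * rng.standard_normal(200)
print(np.nonzero(admm_lasso(A, b, lam=5.0))[0])   # mostly the first 5 indices
```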
Convergence
• Theorem: if $f$ and $g$ have closed and convex epigraphs, and $L_0$ has a saddle point, then as $k \to \infty$:
  - Feasibility: $\mathbf{A}\mathbf{x}^k + \mathbf{B}\mathbf{z}^k - \mathbf{c} \to \mathbf{0}$
  - Objective convergence
  - Dual variable convergence
• Under additional assumptions
  - Primal variable convergence
  - Linear convergence
• No results for nonconvex objectives
  - Good empirical performance for, e.g., bi-convex problems
M. Hong and Z. Q. Luo, "On the linear convergence of the alternating direction method of multipliers," Mathematical Programming Series A, 2014.
Decentralized consensus optimization
• Generic learning problem: $\min_{\mathbf{x}} \sum_{n=1}^{N} f_n(\mathbf{x})$ with local costs $f_n$ per agent over a graph
• Neat trick: local copies $\mathbf{x}_n$ of the primal variables + consensus constraints across neighboring agents
  - Equivalent problems for a connected graph
  - Amenable to decentralized implementation with ADMM
I. D. Schizas, A. Ribeiro, and G. B. Giannakis, "Consensus in ad hoc WSNs with noisy links -- Part 1: Distributed estimation of deterministic signals," IEEE Transactions on Signal Processing, pp. 350-364, Jan. 2008.
ADMM for in-network optimization
• ADMM (in-network optimization at node $n$)
  - Auxiliary variables eliminated!
  - Communication of primary variables within neighborhoods only
• Attractive features
  - Fully decentralized, devoid of coordination
  - Robust to non-ideal links, additive noise, and intermittent edges [Zhu et al 10]
  - Provably convergent, attains centralized performance
G. Mateos, J. A. Bazerque, and G. B. Giannakis, "Distributed sparse linear regression," IEEE Transactions on Signal Processing, pp. 5262-5276, Oct. 2010.
Example 1: Decentralized SVM
• ADMM-based D-SVM attains centralized performance; outperforms local SVM
• Nonlinear discriminant functions effected via kernels
P. Forero, A. Cano, and G. B. Giannakis, "Consensus-based distributed support vector machines," Journal of Machine Learning Research, pp. 1663-1707, May 2010.
Example 2: RF cartography
• Idea: collaborate to form a spatial map of the spectrum
• Goal: find the map giving the spectrum at any position
• Approach: basis expansion of the map; decentralized, nonparametric basis pursuit
• Identify idle bands across space and frequency
J. A. Bazerque, G. Mateos, and G. B. Giannakis, "Group-Lasso on splines for spectrum cartography," IEEE Transactions on Signal Processing, pp. 4648-4663, Oct. 2011.
Parallel algorithms for BD optimization
• Computer clusters offer ample opportunities for parallel processing (PP)
• Recent software platforms promote PP
• Task: minimize a smooth loss plus a non-smooth regularizer
• Main idea: divide into blocks and conquer, e.g., [Kim-GG 11]
  - Parallelization: update each block while keeping all other blocks fixed
S.-J. Kim and G. B. Giannakis, "Optimal resource allocation for MIMO ad hoc cognitive radio networks," IEEE Transactions on Information Theory, vol. 57, no. 5, pp. 3117-3131, May 2011.
A challenging parallel optimization paradigm
• (As1) Cost = smooth, non-convex loss plus a non-smooth, convex, separable regularizer
• Having the iterate $\mathbf{x}^k$ available, finding the next iterate is non-convex in general and computationally cumbersome!
• Approximate the loss locally by a surrogate s.t. (As2) it is convex in each block
• A quadratic proximal term renders each block task strongly convex!
Flexible parallel algorithm
FLEXA
• S1. In parallel, find for all blocks a solution of the convex surrogate subproblem to prescribed accuracy
• S2. Relative-error update of the iterate
• Example: Lasso with a large regression matrix and a sparse vector with few non-zero entries
F. Facchinei, S. Sagratella, and G. Scutari, "Flexible parallel algorithms for big data optimization," Proc. ICASSP, Florence, Italy, 2014.
Streaming BD analytics
• Data arrive sequentially; timely response needed
  - E.g., 400M tweets/day; 500TB data/day
• Limits of storage and computational capability: process one new datum at a time
• Goal: develop simple online algorithms with performance guarantees
K. Slavakis, S. J. Kim, G. Mateos, and G. B. Giannakis, "Stochastic approximation vis-à-vis online learning for Big Data," IEEE Signal Processing Magazine, vol. 31, no. 6, Nov. 2014.
Roadmap of stochastic approximation
• Newton-Raphson iteration (deterministic variable) + law of large numbers
• Pdfs not available! Solve with the variable now random: stochastic optimization
• Adaptive filtering: LMS and RLS as special cases
• Applications: estimation of pdfs, Gaussian mixtures, maximum-likelihood estimation, etc.
Numerical analysis basics
• Problem 1: find a root of $f$, i.e., solve $f(x) = 0$
  - Newton-Raphson iteration: select $x_0$ and run $x_{k+1} = x_k - f(x_k)/f'(x_k)$
  - Popular step-size choices temper the full Newton step
• Problem 2: find a min or max of $F$ with the gradient method: $x_{k+1} = x_k - \mu_k \nabla F(x_k)$
• Stochastic counterparts of these deterministic iterations?
Robbins-Monro algorithm
• Goal: find a root of $g(\theta) = \mathbb{E}[f(\theta; \mathbf{y})]$ when only noisy samples $f(\theta_k; \mathbf{y}_k)$ are available
• Robbins-Monro (R-M) iteration: $\theta_{k+1} = \theta_k - \mu_k f(\theta_k; \mathbf{y}_k)$; the expectation is estimated on-the-fly by sample averaging
• Q: How general can the class of such problems be? A: As general as adaptive algorithms are in SP, communications, control, machine learning, and pattern recognition
• The function $g$ and the data pdfs are unknown!
H. Robbins and S. Monro, "A stochastic approximation method," Annals Math. Statistics, vol. 22, no. 3, pp. 400-407, 1951.
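A minimal R-M sketch for linear regression, where the iteration reduces to LMS with a diminishing step; the true parameter vector and noise level are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([1.0, -2.0, 0.5])

theta = np.zeros(3)
for k in range(1, 5001):
    x = rng.standard_normal(3)                  # regressor drawn on the fly
    y = x @ theta_true + 0.1 * rng.standard_normal()
    mu = 1.0 / k                                # sum(mu) = inf, sum(mu^2) < inf
    # R-M step on the root-finding problem E[x (y - x' theta)] = 0 (i.e., LMS)
    theta = theta + mu * x * (y - x @ theta)
print(theta)   # close to theta_true
```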
Convergence analysis
• Theorem: if (As1)-(As2) hold and the step sizes satisfy $\sum_k \mu_k = \infty$ and $\sum_k \mu_k^2 < \infty$, then R-M converges in the m.s.s.
  - (As1) ensures a unique root (two lines with positive slope cross at $\theta_*$)
  - (As2) is a finite-variance requirement
• Diminishing step-size (e.g., $\mu_k = 1/k$)
  - The limit does not oscillate around $\theta_*$ (as in LMS)
  - But convergence ($\mu_k \to 0$) should not be too fast
• The i.i.d. and (As1) assumptions can be relaxed!
H. Kushner and G. G. Yin, Stochastic Approximation and Recursive Algorithms and Applications, Second ed., Springer, 2003.
Online learning and SA
• Given basis functions and training data, learn the nonlinear function expanded over them
• Batch LMMSE solution: solve the normal equations
• The R-M iteration on this problem yields least mean-squares (LMS)
RLS as SA
• Scalar step size: 1st-order iteration (LMS)
• Matrix step size: 2nd-order iteration; the required correlation matrix is usually unavailable!
• Replacing it with the sample correlation yields RLS
SA vis-a-vis stochastic optimization
• Find the minimizer of an expected loss over a convex set; online convex optimization utilizes projection onto the convex set
• Projected R-M iteration under strong convexity:
  - Theorem: under (As1), (As2), with i.i.d. data and $\mu_k = O(1/k)$, the mean-square estimation error decays as $O(1/k)$
  - Corollary: if the cost is also Lipschitz, the expected optimality gap decays as $O(1/k)$
A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro, "Robust stochastic approximation approach to stochastic programming," SIAM J. Optimization, vol. 19, no. 4, pp. 1574-1609, Jan. 2009.
Online convex optimization
• Motivating example: website advertising company
  - Spends advertising budget on different sites
  - Obtains clicks at the end of the week
• Online learning: a repeated game between learner and nature (adversary). For OCO, at each round $t$:
  - Learner updates $\theta_t$
  - Nature provides a convex loss $f_t$
  - Learner suffers loss $f_t(\theta_t)$
Regret as performance metric
• Def. 1 (regret): $R_T = \sum_{t=1}^{T} f_t(\theta_t) - \min_{\theta} \sum_{t=1}^{T} f_t(\theta)$
• Def. 2 (worst-case regret): the supremum of $R_T$ over all loss sequences nature can play
• Goal: select $\{\theta_t\}$ s.t. $R_T/T \to 0$, i.e., regret is sublinear!
• How to update $\theta_t$?
Online gradient descent
OGD: let $\theta_1 \in S$ and for $t = 1, 2, \ldots$
  - Choose a step-size $\mu_t$ and a (sub)gradient $g_t \in \partial f_t(\theta_t)$
  - Update $\theta_{t+1} = \Pi_S(\theta_t - \mu_t g_t)$, with $\Pi_S$ the projection onto $S$
• Theorem: for convex $\{f_t\}$, the regret admits a step-size-dependent bound; if each $f_t$ is $L$-Lipschitz continuous w.r.t. $\|\cdot\|_2$ and $S$ is bounded, then with $\mu_t \propto 1/\sqrt{t}$ the regret grows as $O(L\sqrt{T})$, i.e., sublinearly
• Unlike SA, no stochastic assumptions; but also no primal variable convergence
S. Shalev-Shwartz, "Online learning and online convex optimization," Found. Trends Machine Learn., vol. 4, no. 2, pp. 107-194, 2012.
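A minimal OGD sketch with squared losses, a Euclidean-ball feasible set, and $\mu_t \propto 1/\sqrt{t}$; the losses "nature" plays here are random draws, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
theta = np.zeros(2)
radius, T = 5.0, 2000
cum_loss = 0.0
for t in range(1, T + 1):
    a, b = rng.standard_normal(2), rng.standard_normal()   # nature picks a loss
    cum_loss += 0.5 * (a @ theta - b) ** 2                 # suffer f_t(theta_t)
    g = (a @ theta - b) * a                                # (sub)gradient of f_t
    theta = theta - (1.0 / np.sqrt(t)) * g                 # mu_t ~ 1/sqrt(t)
    nrm = np.linalg.norm(theta)                            # projection onto the ball S
    if nrm > radius:
        theta *= radius / nrm
print(cum_loss / T)    # average loss; regret/T shrinks as T grows
```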
Gradient-free OCO
• Setting: the form of $f_t$ is unknown; only samples of it can be obtained
• Main idea: obtain an unbiased gradient estimate on-the-fly
• Def.: given $\delta > 0$, define a smoothed version of $f$ as $\hat{f}(\theta) = \mathbb{E}_{\mathbf{v} \sim \mathrm{U}(B)}[f(\theta + \delta \mathbf{v})]$, with $\mathrm{U}(B)$ the uniform distribution over the unit ball $B$
• Theorem: $\nabla \hat{f}(\theta) = \frac{d}{\delta}\, \mathbb{E}_{\mathbf{u} \sim \mathrm{U}(\mathbb{S})}[f(\theta + \delta \mathbf{u})\, \mathbf{u}]$, with $\mathbb{S}$ the unit sphere
  - If $f$ is $L$-Lipschitz continuous, then $|\hat{f}(\theta) - f(\theta)| \le \delta L$
  - If $f$ is differentiable, then $\nabla \hat{f} \to \nabla f$ as $\delta \to 0$
A. D. Flaxman, A. T. Kalai, and H. B. McMahan, "Online convex optimization in the bandit setting: Gradient descent without a gradient," Proc. ACM-SIAM Symp. Discrete Algorithms, 2004.
When OCO meets SA
For $t = 1, 2, \ldots$
  - S1. Current iterate $\theta_t$
  - S2. Pick a unit vector $\mathbf{u}_t$ uniformly at random, and form the probe $\theta_t + \delta \mathbf{u}_t$
  - S3. Suffer and observe the loss $f_t(\theta_t + \delta \mathbf{u}_t)$
  - S4. Update $\theta_{t+1}$ by projected descent along $\frac{d}{\delta} f_t(\theta_t + \delta \mathbf{u}_t)\, \mathbf{u}_t$
• Theorem: if the losses $\{f_t\}$ are convex and Lipschitz continuous over a convex set, then with suitable $\mu$ and $\delta$ the expected regret is $O(T^{3/4})$
  - Compare $O(T^{3/4})$ in the agnostic (bandit) vs. $O(\sqrt{T})$ in the full-information case
A. D. Flaxman, A. T. Kalai, and H. B. McMahan, "Online convex optimization in the bandit setting: Gradient descent without a gradient," Proc. ACM-SIAM Symp. Discrete Algorithms, 2004.
Roadmap
• Context and motivation
• Critical Big Data tasks
• Optimization algorithms for Big Data
• Randomized learning
  - Randomized linear regression, leverage scores
  - Johnson-Lindenstrauss lemma
  - Randomized classification and clustering
• Scalable computing platforms for Big Data
• Conclusions and future research directions
Randomized linear algebra
• Basic tools: random sampling and random projections
• Attractive features
  - Reduced dimensionality to lower complexity with Big Data
  - Rigorous error analysis at the reduced dimension
• Ordinary least-squares (LS): given $\mathbf{y} \in \mathbb{R}^D$ and $\mathbf{X} \in \mathbb{R}^{D \times p}$ with $D \gg p$, the LS estimate and the LS-optimal prediction follow from the SVD of $\mathbf{X}$
• The SVD incurs $O(Dp^2)$ complexity. Q: What if $D$ is massive?
M. W. Mahoney, "Randomized Algorithms for Matrices and Data," Foundations and Trends in Machine Learning, vol. 3, no. 2, pp. 123-224, Nov. 2011.
Randomized LS for linear regression
• LS estimate using a (pre-conditioned) random projection matrix $\mathbf{R} = \mathbf{R}_2 \mathbf{R}_1 \in \mathbb{R}^{d \times D}$
  - $\mathbf{R}_1$: random diagonal ($\pm 1$ entries) times a Hadamard matrix; $\mathbf{R}_2$: uniform sampling/scaling
  - Subsets of data obtained by uniform sampling/scaling via $\mathbf{R}$ yield LS estimates of comparable quality
• Select the reduced dimension $d$: polynomial in $p$ and $1/\epsilon$, only logarithmic in $D$
• Complexity reduced from $O(Dp^2)$ to roughly $O(Dp \log d)$ plus the cost of the small $d \times p$ LS solve
N. Ailon and B. Chazelle, "The fast Johnson-Lindenstrauss transform and approximate nearest neighbors," SIAM Journal on Computing, 39(1):302-322, 2009.
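A sketch of the idea with a plain Gaussian sketching matrix instead of the Hadamard-preconditioned sampler; a dense Gaussian sketch is simpler to write but does not realize the speedup (the fast transform does). All sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
D, p, d = 20_000, 20, 500                     # tall LS problem, sketch dim d << D
X = rng.standard_normal((D, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(D)

theta_ls = np.linalg.lstsq(X, y, rcond=None)[0]        # exact LS, O(D p^2)

R = rng.standard_normal((d, D)) / np.sqrt(d)           # Gaussian JL sketch
theta_rand = np.linalg.lstsq(R @ X, R @ y, rcond=None)[0]   # small d x p solve

rel = np.linalg.norm(X @ theta_rand - y) / np.linalg.norm(X @ theta_ls - y)
print(rel)   # close to 1: near-optimal residual at the reduced dimension
```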
Johnson-Lindenstrauss lemma
• The workhorse for proofs involving random projections
• JL lemma: if $\epsilon \in (0, 1)$ and the reduced dimension satisfies $d = O(\epsilon^{-2} \log T)$, then for any $T$ points in $\mathbb{R}^D$ there exists a mapping $f: \mathbb{R}^D \to \mathbb{R}^d$ s.t. for all pairs
  $(1 - \epsilon)\|\mathbf{x}_i - \mathbf{x}_j\|^2 \le \|f(\mathbf{x}_i) - f(\mathbf{x}_j)\|^2 \le (1 + \epsilon)\|\mathbf{x}_i - \mathbf{x}_j\|^2$ (*)
  - Almost preserves pairwise distances!
• If $f$ is a scaled matrix with i.i.d. Gaussian entries and the reduced dimension is as above, then (*) holds w.h.p. [Indyk-Motwani 98]
• If $f$ is a scaled matrix with i.i.d. entries uniform over $\{+1, -1\}$ and the reduced dimension is as in the JL lemma, then (*) holds w.h.p. [Achlioptas 01]
W. B. Johnson and J. Lindenstrauss, "Extensions of Lipschitz maps into a Hilbert space," Contemp. Math., vol. 26, pp. 189-206, 1984.
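The lemma is easy to probe empirically with the sign-matrix construction; the constant 8 in the reduced dimension below is a typical choice, not a sharp one.

```python
import numpy as np

rng = np.random.default_rng(6)
T, D, eps = 200, 10_000, 0.2
d = int(np.ceil(8 * np.log(T) / eps**2))     # reduced dimension ~ eps^-2 log T
X = rng.standard_normal((T, D))

R = rng.choice([-1.0, 1.0], size=(d, D)) / np.sqrt(d)   # Achlioptas-style map
Xr = X @ R.T

i, j = np.triu_indices(T, k=1)
orig = np.linalg.norm(X[i] - X[j], axis=1) ** 2
proj = np.linalg.norm(Xr[i] - Xr[j], axis=1) ** 2
ratio = proj / orig
print(d, ratio.min(), ratio.max())  # ratios concentrate within [1-eps, 1+eps]
```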
Performance of randomized LS
• Theorem: for any $\epsilon$, if the reduced dimension is chosen as prescribed, then w.h.p. the randomized LS residual is within a $(1 + \epsilon)$ factor of the optimal one; the estimation error bound also involves the condition number of $\mathbf{X}$
• Uniform sampling vs. Hadamard preconditioning
  - Test case: $D = 10{,}000$ and $p = 50$
  - Performance depends on $\mathbf{X}$: plain uniform sampling suffers when row importance (leverage) is non-uniform
M. W. Mahoney, "Randomized Algorithms for Matrices and Data," Foundations and Trends in Machine Learning, vol. 3, no. 2, pp. 123-224, Nov. 2011.
Large-scale regression: Streaming and censoring
• Key idea: sequentially test and update RLS estimates only for informative data
• The censoring threshold $\tau$ controls the average data reduction
• Adaptive censoring (AC) rule: censor datum $n$ if its residual is small, i.e.,
  $c_n = 1$ if $|y_n - \mathbf{x}_n^\top \boldsymbol{\theta}_{n-1}| \le \tau$, and $c_n = 0$ otherwise
• The AC-RLS online estimate updates only on uncensored data
D. K. Berberidis, G. Wang, G. B. Giannakis, and V. Kekatos, "Adaptive estimation from big data via censored stochastic approximation," Proc. of Asilomar Conf., Pacific Grove, CA, Nov. 2014.
Censoring vis-a-vis random projections
• Random projections for linear regression [Mahoney 11]
  - Data-agnostic reduction decoupled from the LS solution
• Adaptive censoring (AC-RLS)
  - Data-driven measurement selection
  - Suitable for streaming data
  - Minimal memory requirements
Performance comparison
• Synthetic: $D = 10{,}000$, $p = 300$ (50 MC runs); real data: ground truth estimated from the full set
• Highly non-uniform data
• AC-RLS outperforms alternatives at comparable complexity
• Robustness to uniform data (all rows of X equally "important")
Big data classification
• Support vector machines (SVM): the workhorse for linear discriminant analysis
  - Data: $\{(\mathbf{x}_t, y_t)\}_{t=1}^{T}$ with labels $y_t \in \{\pm 1\}$
  - Goal: separating hyperplane that maximizes the between-class margin
• Binary classification primal problem: $\min_{\mathbf{w}, b, \{\xi_t\}} \tfrac{1}{2}\|\mathbf{w}\|^2 + C\sum_t \xi_t$ s.t. $y_t(\mathbf{w}^\top \mathbf{x}_t + b) \ge 1 - \xi_t$, $\xi_t \ge 0$; margin: $2/\|\mathbf{w}\|$
Randomized SVM classifier
• Dual problem: $\max_{\boldsymbol{\alpha}} \sum_t \alpha_t - \tfrac{1}{2}\sum_{s,t} \alpha_s \alpha_t y_s y_t\, \mathbf{x}_s^\top \mathbf{x}_t$ s.t. $0 \le \alpha_t \le C$, $\sum_t \alpha_t y_t = 0$
• Random projections approach
  - Given $\mathbf{R}$, let $\tilde{\mathbf{x}}_t = \mathbf{R}\mathbf{x}_t$
  - Solve the transformed dual problem in the reduced dimension
S. Paul, C. Boutsidis, M. Magdon-Ismail, and P. Drineas, "Random projections for linear support vector machines," ACM Trans. Knowledge Discovery from Data, 2014.
Joint subspace tracking-classification
• Q: Streaming, structured data? Missing features?
• Goal: given streaming data, jointly find the classifier and the subspace online
• Criterion: LS fit + convex surrogate for rank + margin maximizer
• Alternating minimization (at time $t$)
  - Step 1: Projection coefficient updates
  - Step 2: Subspace tracking
  - Step 3: Classifier update
• Tractable as $t$ grows; low rank prudent also for the classifier
F. Sheikholeslami, M. Mardani, and G. B. Giannakis, "Classification of streaming big data with misses," Proc. of Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2-5, 2014.
Simulated tests
• Synthetic data; 70% uniformly missing features
• [Figure: generalization error probability vs. iteration index and vs. noise standard deviation, for full data, data with imputed entries, data with missing entries, and incomplete-data-adjusted SVM.]
• The joint classification design outperforms its competing peers
Adjusted SVM: G. Chechik, G. Heitz, G. Elidan, P. Abbeel, and D. Koller, "Max-margin classification of data with absent features," J. of Machine Learning Research, vol. 9, pp. 1-21, 2008.
Big data clustering
• Given $\{\mathbf{x}_t\}_{t=1}^{T} \subset \mathbb{R}^D$, assign the data to $K$ clusters
• Key idea: reduce dimensionality via random projections
• Desiderata: preserve the pairwise data distances in lower dimensions
• Feature extraction: construct $d$ combined features (e.g., via a random matrix $\mathbf{R}$); apply K-means in the $d$-space
• Feature selection: select $d$ of the input features (rows of $\mathbf{X}$); apply K-means in the $d$-space
C. Boutsidis, A. Zouzias, M. W. Mahoney, and P. Drineas, "Randomized dimensionality reduction for K-means clustering," arXiv:1110.2897, 2013.
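A sketch of the feature-extraction route: cluster the randomly projected data with a plain Lloyd's k-means. Data sizes, K, and d are illustrative assumptions; a library k-means would serve equally well.

```python
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    """Plain Lloyd iterations; X is T x D, returns labels and centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == k].mean(0) if np.any(lab == k) else C[k]
                      for k in range(K)])
    return lab, C

rng = np.random.default_rng(7)
T, D, d, K = 600, 5_000, 100, 3
centers = 5.0 * rng.standard_normal((K, D))
X = centers[rng.integers(K, size=T)] + rng.standard_normal((T, D))

R = rng.standard_normal((d, D)) / np.sqrt(d)   # random-projection features
lab_rp, _ = kmeans(X @ R.T, K)                 # cluster in the d-space
lab_full, _ = kmeans(X, K)
# agreement up to label permutation is high when distances are preserved
```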
Random sampling and consensus
• RANSAC: for robust LS estimation with outliers; toy example: line estimation with unknown noise
  - For $i = 1, \ldots, I$: sample a minimal subset of the data; compute the LS estimate; record its consensus set (other costs possible, e.g., ML)
  - Keep the hypothesis with the largest consensus set
• Our ideas: (a) sketch and validate with an affordable consensus subset; (b) employ RANSAC-like hypothesis testing for large-scale clustering
M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Comm. ACM, vol. 24, pp. 381-395, June 1981.
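A minimal RANSAC sketch for the line-estimation toy example; the iteration count, inlier threshold, and outlier fraction are arbitrary choices.

```python
import numpy as np

def ransac_line(x, y, iters=100, thresh=0.2, seed=0):
    """RANSAC for y = a*x + b with outliers: repeatedly fit on a minimal
    2-point sample and keep the model with the largest consensus set."""
    rng = np.random.default_rng(seed)
    best = None
    best_inliers = -1
    for _ in range(iters):
        i, j = rng.choice(len(x), 2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inl = np.abs(y - (a * x + b)) < thresh      # consensus set
        if inl.sum() > best_inliers:
            best_inliers, best = inl.sum(), inl
    # refit by LS on the winning consensus set
    A = np.column_stack([x[best], np.ones(best.sum())])
    return np.linalg.lstsq(A, y[best], rcond=None)[0]

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(200)
y[:40] += rng.uniform(-20, 20, 40)                  # 20% gross outliers
print(ransac_line(x, y))                            # ~ [2.0, 1.0]
```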
Random sketching and validation (SkeVa)
• Randomly select informative dimensions
• Algorithm: for $i = 1, \ldots, I$
  - Sketch $d$ dimensions; run k-means on the sketch
  - Re-sketch dimensions; augment the centroids
  - Validate using a consensus set; keep the best trial
• Similar approaches possible for other learning tasks
• Sequential and kernel variants available
P. A. Traganitis, K. Slavakis, and G. B. Giannakis, "Clustering high-dimensional data via random sampling and consensus," Proc. of GlobalSIP, Atlanta, GA, December 2014.
Testing randomized clustering
• KDDb dataset (subset): $D = 2{,}990{,}384$, $T = 10{,}000$, $K = 2$
• RP: [Boutsidis et al 13]; SkeVa: sketch and validate
P. Traganitis, K. Slavakis, and G. B. Giannakis, "Sketch and validate for big data clustering," IEEE Journal of Selected Topics in Signal Processing, June 2015.
Roadmap
• Context and motivation
• Critical Big Data tasks
• Optimization algorithms for Big Data
• Randomized learning
• Scalable computing platforms for Big Data
  - MapReduce platform
  - Least-squares and k-means in MapReduce
  - Graph algorithms
• Conclusions and future research directions
Parallel programming issues
• Challenges
  - Load balancing; communications
  - Scattered data; synchronization (deadlocks, read-write racing)
• General vs. application-specific implementations
  - Efficiency vs. development-overhead tradeoffs
• Choosing the right tool is important!
MapReduce
• General parallel programming (PP) paradigm for large-scale problems
• Easy parallelization at a high level
  - Synchronization and communication challenges are taken care of
• Initially developed by Google
• Processing divided in two stages (S1 and S2)
  - S1 (Map): PP for portions of the input data, one map task per processor
  - S2 (Reduce): aggregation of S1 results (possibly in parallel)
Basic ingredients of MapReduce
• All data are in the form <key, value>, e.g., key = machine, value = 5
• The MapReduce model consists of [Lin-Dyer 10]
  - Input key-value pairs
  - Mappers: programs applying a Map operation to a subset of input data; run the same operation in parallel; produce intermediate key-value pairs
  - Reducers: programs applying a Reduce operation (e.g., aggregation) to a subset of intermediate key-value pairs with the same key; execute the same operation in parallel to generate output key-value pairs
  - Execution framework: transparent to the programmer; performs tasks such as synchronization and communication
J. Lin and C. Dyer, Data-Intensive Text Processing with MapReduce, Morgan & Claypool Pubs., 2010.
Running MapReduce
• Each job has three phases
• Phase 1: Map
  - Receive input key-value pairs; produce intermediate key-value pairs
• Phase 2: Shuffle and sort
  - Intermediate key-value pairs sorted and sent to the proper reducers
  - Handled by the execution framework
  - Single point of communication; slowest phase of MapReduce
• Phase 3: Reduce
  - Receive intermediate key-value pairs; determine the final ones
  - All mappers must have finished for the reducers to start!
The MapReduce model
• Map phase: each mapper aggregates its own intermediate key-value pairs
• Reduce phase: intermediate key-value pairs are assigned to reducers, e.g., by key mod #reducers
Least-squares in MapReduce
• Given $\{(y_n, \mathbf{x}_n)\}_{n=1}^{N}$, find $\hat{\boldsymbol{\theta}}_{LS}$
• Map phase
  - Mappers (here 3) calculate the partial sums $\sum_n \mathbf{x}_n \mathbf{x}_n^\top$ and $\sum_n \mathbf{x}_n y_n$ over their data subsets
• Reduce phase
  - The reducer aggregates the partial sums
  - Computes $\mathbf{X}^\top\mathbf{X}$ and $\mathbf{X}^\top\mathbf{y}$
  - Finds $\hat{\boldsymbol{\theta}}_{LS} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}$
C. Chu, S. K. Kim, Y. Lin, Y. Yu, G. Bradski, A. Ng, and K. Olukotun, "Map-Reduce for machine learning on multicore," pp. 281-288, NIPS, 2006.
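The map/reduce decomposition above can be mimicked in plain Python: mappers emit the partial sums, a reducer aggregates them and solves the small normal equations. The three-way split below stands in for three mappers; data and sizes are illustrative.

```python
import numpy as np
from functools import reduce

def ls_map(chunk):
    """Mapper: emit the partial sums (X_b' X_b, X_b' y_b) of its data block."""
    Xb, yb = chunk
    return Xb.T @ Xb, Xb.T @ yb

def ls_reduce(acc, part):
    """Reducer: aggregate partial sums."""
    return acc[0] + part[0], acc[1] + part[1]

rng = np.random.default_rng(9)
X = rng.standard_normal((9_000, 10))
y = X @ np.arange(1.0, 11.0) + 0.1 * rng.standard_normal(9_000)

chunks = [(X[i::3], y[i::3]) for i in range(3)]     # 3 "mappers"
XtX, Xty = reduce(ls_reduce, map(ls_map, chunks))   # shuffle / aggregate
theta = np.linalg.solve(XtX, Xty)                   # reducer's final solve
print(theta)   # ~ [1, 2, ..., 10]
```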
K-means in MapReduce
• Serial k-means
  - Pick K random initial centroids
  - Iterate: assign points to the cluster of the closest centroid; update centroids by averaging the data per cluster
• MapReduce k-means
  - A central unit picks K random initial centroids
  - Each iteration is a MapReduce job (multiple jobs required!)
• Mappers
  - Each assigned a subset of the data
  - Find the closest centroid, the partial sum of the data, and #data per cluster
  - Generate key-value pairs: key = cluster, value = (partial sum, #points)
• Reducers
  - Each assigned a subset of the clusters
  - Aggregate partial sums for the corresponding clusters
  - Update the centroid locations
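One MapReduce iteration of k-means can be simulated the same way: mappers emit (cluster, (partial sum, count)) pairs and a reducer averages them. Chunk counts, K, and the toy data are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def km_map(chunk, C):
    """Mapper: for its data subset, emit (cluster, [partial sum, count])."""
    out = defaultdict(lambda: [0.0, 0])
    for x in chunk:
        k = int(np.argmin(((C - x) ** 2).sum(axis=1)))   # closest centroid
        out[k][0] = out[k][0] + x
        out[k][1] += 1
    return out

def km_reduce(pairs):
    """Reducer: aggregate partial sums per cluster; update centroids."""
    tot = defaultdict(lambda: [0.0, 0])
    for out in pairs:
        for k, (s, n) in out.items():
            tot[k][0] = tot[k][0] + s
            tot[k][1] += n
    return {k: s / n for k, (s, n) in tot.items()}

rng = np.random.default_rng(10)
X = np.vstack([rng.standard_normal((100, 2)) + m for m in ([0, 0], [6, 6])])
C = X[rng.choice(len(X), 2, replace=False)]        # central unit: K = 2 seeds
for _ in range(10):                                # one MapReduce job per iter
    new = km_reduce([km_map(chunk, C) for chunk in np.array_split(X, 4)])
    for k, c in new.items():
        C[k] = c
print(C)   # ~ the two cluster means [0, 0] and [6, 6]
```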
Graph algorithms
• Require communication between processing nodes across multiple iterations; MapReduce not ideal!
• Pregel
  - Based on the Bulk Synchronous Parallel (BSP) model
  - Developed by Google for use on large graph problems
  - Each vertex is characterized by a user-defined value
  - Each edge is characterized by a source vertex, a destination vertex, and a user-defined value
• Computation is performed in supersteps
  - A user-defined function is executed on each vertex (in parallel)
  - Messages from the previous superstep are received
  - Messages for the next superstep are sent
  - Supersteps are separated by global synchronization points (sync barriers)
• More efficient than MapReduce for graphs!
G. Malewicz, M. H. Austern, A. J. C. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski, "Pregel: A system for large-scale graph processing," SIGMOD, Indianapolis, 2010.
Wrap-up for scalable computing
• MapReduce
  - Simple parallel processing; code easy to understand
  - Great for batch problems; applicable to many problems
• However
  - The framework can be restrictive
  - Not designed for online algorithms
  - Inefficient for graph processing
• Pregel
  - Tailored for graph algorithms
  - More efficient relative to MapReduce
• Available open-source implementations
  - Hadoop (MapReduce); Hama (Pregel); Giraph (Pregel)
• Machine learning algorithms can be found in Mahout!
Roadmap
• Context and motivation
• Critical Big Data tasks
• Optimization algorithms for Big Data
• Randomized learning
• Scalable computing platforms for Big Data
• Conclusions and future research directions
Tutorial summary
• Big Data modeling and tasks
  - Dimensionality reduction
  - Succinct representations
  - Vectors, matrices, and tensors
• Optimization algorithms
  - Decentralized, parallel, streaming
  - Data sketching
• Implementation platforms
  - Scalable computing platforms
  - Analytics in the cloud
Timeliness
• Special sessions in recent IEEE SP Society / EURASIP meetings
  - Signal and Information Processing for Big Data
  - Challenges in High-Dimensional Learning
  - Information Processing over Networks
  - Trends in Sparse Signal Processing
  - Sparse Signal Techniques for Web Information Processing
  - Information Processing for Big Data
  - SP for Big Data
  - Optimization Algorithms for High-Dimensional SP
  - New Directions in High-Dimensional Optimization
  - Advances in Manifold-based Signal and Information Processing
Importance to funding agencies
• NSF, NIH, DARPA, DoD, DoE, and US Geological Survey
• Sample programs
  - NSF 14-543 (BIGDATA)
  - DARPA ADAMS (Anomaly Detection on Multiple Scales)
• DoD: 20+ solicitations
  - Data to decisions; autonomy; human-machine systems
Source: www.whitehouse.gov/sites/default/files/microsites/ostp/big_data_press_release_final_2.pdf
NSF-ECCS sponsored workshop
• NSF Workshop on Big Data: From Signal Processing to Systems Engineering, March 21-22, 2013, Arlington, Virginia, USA
• Workshop program and slides: http://www.dtc.umn.edu/bigdata/program.php
• Workshop final report: http://www.dtc.umn.edu/bigdata/report.php
Sponsored by Electrical, Communications and Cyber Systems (ECCS).
Recent IEEE SP Magazine issue
• Special issue on Signal Processing for Big Data, IEEE Signal Processing Magazine, vol. 31, no. 5, September 2014
  - Tutorials on: modeling and optimization for Big Data; advances in convex optimization algorithms; outlier-robust, sequential detection for Big Data; parallel algorithms for decomposing big tensors; signal processing on large graphs; sparse Fourier transforms; collaborative bike sensing for geographic enrichment
• June 2015 issue of the IEEE J. of Selected Topics in Signal Processing (also on Big Data)
• Open SP for Big Data special issue: EURASIP J. of Advances in Signal Processing
Questions?
http://spincom.umn.edu
Collaborators: B. Baingana (UofM); Dr. J. A. Bazerque (UTE); D. Berberidis (UofM); Dr. P. A. Forero (SPAWAR); Prof. V. Kekatos (VaTech); Prof. S.-J. Kim (UMBC); M. Mardani (Stanford); Y. Zhang (UofM); F. Sheikholeslami (UofM); P. Traganitis (UofM); G. Wang (UofM); Prof. H. Zhu (UIUC)