Signal Processing for Big Data


1 Signal Processing for Big Data. G. B. Giannakis, K. Slavakis, and G. Mateos. Acknowledgments: NSF Grants EARS, EAGER; MURI Grant No. AFOSR FA. Lisbon, Portugal, September 1, 2014
2 Big Data: A growing torrent. Source: McKinsey Global Institute, Big Data: The next frontier for innovation, competition, and productivity, May 2011
3 Big Data: Capturing its value. Source: McKinsey Global Institute, Big Data: The next frontier for innovation, competition, and productivity, May 2011
4 Big Data and NetSci analytics Online social media Internet Clean energy and grid analytics Robot and sensor networks Biological networks Square kilometer array telescope Desiderata: process, analyze, and learn from large pools of network data 4
5 Challenges. Big: sheer volume of data; distributed and parallel processing; security and privacy concerns. Modern massive datasets involve many attributes: parsimonious models to ease interpretability; enhanced predictive performance. Fast: real-time streaming data; online processing; quick rough answer vs. slow accurate answer? Messy: outliers and misses; robust imputation algorithms. Good news: ample research opportunities arise!
6 Opportunities. Theoretical and statistical foundations of Big Data analytics: big tensor data models and factorizations; high-dimensional statistical SP; network data visualization; pursuit of low-dimensional structure; analysis of multi-relational data; common principles across networks; resource tradeoffs. Algorithms and implementation platforms to learn from massive datasets: randomized algorithms; scalable online, decentralized optimization; convergence and performance guarantees; information processing over graphs (graph SP); novel architectures for large-scale data analytics; robustness to outliers and missing data.
7 SP-relevant Big Data themes. K. Slavakis, G. B. Giannakis, and G. Mateos, Modeling and optimization for Big Data analytics, IEEE Signal Processing Magazine, vol. 31, no. 5, Sep. 2014
8 Roadmap Context and motivation Critical Big Data tasks Encompassing and parsimonious data modeling Dimensionality reduction, data visualization Data cleansing, anomaly detection, and inference Optimization algorithms for Big Data Randomized learning Scalable computing platforms for Big Data Conclusions and future research directions 8
9 Encompassing model. Observed data = background (low rank) + dictionary x sparse matrix + noise; the factors capture patterns, innovations, (co)clusters, and outliers. A subset of observations and a projection operator allow for misses. Large-scale data; any of the model components may be unknown.
10 Subsumed paradigms. Structure-leveraging criterion: nuclear norm (sum of singular values) plus l1 norm, with or without misses. Special cases: compressive sampling (CS) [Candes-Tao 05]; dictionary learning (DL) [Olshausen-Field 97]; nonnegative matrix factorization (NMF) [Lee-Seung 99]; principal component pursuit (PCP) [Candes et al 11]; principal component analysis (PCA) [Pearson 1901].
11 PCA formulations Training data Minimum reconstruction error Compression Reconstruction Maximum variance Component analysis model Solution: 11
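A minimal NumPy sketch of the SVD route to these PCA formulations: the minimum-reconstruction-error solution keeps the top principal directions of the centered data (the matrix sizes and variable names are illustrative, not from the slides).

```python
import numpy as np

def pca(Y, q):
    """Minimum-reconstruction-error PCA via the SVD of centered data."""
    mu = Y.mean(axis=1, keepdims=True)              # sample mean
    U, s, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    Uq = U[:, :q]                                   # top-q principal directions
    S = Uq.T @ (Y - mu)                             # compressed representation
    return Uq, S, mu

rng = np.random.default_rng(0)
Y = rng.standard_normal((5, 200))                   # D = 5 features, T = 200 samples
Uq, S, mu = pca(Y, q=2)
Y_hat = Uq @ S + mu                                 # reconstruction from q dimensions
```

The compression-reconstruction and maximum-variance views coincide because the SVD truncation is optimal in the least-squares sense.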
12 Dual and kernel PCA. SVD; Gram matrix of inner products. Kernel (K)PCA, e.g., [Scholkopf-Smola 01]: mapping to a feature space stretches the data geometry; kernel trick. B. Schölkopf and A. J. Smola, Learning with Kernels, MIT Press, Cambridge, MA
13 Multidimensional scaling. Given dissimilarities or distances, identify low-dimensional vectors that preserve them. LS or Kruskal-Shepard MDS; classical MDS, whose solution coincides with dual PCA. Distance-preserving downscaling of the data dimension. I. Borg and P. J. Groenen, Modern Multidimensional Scaling: Theory and Applications, Springer, NY
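A small sketch of the classical MDS recipe just described: double-center the squared distances to recover a Gram matrix, then embed via its top eigenpairs (data sizes are illustrative).

```python
import numpy as np

def classical_mds(D, q):
    """Classical MDS: pairwise distances D -> q-dimensional embedding."""
    T = D.shape[0]
    J = np.eye(T) - np.ones((T, T)) / T             # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                     # recovered Gram matrix
    w, V = np.linalg.eigh(B)                        # eigendecomposition (ascending)
    idx = np.argsort(w)[::-1][:q]                   # top-q eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))                    # points that truly live in 2-D
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Z = classical_mds(D, q=2)                           # embedding, up to rotation
D_hat = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
```

For data that genuinely lie in a q-dimensional space, the recovered pairwise distances match the inputs exactly, up to rotation and reflection of the embedding.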
14 Local linear embedding. For each datum, find a neighborhood, e.g., via k-nearest neighbors. A weight matrix captures the local affine relations; sparse variants in [Elhamifar-Vidal 11]. Identify low-dimensional vectors preserving the local geometry [Saul-Roweis 03]. Solution: the rows of the embedding are the minor eigenvectors of the embedding matrix, excluding the constant one. L. K. Saul and S. T. Roweis, Think globally, fit locally: Unsupervised learning of low dimensional manifolds, J. Machine Learning Research, vol. 4, 2003
15 Application to graph visualization. Undirected graph: nodes or vertices; edges or communicating pairs of nodes; all nodes that communicate with a node form its neighborhood. Given centrality metrics, identify weights or geometry via centrality-constrained (CC) LLE, then visualize. Centrality constraints, e.g., node degree; only correlations or dissimilarities need be known. B. Baingana and G. B. Giannakis, Embedding graphs under centrality constraints for network visualization, 2014. [Online]. Available: arxiv:
16 Visualizing the Gnutella network. Gnutella: peer-to-peer file-sharing network. CC-LLE with centrality captured by node degree; snapshots of 08/04/2012 and 08/24/2012 with processing times in seconds. B. Baingana and G. B. Giannakis, Embedding graphs under centrality constraints for network visualization, 2014. [Online]. Available: arxiv:
17 Dictionary learning. Solve jointly for the dictionary and the sparse coefficients: sparse coding is a Lasso task; the dictionary update is a constrained LS task. Alternating minimization; both subproblems are convex. Special case of block coordinate descent methods (BCDMs) [Tseng 01]; under certain conditions, converges to a stationary point. B. A. Olshausen and D. J. Field, Sparse coding with an overcomplete basis set: A strategy employed by V1?, Vision Research, vol. 37, no. 23, 1997
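A compact sketch of the alternating minimization just described, with ISTA standing in for the Lasso sparse-coding step and a pseudoinverse LS step (followed by column renormalization) for the constrained dictionary update; all sizes, the regularization weight, and iteration counts are illustrative assumptions.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dict_learn(Y, q, lam=0.05, outer=30, ista=50):
    """Alternate Lasso sparse coding (ISTA) with an LS dictionary update."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Y.shape[0], q))
    D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
    S = np.zeros((q, Y.shape[1]))
    for _ in range(outer):
        step = 1.0 / np.linalg.norm(D, 2) ** 2      # ISTA stepsize 1/L
        for _ in range(ista):                       # sparse coding (Lasso)
            S = soft(S - step * (D.T @ (D @ S - Y)), step * lam)
        D = Y @ np.linalg.pinv(S)                   # LS dictionary update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, S

rng = np.random.default_rng(1)
D0 = rng.standard_normal((10, 15))
D0 /= np.linalg.norm(D0, axis=0)
S0 = np.zeros((15, 100))
for j in range(100):                                # 3 active atoms per datum
    S0[rng.choice(15, 3, replace=False), j] = rng.standard_normal(3)
Y = D0 @ S0
D_hat, S_hat = dict_learn(Y, q=15)
```

Each subproblem is convex given the other block of variables, which is what makes the BCDM view (and its stationary-point guarantee) applicable.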
18 Joint DL-LLE paradigm. Image inpainting by local-affine-geometry-preserving DL; missing values; PSNR reported in dB. K. Slavakis, G. B. Giannakis, and G. Leus, Robust sparse embedding and reconstruction via dictionary learning, Proc. CISS, Baltimore, MD, USA
19 Online dictionary learning. Data arrive sequentially. Inpainting of a 12-Mpixel damaged image: S1, learn the dictionary from clean patches; S2, remove the text by sparse coding over the clean dictionary. J. Mairal, F. Bach, J. Ponce, and G. Sapiro, Online learning for matrix factorization and sparse coding, J. Machine Learning Research, vol. 11, Mar. 2010
20 Union of subspaces and subspace clustering. Parsimonious model for clustered data. Subspace clustering (SC): identify bases for the subspaces and the data-cluster associations. SC as DL with block-sparse coefficients; with a single subspace, SC reduces to PCA. R. Vidal, Subspace clustering, IEEE Signal Processing Magazine, vol. 28, no. 2, March 2011
21 Modeling outliers. Outlier variables: nonzero if the datum is an outlier, zero otherwise. Nominal data obey the linear model; outliers something else. Linear regression [Fuchs 99], [Wright-Ma 10], [Giannakis et al 11]. Both the regression coefficients and the outliers are unknown; the problem is underdetermined, but the outlier vector is typically sparse!
22 Robustifying PCA. Natural (sparsity-leveraging) estimator; a tuning parameter controls sparsity in the number of outliers. Q: Does (P1) yield robust estimates? A: Yes! The Huber estimator is a special case. Formally justifies the outlier-aware model and its estimator; ties sparse regression with robust statistics. G. Mateos and G. B. Giannakis, Robust PCA as bilinear decomposition with outlier sparsity regularization, IEEE Transactions on Signal Processing, Oct. 2012
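A toy sketch of the sparsity-leveraging idea: alternate a rank-q background fit with soft-thresholding of the residual, so large residual entries are flagged as outliers. This simplified version thresholds entrywise (the cited estimator is a bilinear decomposition with outlier-sparsity regularization); dimensions, the tuning parameter, and the planted outliers are all illustrative.

```python
import numpy as np

def robust_pca(Y, q, lam=1.0, iters=50):
    """Alternate a rank-q fit with entrywise soft-thresholding of the residual."""
    O = np.zeros_like(Y)                            # sparse outlier estimate
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y - O, full_matrices=False)
        L = (U[:, :q] * s[:q]) @ Vt[:q]             # low-rank background fit
        R = Y - L
        O = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # prox of lam*||O||_1
    return L, O

rng = np.random.default_rng(0)
L0 = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
Y = L0.copy()
spots = [(0, 0), (5, 17), (20, 33)]                 # planted gross outliers
for i, j in spots:
    Y[i, j] += 10.0
L, O = robust_pca(Y, q=2)
```

The tuning parameter lam trades off how many entries are declared outliers against fidelity of the low-rank fit, mirroring the role of the sparsity-controlling parameter in (P1).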
23 Prior art. Robust covariance matrix estimators [Campbell 80], [Huber 81]. M-type estimators in computer vision [Xu-Yuille 95], [De la Torre-Black 03]. Rank minimization with the nuclear norm, e.g., [Recht-Fazel-Parrilo 10]. Matrix decomposition [Candes et al 10], [Chandrasekaran et al 11]. Principal component pursuit (PCP): observed = low rank + sparse. Broadening the model: singing voice separation [Huang et al 12]; face recognition [Wright-Ma 10].
24 Video surveillance Background modeling from video feeds [De la TorreBlack 01] Data PCA Robust PCA Outliers Data: 24
25 Big Five personality factors. Measure five broad dimensions of personality traits [Costa-McRae 92]. Eugene-Springfield BFI sample (WEIRD). Big Five Inventory (BFI): short questionnaire (44 items); rate 1-5, e.g., `I see myself as someone who is talkative', `...who is full of energy'. Outlier ranking. Data: courtesy of Prof. L. Goldberg, provided by Prof. N. Waller. Handbook of personality: Theory and research, O. P. John, R. W. Robins, and L. A. Pervin, Eds., Guilford Press, New York, NY
26 Robust unveiling of communities. Robust kernel PCA for identification of cohesive subgroups. Network: NCAA football teams (vertices), Fall '00 games (edges). Partitioned graph with outliers; row/column-permuted adjacency matrix; ARI reported. Identified exactly: Big 10, Big 12, MWC, SEC; outliers: independent teams. Data:
27 Load curve data cleansing and imputation. Load curve: electric power consumption recorded periodically. Reliable data: key to realize the smart grid vision [Hauser 09]. Missing data: faulty meters, communication errors, few PMUs. Outliers: unscheduled maintenance, strikes, sport events [Chen et al 10]. Low-rank nominal load profiles; sparse outliers across buses and time. Approach: load cleansing and imputation via distributed robust PCA. G. Mateos and G. B. Giannakis, Load curve data cleansing and imputation via sparsity and low rank, IEEE Transactions on Smart Grid, Dec. 2013
28 NorthWrite data. Power consumption of schools, a government building, and a grocery store ('05-'10). Cleansing and imputation. Outliers: building operational transition (shoulder) periods. Prediction error: 6% for 30% missing data (8% for 50%). Data: courtesy of NorthWrite Energy Group.
29 Anomalies in social networks. Approach: given graph data, decompose the egonet feature matrix using PCP. arXiv collaboration network (General Relativity). [Figure: low-rank component and anomaly scores versus node index i; flagged egonets are 3-regular or 6-regular.] Payoff: unveil anomalous nodes and features. Outlook: change detection, macro analysis to identify outlying graphs.
30 Modeling Internet traffic anomalies. Anomalies: changes in origin-destination (OD) flows [Lakhina et al 04]; failures, congestions, DoS attacks, intrusions, flooding. Graph G(N, L) with N nodes, L links, and F flows (F >> L); OD flow z_{f,t}; packet counts per link l and time slot t; routing variable r_{l,f} in {0,1}. Matrix model across T time slots: Y = R(Z + A) + E. M. Mardani, G. Mateos, and G. B. Giannakis, Recovery of low-rank plus compressed sparse matrices with application to unveiling traffic anomalies, IEEE Transactions on Information Theory, Aug. 2013
31 Low-rank plus sparse matrices. Z (and X := RZ) is low rank, e.g., [Zhang et al 05]; A is sparse across time and flows. [Figure: anomaly amplitudes a_{f,t} over time.] (P1). Data:
32 Anomaly volume: Internet2 data. Real network data, Dec. 8-28, 2003. [Figure: detection vs. false alarm probability (ROC) for the proposed method against [Lakhina04] and [Zhang05] at ranks 1-3; example operating point P_fa = 0.03. Map of true and estimated anomalies across flows and time.] Improved performance by leveraging sparsity and low rank; succinct depiction of the network health state across flows and time. Data:
33 Online estimator. Construct an estimated map of anomalies in real time. Streaming data model; approach: regularized exponentially-weighted LS formulation. [Figure: tracking of cleansed link traffic and real-time unveiling of anomalies on links ATLA-HSTN, CHIN-ATLA, DNVR-KSCY, HSTN-ATLA, WASH-STTL, WASH-WASH; estimated vs. true.] M. Mardani, G. Mateos, and G. B. Giannakis, Dynamic anomalography: Tracking network anomalies via sparsity and low rank, IEEE Journal of Selected Topics in Signal Processing, Feb. 2013
34 Roadmap Context and motivation Critical Big Data tasks Optimization algorithms for Big Data Decentralized (innetwork) operation Parallel processing Streaming analytics Randomized learning Scalable computing platforms for Big Data Conclusions and future research directions 34
35 Decentralized processing paradigms. Goal: learning over networks. Why? Decentralized data, privacy. Architectures: fusion center (FC), incremental, in-network. Limitations of FC-based architectures: lack of robustness (isolated point of failure, non-ideal links); high Tx power and routing overhead (as the geographical area grows); less suitable for real-time applications. Limitations of incremental processing: non-robust to node failures; (re)routing? Hamiltonian routes are NP-hard to establish.
36 In-network decentralized processing. Network anomaly detection from spatially distributed link-count data; agents 1 through N; centralized vs. decentralized. In-network processing model: local processing and single-hop communications. Given local link counts per agent, unveil anomalies in a decentralized fashion. Challenge: the nuclear norm is not separable across rows (links/agents).
37 Separable rank regularization. Neat identity [Srebro 05]: ||X||_* = min_{P,Q: X = PQ'} (1/2)(||P||_F^2 + ||Q||_F^2). Nonconvex, separable formulation (P2) equivalent to (P1). Proposition: if a stationary point of (P2) satisfies a certain condition, then it yields a global optimum of (P1). Key for parallel [Recht-Re 12], decentralized, and online rank minimization [Mardani et al 12].
38 Decentralized algorithm. Alternating direction method of multipliers (ADMM) solver for (P2). Method [Glowinski-Marrocco 75], [Gabay-Mercier 76]; learning over networks [Schizas-Ribeiro-Giannakis 07]. Consensus-based optimization; attains centralized performance; potential for scalable computing. M. Mardani, G. Mateos, and G. B. Giannakis, Decentralized sparsity-regularized rank minimization: Algorithms and applications, IEEE Transactions on Signal Processing, Nov. 2013
39 Alternating direction method of multipliers. Canonical problem: min_{x,z} f(x) + g(z) s.t. Ax + Bz = c; two sets of variables, separable cost, affine constraints. Augmented Lagrangian. ADMM: one Gauss-Seidel pass over the primal variables plus dual ascent. D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods
40 Scaled form and variants. Complete the squares in the augmented Lagrangian and define the scaled dual variable; ADMM (scaled form); proximal-splitting interpretation. Variants: more than two sets of variables; multiple Gauss-Seidel passes, or inexact primal updates; strictly convex cases. S. Boyd et al, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning, vol. 3, 2011
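A minimal scaled-form ADMM sketch on a familiar two-block instance, the Lasso (min 0.5||Ax - b||^2 + lam||z||_1 s.t. x = z): the x-update is a ridge-type LS solve, the z-update is the l1 prox (soft-thresholding), and u is the scaled dual. Problem sizes, the penalty rho, and iteration counts are illustrative.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """Scaled-form ADMM for the Lasso with consensus constraint x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))        # cached x-update factor
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))               # x-minimization (ridge-type LS)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of l1
        u = u + x - z                               # scaled dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 30))
x0 = np.zeros(30)
x0[:5] = [1.0, -1.0, 2.0, -2.0, 1.5]                # sparse ground truth
z_hat = admm_lasso(A, A @ x0, lam=0.01)
```

One Gauss-Seidel pass (x, then z) followed by the dual update per iteration is exactly the structure described on the previous slide.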
41 Convergence. Theorem: if f and g have closed and convex epigraphs, and the Lagrangian has a saddle point, then as the iterations proceed, feasibility, objective convergence, and dual-variable convergence hold. Under additional assumptions: primal-variable convergence; linear convergence. No results for nonconvex objectives, but good empirical performance for, e.g., biconvex problems. M. Hong and Z. Q. Luo, On the linear convergence of the alternating direction method of multipliers, Mathematical Programming, Series A
42 Decentralized consensus optimization. Generic learning problem with in-network local costs per agent over a graph. Neat trick: local copies of the primal variables plus consensus constraints; the problems are equivalent for a connected graph. Amenable to decentralized implementation with ADMM. I. D. Schizas, A. Ribeiro, and G. B. Giannakis, Consensus in ad hoc WSNs with noisy links, Part I: Distributed estimation of deterministic signals, IEEE Transactions on Signal Processing, Jan. 2008
43 ADMM for in-network optimization. ADMM (in-network optimization at each node): the auxiliary variables are eliminated; communication of primal variables within neighborhoods only. Attractive features: fully decentralized, devoid of coordination; robust to non-ideal links, additive noise, and intermittent edges [Zhu et al 10]; provably convergent, attains centralized performance. G. Mateos, J. A. Bazerque, and G. B. Giannakis, Distributed sparse linear regression, IEEE Transactions on Signal Processing, Oct. 2010
44 Example 1: Decentralized SVM. ADMM-based DSVM attains centralized performance and outperforms local SVM. Nonlinear discriminant functions effected via kernels. P. Forero, A. Cano, and G. B. Giannakis, Consensus-based distributed support vector machines, Journal of Machine Learning Research, May 2010
45 Example 2: RF cartography. Idea: collaborate to form a spatial map of the spectrum. Goal: find a function giving the spectrum at each position. Approach: basis expansion plus decentralized, nonparametric basis pursuit. Identify idle bands across space and frequency. J. A. Bazerque, G. Mateos, and G. B. Giannakis, Group-Lasso on splines for spectrum cartography, IEEE Transactions on Signal Processing, Oct. 2011
46 Parallel algorithms for Big Data optimization. Computer clusters offer ample opportunities for parallel processing (PP); recent software platforms promote PP. Objective: smooth loss plus nonsmooth regularizer. Main idea: divide into blocks and conquer, e.g., [Kim-GG 11]; parallelization updates each block while fixing all blocks other than it. S.-J. Kim and G. B. Giannakis, Optimal resource allocation for MIMO ad hoc cognitive radio networks, IEEE Transactions on Information Theory, vol. 57, no. 5, May 2011
47 A challenging parallel optimization paradigm. (As1) Smooth plus nonconvex loss; nonsmooth, convex, and separable regularizer. Nonconvex in general; computationally cumbersome! Approximate the loss locally by a surrogate that is (As2) convex in each block; a quadratic proximal term renders each block subproblem strongly convex!
48 Flexible parallel algorithm (FLEXA). S1: in parallel, find for all blocks a solution of the local convex subproblem with prescribed accuracy. S2: update via a stepsize rule. Theorem: in addition to (As1) and (As2), if the stepsizes and accuracies satisfy suitable diminishing conditions, then every cluster point of the iterate sequence is a stationary solution. F. Facchinei, S. Sagratella, and G. Scutari, Flexible parallel algorithms for big data optimization, Proc. ICASSP, Florence, Italy, 2014. [Online]. Available: arxiv.org/abs/
49 FLEXA on Lasso. [Figure: relative error versus runtime for a large regression matrix and a sparse ground-truth vector.] F. Facchinei, S. Sagratella, and G. Scutari, Flexible parallel algorithms for big data optimization, Proc. ICASSP, Florence, Italy, 2014. [Online]. Available: arxiv.org/abs/
50 Streaming Big Data analytics. Data arrive sequentially, and a timely response is needed (400M tweets/day; 500 TB data/day). Limits of storage and computational capability: process one new datum at a time. Goal: develop simple online algorithms with performance guarantees. K. Slavakis, S. J. Kim, G. Mateos, and G. B. Giannakis, Stochastic approximation vis-à-vis online learning for Big Data, IEEE Signal Processing Magazine, vol. 31, no. 6, Nov. 2014
51 Roadmap of stochastic approximation. Newton-Raphson iteration for a deterministic variable, via the law of large numbers. Pdfs not available! Solve for a random variable: stochastic optimization; adaptive filtering, with LMS and RLS as special cases. Applications: estimation of pdfs, Gaussian mixtures, maximum-likelihood estimation, etc.
52 Numerical analysis basics. Problem 1: find a root of a function, i.e., solve f(x) = 0. Newton-Raphson iteration: select an initial point and run x_{k+1} = x_k - f(x_k)/f'(x_k); popular choice of stepsize. Problem 2: find the min or max of a function with a gradient method. Stochastic counterparts of these deterministic iterations?
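A tiny sketch of the deterministic iteration above (the target function is an illustrative example):

```python
def newton_raphson(f, fprime, x0, iters=20):
    """Newton-Raphson: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - f(x) / fprime(x)
    return x

# Root of f(x) = x^2 - 2, i.e., sqrt(2)
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

The stochastic counterpart on the next slide replaces the exact function evaluations with noisy samples.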
53 Robbins-Monro algorithm. Robbins-Monro (RM) iteration: the expectation is estimated on-the-fly by sample averaging. Find a root of a regression function using online estimates; the function and the data pdfs are unknown! Q: how general can the class of such problems be? A: as general as adaptive algorithms are in SP, communications, control, machine learning, and pattern recognition. H. Robbins and S. Monro, A stochastic approximation method, Annals of Mathematical Statistics, vol. 22, no. 3, 1951
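A minimal RM sketch on an illustrative problem: find the root of E[x - y] (i.e., the mean of y) from noisy samples, with the diminishing stepsize 1/k. With this particular stepsize and function the iterate reduces to the running sample average, which makes the on-the-fly averaging interpretation concrete.

```python
import random

def robbins_monro(sample, x0=0.0, iters=20000):
    """RM iteration: x_{k+1} = x_k - mu_k * g(x_k, y_k), with mu_k = 1/k."""
    x = x0
    for k in range(1, iters + 1):
        y = sample()                    # noisy observation
        g = x - y                       # noisy evaluation; E[g] = x - E[y]
        x = x - g / k                   # diminishing stepsize (RM conditions)
    return x

random.seed(0)
mean_est = robbins_monro(lambda: 3.0 + random.gauss(0.0, 1.0))
```

The 1/k schedule satisfies the usual RM conditions (steps sum to infinity, squared steps are summable), so the iterate does not oscillate around the root.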
54 Convergence analysis. Theorem: under the assumptions below with a diminishing stepsize, RM converges in the m.s.s. (As1) ensures a unique root (two lines with positive slope cross once); (As2) is a finite-variance requirement. Diminishing stepsize (e.g., 1/k): the limit does not oscillate around the root (as in LMS), but convergence should not be too fast. The i.i.d. and (As1) assumptions can be relaxed! H. Kushner and G. G. Yin, Stochastic Approximation and Recursive Algorithms and Applications, 2nd ed., Springer
55 Online learning and SA. Given basis functions, learn a nonlinear function from training data. Solve the normal equations; batch LMMSE solution. The RM iteration yields the least mean-squares (LMS) algorithm.
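A short sketch of LMS as the RM iteration just described, on illustrative noise-free linear data (features, stepsize, and sample count are assumptions):

```python
import numpy as np

def lms(X, d, mu=0.01):
    """LMS: first-order RM iteration w <- w + mu * x_t * (d_t - x_t^T w)."""
    w = np.zeros(X.shape[1])
    for x_t, d_t in zip(X, d):
        w = w + mu * x_t * (d_t - x_t @ w)   # instantaneous-gradient step
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((3000, 2))
w0 = np.array([1.0, -2.0])                   # ground-truth weights
w_hat = lms(X, X @ w0)
```

Each update uses a single datum, replacing the expectation in the normal equations with its instantaneous estimate.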
56 RLS as SA. Scalar stepsize: 1st-order iteration. Matrix stepsize: 2nd-order iteration; the true correlation matrix is usually unavailable! Replacing it with the sample correlation yields RLS.
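The matrix-stepsize view can be sketched with the standard rank-one inverse update of the sample correlation (initialization, forgetting factor, and data are illustrative assumptions):

```python
import numpy as np

def rls(X, d, delta=1e-2, lam=1.0):
    """RLS: 2nd-order SA where P_t tracks the inverse sample correlation."""
    w = np.zeros(X.shape[1])
    P = np.eye(X.shape[1]) / delta               # P_0 = delta^{-1} I
    for x, d_t in zip(X, d):
        k = P @ x / (lam + x @ P @ x)            # gain vector
        w = w + k * (d_t - x @ w)                # correct with a priori error
        P = (P - np.outer(k, x @ P)) / lam       # rank-one inverse update
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
w0 = np.array([0.5, -1.0, 2.0])
w_hat = rls(X, X @ w0)
```

The matrix stepsize whitens the update directions, which is why RLS typically converges in far fewer samples than the scalar-stepsize LMS sketch above.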
57 SA vis-à-vis stochastic optimization. Online convex optimization utilizes projection onto the convex feasible set; RM iteration under strong convexity. Theorem: under (As1)-(As2) with i.i.d. samples and suitable stepsizes, convergence rates follow. Corollary: if the objective is also Lipschitz, rates on the function values follow. A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro, Robust stochastic approximation approach to stochastic programming, SIAM J. Optimization, vol. 19, no. 4, Jan. 2009
58 Online convex optimization. Motivating example: a website advertising company spends its advertising budget across different sites and observes the clicks obtained at the end of each week. Online learning: a repeated game between learner and nature (adversary). For OCO: the learner updates its decision; nature provides a convex loss; the learner suffers that loss.
59 Regret as performance metric. Def. 1: regret relative to a fixed competitor. Def. 2: worst-case regret. Goal: select updates such that the regret grows sublinearly! How to update?
60 Online gradient descent (OGD). Start from an initial point; at each round, choose a stepsize, receive a (sub)gradient, and update. Theorem: for convex losses a regret bound holds; if the losses are Lipschitz continuous and the feasible set is bounded, then with stepsize proportional to 1/sqrt(t) the regret is sublinear. Unlike SA, no stochastic assumptions; but also no primal-variable convergence. S. Shalev-Shwartz, Online learning and online convex optimization, Found. Trends Machine Learn., vol. 4, no. 2
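A minimal OGD sketch with the 1/sqrt(t)-type stepsize and a Euclidean projection; the loss sequence (a fixed quadratic with an unknown minimizer) and the ball radius are illustrative assumptions.

```python
import numpy as np

def ogd(grad, w0, T, R=1.0):
    """OGD: eta_t = R / sqrt(t) steps, projected onto the ball ||w|| <= R."""
    w = np.array(w0, dtype=float)
    iterates = []
    for t in range(1, T + 1):
        iterates.append(w.copy())
        w = w - (R / np.sqrt(t)) * grad(w)          # (sub)gradient step
        nrm = np.linalg.norm(w)
        if nrm > R:
            w *= R / nrm                            # Euclidean projection
    return iterates

c = np.array([0.6, -0.3])                           # unknown loss minimizer
iterates = ogd(lambda w: w - c, w0=[0.0, 0.0], T=50)  # f_t(w) = 0.5||w - c||^2
regret = sum(0.5 * np.sum((w - c) ** 2) for w in iterates)
```

Against a fixed quadratic, the cumulative regret stays bounded by a constant here, comfortably within the sublinear guarantee of the theorem.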
61 Online mirror descent (OMD). Given a link (mirror) function, update via the mirror step. Ex. 1: the squared Euclidean norm recovers online gradient descent! Ex. 2: negative entropy yields exponentiated gradient! S. Shalev-Shwartz, Online learning and online convex optimization, Found. Trends Machine Learn., vol. 4, no. 2
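Ex. 2 can be sketched concretely: with the negative-entropy mirror map on the probability simplex, the OMD update becomes a multiplicative weight update followed by renormalization (the expert-style loss sequence and stepsize are illustrative).

```python
import numpy as np

def exp_gradient(grads, n, eta=0.1):
    """OMD with negative-entropy mirror map: exponentiated gradient on the simplex."""
    w = np.ones(n) / n                              # uniform initial point
    for g in grads:                                 # linear-loss (sub)gradients
        w = w * np.exp(-eta * g)                    # multiplicative (mirror) step
        w /= w.sum()                                # Bregman projection onto simplex
    return w

# Expert 0 consistently suffers the smallest loss
grads = [np.array([0.0, 1.0, 1.0])] * 100
w = exp_gradient(grads, n=3)
```

The weight mass concentrates exponentially fast on the consistently best coordinate, the hallmark of the exponentiated-gradient update.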
62 OMD regret. Def.: a function is called strongly convex w.r.t. a norm if it dominates a quadratic in that norm. Theorem: if the mirror function is strongly convex w.r.t. a norm, the losses are Lipschitz continuous w.r.t. the same norm, and the domain is bounded, then a regret bound holds; with suitable stepsize tuning the bound is sublinear. S. Shalev-Shwartz, Online learning and online convex optimization, Found. Trends Machine Learn., vol. 4, no. 2
63 Gradient-free OCO. Setting: the form of the loss is unknown; only samples can be obtained. Main idea: obtain an unbiased gradient estimate on-the-fly. Def.: given a loss, define a smoothed version by averaging over the unit ball (uniform distribution). Theorem: the gradient of the smoothed loss is expressible via samples on the unit sphere; if the loss is Lipschitz continuous, the smoothing bias is small; if it is differentiable, the smoothed gradients converge to the true ones. A. D. Flaxman, A. T. Kalai, and H. B. McMahan, Online convex optimization in the bandit setting: Gradient descent without a gradient, Proc. ACM-SIAM Symp. Discrete Algorithms, 2005
64 When OCO meets SA. Steps S1-S4: query the loss at a randomly perturbed point, form the one-point gradient estimate, and take a projected step. Theorem: if the losses are convex and Lipschitz continuous over a convex domain, then with suitable parameters the regret is bounded; in particular, O(T^{3/4}) in the agnostic (bandit) setting vs. O(sqrt(T)) in the full-information case. A. D. Flaxman, A. T. Kalai, and H. B. McMahan, Online convex optimization in the bandit setting: Gradient descent without a gradient, Proc. ACM-SIAM Symp. Discrete Algorithms, 2005
65 Roadmap Context and motivation Critical Big Data tasks Optimization algorithms for Big Data Randomized learning Randomized linear regression, leverage scores JohnsonLindenstrauss lemma Randomized classification and clustering Scalable computing platforms for Big Data Conclusions and future research directions 65
66 Randomized linear algebra. Basic tools: random sampling and random projections. Attractive features: reduced dimensionality to lower complexity with Big Data; rigorous error analysis at the reduced dimension. Ordinary least-squares (LS): given the data, the LS-optimal prediction follows from the SVD; but the SVD is computationally costly. Q: what if the number of observations is huge? M. W. Mahoney, Randomized Algorithms for Matrices and Data, Foundations and Trends in Machine Learning, vol. 3, no. 2, Nov. 2011
67 Randomized linear regression. Key idea: sample and rescale (down to a reduced number) the most important observations, via a diagonal rescaling matrix and a random sampling matrix. Random sampling: pick the i-th row (entry) with probability p_i. Q: how to determine the sampling probabilities?
68 Statistical leverage scores. Prediction-error analysis: a small leverage score implies a small contribution of the corresponding observation to the prediction error! Definition: statistical leverage scores (SLS); sample rows (entries) according to them. Theorem: for any accuracy, sampling enough rows guarantees w.h.p. a near-optimal reduced LS solution (the bound involves the condition number). SLS computation is dominated by the SVD. M. W. Mahoney, Randomized Algorithms for Matrices and Data, Foundations and Trends in Machine Learning, vol. 3, no. 2, Nov. 2011
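A sketch of leverage-score sampling for LS: the scores are the squared row norms of the left singular factor, rows are drawn with probability proportional to them, and the kept rows are rescaled by 1/sqrt(r p_i) so the sampled problem stays unbiased (sizes and the sample count r are illustrative assumptions).

```python
import numpy as np

def leverage_scores(A):
    """SLS: l_i = ||U_i||^2, with U from the thin SVD of A."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return (U ** 2).sum(axis=1)

def sampled_ls(A, b, r, rng):
    """Sample r rows w.p. p_i proportional to l_i, rescale, solve reduced LS."""
    p = leverage_scores(A)
    p /= p.sum()                                    # scores sum to rank(A)
    idx = rng.choice(A.shape[0], size=r, replace=True, p=p)
    w = 1.0 / np.sqrt(r * p[idx])                   # rescaling for unbiasedness
    return np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)[0]

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 5))
x0 = np.array([1.0, -1.0, 2.0, 0.5, -0.5])
x_hat = sampled_ls(A, A @ x0, r=50, rng=rng)
```

Here the reduced problem keeps 50 of 2000 rows yet recovers the LS solution of the consistent system.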
69 Johnson-Lindenstrauss lemma. The workhorse for proofs involving random projections. JL lemma: for T points, any accuracy eps, and a reduced dimension of order eps^{-2} log T, there exists a mapping that almost preserves pairwise distances (*). If the map is a random matrix with i.i.d. Gaussian entries and reduced dimension as above, then (*) holds w.h.p. [Indyk-Motwani 98]. If the entries are i.i.d. uniform over {+1, -1} with reduced dimension as in the JL lemma, then (*) holds w.h.p. [Achlioptas 01]. W. B. Johnson and J. Lindenstrauss, Extensions of Lipschitz maps into a Hilbert space, Contemp. Math., vol. 26, 1984
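A quick numerical illustration of the Gaussian construction (point count, ambient dimension, and reduced dimension are illustrative): projecting with an i.i.d. Gaussian matrix scaled by 1/sqrt(d) nearly preserves all pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, d = 50, 2000, 800                             # points, ambient dim, reduced dim
X = rng.standard_normal((T, D))
F = rng.standard_normal((D, d)) / np.sqrt(d)        # i.i.d. Gaussian JL map
Y = X @ F

iu = np.triu_indices(T, k=1)
dist_X = np.linalg.norm(X[:, None] - X[None, :], axis=-1)[iu]
dist_Y = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)[iu]
distortion = np.max(np.abs(dist_Y / dist_X - 1.0))  # worst pairwise distortion
```

The worst-case distortion shrinks as the reduced dimension d grows, matching the eps^{-2} log T scaling in the lemma.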
70 Bypassing cumbersome SLS. Preconditioned LS estimate: a Hadamard matrix times a random diagonal matrix with +/-1 entries; select the reduced dimension; sample/rescale via a uniform pdf. Complexity lower than that incurred by SLS computation. N. Ailon and B. Chazelle, The fast Johnson-Lindenstrauss transform and approximate nearest neighbors, SIAM Journal on Computing, vol. 39, no. 1, 2009
71 Testing randomized linear regression. M. W. Mahoney, Randomized Algorithms for Matrices and Data, Foundations and Trends in Machine Learning, vol. 3, no. 2, Nov. 2011
72 Big Data classification. Support vector machines (SVM): the workhorse for linear discriminant analysis. Data with labels; goal: a separating hyperplane that maximizes the between-class margin. Binary classification; primal problem; margin.
73 Randomized SVM classifier. Dual problem. Random projections approach: given a projection matrix, solve the transformed dual problem. S. Paul, C. Boutsidis, M. Magdon-Ismail, and P. Drineas, Random projections for linear support vector machines, ACM Trans. Knowledge Discovery from Data
74 Performance analysis. Theorem: given the accuracy parameters, there exists a reduced dimension such that the SVM-optimal margins of the original and projected problems are close w.p. Lemma: the Vapnik-Chervonenkis dimension is almost unchanged! Out-of-sample error kept almost unchanged in the reduced-dimension solution. S. Paul, C. Boutsidis, M. Magdon-Ismail, and P. Drineas, Random projections for linear support vector machines, ACM Trans. Knowledge Discovery from Data, to appear
75 Testing randomized SVM. 295 datasets from the Open Directory Project (ODP); each dataset includes 150 to 280 documents with 10,000 to 40,000 words (features). Binary classification. Performance in terms of out-of-sample error, margin, and projection/run time. In-sample error matches for full/reduced dimensions and improves as the reduced dimension increases. S. Paul, C. Boutsidis, M. Magdon-Ismail, and P. Drineas, Random projections for linear support vector machines, ACM Trans. Knowledge Discovery from Data, to appear
76 Big Data clustering. Given the data, assign them to K clusters. Key idea: reduce dimensionality via random projections. Desiderata: preserve the pairwise data distances in lower dimensions. Feature extraction: construct combined features (e.g., via projection) and apply K-means in the reduced space. Feature selection: select a subset of the input features (rows) and apply K-means in the reduced space.
77 Randomized K-means. Clusters with centroids; data-cluster association matrix; K-means criterion. Def.: a gamma-approximation algorithm is any algorithm yielding a clustering whose cost is within a factor gamma of the optimum w.h.p. Cluster the data after applying a projection matrix. C. Boutsidis, A. Zouzias, M. W. Mahoney, and P. Drineas, Randomized dimensionality reduction for K-means clustering, arxiv:
78 Feature extraction via random projections. Theorem: given the number of clusters K and accuracy eps, construct an i.i.d. Bernoulli (random-sign) projection; then any gamma-approximation K-means on the projected data yields a clustering whose cost is provably close to the optimum w.h.p. Related results [Boutsidis et al 14]: projection onto principal components, and approximations by randomized algorithms, with corresponding complexities. C. Boutsidis, P. Drineas, and M. Magdon-Ismail, Near-optimal column-based matrix reconstruction, SIAM Journal of Computing
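A toy illustration of the feature-extraction route: project with a random-sign matrix, then cluster in the reduced space. The simple farthest-point-initialized Lloyd routine, the cluster geometry, and all dimensions are illustrative assumptions; the theorem covers general gamma-approximation algorithms.

```python
import numpy as np

def kmeans2(Z, iters=20):
    """Plain Lloyd iterations for K = 2 with farthest-point initialization."""
    C = np.stack([Z[0], Z[((Z - Z[0]) ** 2).sum(1).argmax()]])
    for _ in range(iters):
        lab = ((Z[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.stack([Z[lab == k].mean(0) for k in (0, 1)])
    return lab

rng = np.random.default_rng(0)
T, D, d = 200, 500, 20
X = np.vstack([rng.standard_normal((T // 2, D)) + 2.0,   # cluster 1
               rng.standard_normal((T // 2, D)) - 2.0])  # cluster 2
R = rng.choice([-1.0, 1.0], size=(D, d)) / np.sqrt(d)    # random-sign projection
labels = kmeans2(X @ R)                                  # cluster in d << D dims
```

Because the projection nearly preserves pairwise distances (JL-style), well-separated clusters in 500 dimensions remain separable after projecting down to 20.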
79 Feature selection via random projections. Given the data, select features via leverage scores computed on a sketched matrix (S1-S3), picking features with probability proportional to the scores, then cluster (S4). Theorem: any gamma-approximation K-means on the selected features yields a clustering whose cost is provably close to the optimum w.p. C. Boutsidis, A. Zouzias, M. W. Mahoney, and P. Drineas, Randomized dimensionality reduction for K-means clustering, arxiv:
80 Random projection matrices. Tradeoff between reduced dimension and projection cost: i.i.d. random Gaussian (RG); random sign matrix (RS); random Hadamard transform (FHT); sparse embedding matrix* [Clarkson-Woodruff 13]. *Exploits sparsity in the data matrix; cost scales with the number of nonzero entries. K. L. Clarkson and D. P. Woodruff, Low rank approximation and regression in input sparsity time, arxiv: v4
81 Testing randomized clustering. T = 1,000, D = 2,000, and K = 5 well-separated clusters. C. Boutsidis, A. Zouzias, M. W. Mahoney, and P. Drineas, Randomized dimensionality reduction for K-means clustering, arxiv:
82 Roadmap Context and motivation Critical Big Data tasks Optimization algorithms for Big Data Randomized learning Scalable computing platforms for Big Data MapReduce platform Leastsquares and kmeans in MapReduce Graph algorithms Conclusions and future research directions 82
83 Parallel programming issues. Challenges: load balancing; communications; scattered data; synchronization (deadlocks, read-write races). General vs. application-specific implementations; efficiency vs. development-overhead tradeoffs. Choosing the right tool is important!
84 MapReduce. General parallel programming (PP) paradigm for large-scale problems: easy parallelization at a high level; synchronization and communication challenges are taken care of. Initially developed by Google. Processing is divided in two stages: S1 (Map), PP for portions of the input data; S2 (Reduce), aggregation of the S1 results (possibly in parallel). Example: map per processor, then reduce.
85 Basic ingredients of MapReduce. All data are in the form <key, value>, e.g., key = machine, value = 5. The MapReduce model consists of [Lin-Dyer 10]: input key-value pairs; mappers, programs applying a Map operation to a subset of the input data, running the same operation in parallel and producing intermediate key-value pairs; reducers, programs applying a Reduce operation (e.g., aggregation) to subsets of intermediate key-value pairs sharing the same key, executing the same operation in parallel to generate output key-value pairs; and the execution framework, transparent to the programmer, which performs tasks such as synchronization and communication. J. Lin and C. Dyer, Data-Intensive Text Processing with MapReduce, Morgan & Claypool Pub., 2010
86 Running MapReduce. Each job has three phases. Phase 1 (Map): receive input key-value pairs and produce intermediate key-value pairs. Phase 2 (Shuffle and sort): intermediate key-value pairs are sorted and sent to the proper reducers; handled by the execution framework; the single point of communication and the slowest phase of MapReduce. Phase 3 (Reduce): receive intermediate key-value pairs and produce the final ones. All mappers must have finished for the reducers to start!
87 The MapReduce model. Map phase: produce intermediate key-value pairs, which are assigned to reducers, e.g., by key mod #reducers. Reduce phase: aggregate the intermediate key-value pairs of the mappers.
88 Least-squares in MapReduce. Given the data, find the LS solution. Map phase: the mappers (here 3) calculate partial sums. Reduce phase: the reducer aggregates the partial sums and computes the LS solution. C. Chu, S. K. Kim, Y. Lin, Y. Yu, G. Bradski, A. Ng, and K. Olukotun, Map-Reduce for machine learning on multicore, NIPS, 2006
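The partial-sum structure above can be sketched in-process, with `map`/`reduce` standing in for the distributed framework: each mapper emits (X_i^T X_i, X_i^T y_i) for its shard, the reducer adds them, and a final solve yields the LS estimate (shard count and data are illustrative).

```python
from functools import reduce
import numpy as np

def mapper(chunk):
    """Map: emit the partial sums (X_i^T X_i, X_i^T y_i) for one data shard."""
    X, y = chunk
    return X.T @ X, X.T @ y

def reducer(a, b):
    """Reduce: aggregate the partial sums."""
    return a[0] + b[0], a[1] + b[1]

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))
w0 = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w0
shards = [(X[i::3], y[i::3]) for i in range(3)]          # three mappers
G, r = reduce(reducer, map(mapper, shards))              # shuffle + reduce
w = np.linalg.solve(G, r)                                # final LS solve
```

Because the normal equations decompose into sums over rows, the shard-wise partial sums lose no information relative to the batch solution.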
89 K-means in MapReduce. Serial k-means: pick K random initial centroids, then iterate between assigning points to the cluster of the closest centroid and updating centroids by averaging the data per cluster. MapReduce k-means: a central unit picks K random initial centroids; each iteration is a MapReduce job (multiple jobs required!). Mappers: each is assigned a subset of the data; finds the closest centroid, the partial sum of the data, and the number of data per cluster; generates key-value pairs with key = cluster, value = (partial sum, #points). Reducers: each is assigned a subset of the clusters; aggregates the partial sums for the corresponding clusters; updates the centroid locations.
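The mapper/reducer roles above can be sketched with in-memory stand-ins, one Lloyd iteration per "job" (the data, initial centroids, and three-way sharding are illustrative assumptions):

```python
import numpy as np

def kmeans_map(chunk, C):
    """Mapper: key = index of the closest centroid; value = (partial sum, count)."""
    out = {}
    for x in chunk:
        k = int(((C - x) ** 2).sum(1).argmin())
        s, n = out.get(k, (np.zeros_like(x), 0))
        out[k] = (s + x, n + 1)
    return out

def kmeans_reduce(intermediate, C):
    """Reducer: aggregate the partial sums per cluster key, update centroids."""
    C = C.copy()
    for k in range(len(C)):
        pairs = [m[k] for m in intermediate if k in m]
        if pairs:
            C[k] = sum(s for s, _ in pairs) / sum(n for _, n in pairs)
    return C

rng = np.random.default_rng(0)
data = np.vstack([rng.standard_normal((60, 2)) + 5.0,
                  rng.standard_normal((60, 2)) - 5.0])
C = np.array([[1.0, 1.0], [-1.0, -1.0]])                 # initial centroids
for _ in range(5):                                       # each pass = one MapReduce job
    C = kmeans_reduce([kmeans_map(ch, C) for ch in np.split(data, 3)], C)
```

Emitting only (partial sum, count) per cluster keeps the shuffle traffic tiny, which is exactly why k-means parallelizes well in this framework despite needing multiple jobs.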
90 Graph algorithms. These require communication between processing nodes across multiple iterations; MapReduce is not ideal! Pregel: based on the Bulk Synchronous Parallel (BSP) model; developed by Google for use on large graph problems. Each vertex is characterized by a user-defined value; each edge by its source vertex, destination vertex, and a user-defined value. Computation is performed in supersteps: a user-defined function is executed on each vertex (in parallel); messages from the previous superstep are received; messages for the next superstep are sent. Supersteps are separated by global synchronization points (sync barriers). More efficient than MapReduce! G. Malewicz, M. H. Austern, A. J. C. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski, Pregel: A system for large-scale graph processing, SIGMOD, Indianapolis, 2010
91 Wrap-up for scalable computing. MapReduce: simple parallel processing; code is easy to understand; great for batch problems; applicable to many problems. However, the framework can be restrictive: it is not designed for online algorithms and is inefficient for graph processing. Pregel: tailored for graph algorithms; more efficient than MapReduce. Available open-source implementations: Hadoop (MapReduce), Hama (Pregel), Giraph (Pregel). Machine learning algorithms can be found in Mahout!
92 Roadmap. Context and motivation; critical Big Data tasks; optimization algorithms for Big Data; randomized learning; scalable computing platforms for Big Data; conclusions and future research directions.
93 Tutorial summary. Big Data modeling and tasks: dimensionality reduction; succinct representations; vectors, matrices, and tensors. Optimization algorithms: decentralized, parallel, streaming; data sketching. Implementation platforms: scalable computing platforms; analytics in the cloud.
94 Timeliness. Special sessions in recent IEEE SP Society / EURASIP meetings: Signal and Information Processing for Big Data; Challenges in High-Dimensional Learning; Information Processing over Networks; Trends in Sparse Signal Processing; Sparse Signal Techniques for Web Information Processing; SP for Big Data; Optimization Algorithms for High-Dimensional SP; Tensor-based SP; Information Processing for Big Data.
95 Importance to funding agencies: NSF, NIH, DARPA, DoD, DoE, and US Geological Survey. Sample programs: NSF (BIGDATA); DARPA ADAMS (Anomaly Detection on Multiple Scales); DoD: 20+ solicitations on data to decisions, autonomy, and human-machine systems. Source:
96 NSF-ECCS sponsored workshop. NSF Workshop on Big Data: From Signal Processing to Systems Engineering, March 21-22, 2013, Arlington, Virginia, USA. Workshop program, slides, and final report available. Sponsored by Electrical, Communications and Cyber Systems (ECCS).
97 Open JSTSP special issue. Special issue on Signal Processing for Big Data, IEEE Journal of Selected Topics in Signal Processing. Guest editors: Georgios B. Giannakis, University of Minnesota, USA; Raphael Cendrillon, Google, USA; Volkan Cevher, EPFL, Switzerland; Ananthram Swami, Army Research Laboratory, USA; Zhi Tian, Michigan Tech Univ. and NSF, USA. Full manuscripts due by July 1, 2014. See also the Sept. 2014 issue of the IEEE Signal Processing Magazine (also on SP for Big Data).