Factoring Tensors in the Cloud: A Tutorial on Big Tensor Data Analytics
Nikos Sidiropoulos (UMN) and Evangelos Papalexakis (CMU)
ICASSP Tutorial, 5/5/2014
Web, papers, software, credits
Nikos Sidiropoulos, UMN - Signal and Tensor Analytics Research (STAR) group, https://sites.google.com/a/umn.edu/nikosgroup/home
- Signal processing, communications, optimization
- Tensor decomposition, factor analysis
- Big data
- Spectrum sensing, cognitive radio
- Sidekicks: preference measurement, conference review & session assignment
Vangelis Papalexakis, Christos Faloutsos, CMU - http://www.cs.cmu.edu/~epapalex/ and http://www.cs.cmu.edu/~christos/
- Data mining
- Graph mining
- Tensor analysis, coupled matrix-tensor analysis
- Big data, Hadoop
Christos Faloutsos, Tom Mitchell (CMU), Nikos Sidiropoulos, George Karypis (UMN), NSF-NIH/BIGDATA: Big Tensor Mining: Theory, Scalable Algorithms and Applications, NSF IIS-1247489/1247632.
Matrices, rank decomposition
A matrix (or two-way array) is a dataset X indexed by two indices, with (i, j)-th entry X(i, j).
A simple matrix has S(i, j) = a(i)b(j) for all i, j: it is separable, with every row (column) proportional to every other row (column). Can write it as $S = ab^T$.
rank(X) := the smallest number of simple (separable, rank-one) matrices needed to generate X - a measure of complexity:
$$X(i,j) = \sum_{f=1}^{F} a_f(i)\, b_f(j), \quad \text{or} \quad X = \sum_{f=1}^{F} a_f b_f^T = AB^T.$$
It turns out that rank(X) = the maximum number of linearly independent rows (or columns) of X.
Rank decomposition for matrices is not unique (except for matrices of rank 1): for any invertible M,
$$X = AB^T = (AM)\left(M^{-1}B^T\right) = (AM)\left(BM^{-T}\right)^T = \tilde{A}\tilde{B}^T.$$
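As a concrete illustration (a minimal numpy sketch of my own, not from the slides): a rank-F matrix factors as $AB^T$, and inserting any invertible M produces a different factor pair with exactly the same product.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, F = 5, 4, 2
A = rng.standard_normal((I, F))
B = rng.standard_normal((J, F))
X = A @ B.T                          # rank-F matrix

M = rng.standard_normal((F, F))      # invertible with probability 1
A_tilde = A @ M
B_tilde = B @ np.linalg.inv(M).T
X_alt = A_tilde @ B_tilde.T          # same matrix, different factors
```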
Tensor? What is this?
CS slang for a three-way array: a dataset X indexed by three indices, with (i, j, k)-th entry X(i, j, k). In plain words: a shoebox! Warning: different meaning in Physics!
For two vectors a ($I \times 1$) and b ($J \times 1$), $a \circ b$ is an $I \times J$ rank-one matrix with (i, j)-th element a(i)b(j); i.e., $a \circ b = ab^T$.
For three vectors a ($I \times 1$), b ($J \times 1$), c ($K \times 1$), $a \circ b \circ c$ is an $I \times J \times K$ rank-one three-way array with (i, j, k)-th element a(i)b(j)c(k).
The rank of a three-way array X is the smallest number of outer products needed to synthesize X.
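The two- and three-way outer products above can be spelled out in a few lines of numpy (an illustrative sketch, not part of the tutorial):

```python
import numpy as np

a = np.array([1.0, 2.0])          # I = 2
b = np.array([1.0, 0.0, -1.0])    # J = 3
c = np.array([2.0, 3.0])          # K = 2

# Two-vector outer product: a ∘ b = a b^T, (i,j)-th element a(i)b(j)
M = np.outer(a, b)

# Three-vector outer product: rank-one shoebox with element a(i)b(j)c(k)
T = np.einsum('i,j,k->ijk', a, b, c)
```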
Motivating example: NELL @ CMU / Tom Mitchell
Crawl the web, learn language like children do: encounter new concepts, learn from context.
NELL triplets of subject-verb-object naturally lead to a 3-mode (subject, object, verb) tensor:
$$X \approx a_1 \circ b_1 \circ c_1 + a_2 \circ b_2 \circ c_2 + \cdots + a_F \circ b_F \circ c_F$$
Each rank-one factor corresponds to a concept, e.g., "leaders" or "tools".
E.g., say $(a_1, b_1, c_1)$ corresponds to "leaders": subjects/rows with high score on $a_1$ will be Obama, Merkel, Steve Jobs; objects/columns with high score on $b_1$ will be USA, Germany, Apple Inc.; and fibers with high score on $c_1$ will be verbs like lead, is-president-of, and is-ceo-of.
Kronecker and Khatri-Rao products
$\otimes$ stands for the Kronecker product:
$$A \otimes B = \begin{bmatrix} A(1,1)B & A(1,2)B & \cdots \\ A(2,1)B & A(2,2)B & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}$$
$\odot$ stands for the Khatri-Rao (column-wise Kronecker) product: given A ($I \times F$) and B ($J \times F$), $A \odot B$ is the $JI \times F$ matrix
$$A \odot B = \left[\, A(:,1) \otimes B(:,1) \;\; \cdots \;\; A(:,F) \otimes B(:,F) \,\right]$$
$\mathrm{vec}(ABC) = (C^T \otimes A)\,\mathrm{vec}(B)$
If $D = \mathrm{diag}(d)$, then $\mathrm{vec}(ADC) = (C^T \odot A)\,d$
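Both vec identities are easy to check numerically. A minimal numpy sketch (my own illustration; note that vec here is column-major, so numpy's row-major `ravel` needs `order='F'`):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x F) and B (J x F) -> (I*J) x F."""
    I, F = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, F)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 5))

# vec(ABC) = (C^T kron A) vec(B)
lhs = (A @ B @ C).ravel(order='F')
rhs = np.kron(C.T, A) @ B.ravel(order='F')

# vec(A D C) = (C^T khatri-rao A) d, for D = diag(d)
d = rng.standard_normal(3)
lhs2 = (A @ np.diag(d) @ C).ravel(order='F')
rhs2 = khatri_rao(C.T, A) @ d
```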
Rank decomposition for tensors
Tensor: $X = \sum_{f=1}^{F} a_f \circ b_f \circ c_f$
Scalar: $X(i,j,k) = \sum_{f=1}^{F} a_{i,f}\, b_{j,f}\, c_{k,f}$, for $i \in \{1,\dots,I\}$, $j \in \{1,\dots,J\}$, $k \in \{1,\dots,K\}$
Slabs: $X_k = A\, D_k(C)\, B^T$, $k = 1,\dots,K$
Matrix: $X^{(KJ \times I)} = (B \odot C)\, A^T$
Tall vector: $x^{(KJI)} := \mathrm{vec}\left(X^{(KJ \times I)}\right) = \left(A \odot (B \odot C)\right) 1_{F \times 1} = (A \odot B \odot C)\, 1_{F \times 1}$
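These equivalent views (scalar, slab, unfolded matrix) can be verified against each other numerically. A small numpy sketch (illustrative, not from the slides; the unfolding's row ordering — here row $jK + k$, column $i$ — is one of several equivalent conventions):

```python
import numpy as np

def khatri_rao(A, B):
    I, F = A.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * B.shape[0], F)

rng = np.random.default_rng(0)
I, J, K, F = 4, 5, 6, 3
A = rng.standard_normal((I, F))
B = rng.standard_normal((J, F))
C = rng.standard_normal((K, F))

# Build the rank-F tensor from its scalar definition
X = np.einsum('if,jf,kf->ijk', A, B, C)

# Slab view: the k-th frontal slab is A D_k(C) B^T
X0 = A @ np.diag(C[0]) @ B.T

# Unfolded-matrix view: with entry (j*K + k, i) holding X(i,j,k),
# the unfolding equals khatri_rao(B, C) @ A.T
unf = X.transpose(1, 2, 0).reshape(J * K, I)
```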
Do I need to worry about higher-way (or higher-order) tensors?
Semantic analysis of brain fMRI data: fMRI measurements vs. semantic category scores.
The fMRI mode is vectorized ($O(10^5\text{-}10^6)$ voxels).
Spatial coherence - better to treat as three separate spatial modes: 5-way array.
Temporal dynamics - include time as another dimension: 6-way array.
Rank decomposition for higher-order tensors
Tensor: $X = \sum_{f=1}^{F} a_f^{(1)} \circ \cdots \circ a_f^{(N)}$
Scalar: $X(i_1,\dots,i_N) = \sum_{f=1}^{F} \prod_{n=1}^{N} a^{(n)}_{i_n,f}$
Matrix: $X^{(I_1 I_2 \cdots I_{N-1} \times I_N)} = \left(A^{(N-1)} \odot A^{(N-2)} \odot \cdots \odot A^{(1)}\right) \left(A^{(N)}\right)^T$
Tall vector: $x^{(I_1 \cdots I_N)} := \mathrm{vec}\left(X^{(I_1 I_2 \cdots I_{N-1} \times I_N)}\right) = \left(A^{(N)} \odot A^{(N-1)} \odot A^{(N-2)} \odot \cdots \odot A^{(1)}\right) 1_{F \times 1}$
Tensor Singular Value Decomposition?
For matrices, the SVD is instrumental: rank-revealing, Eckart-Young. So is there a tensor equivalent to the matrix SVD? Yes, ... and no! In fact, there is no single tensor SVD. Two basic decompositions:
- CANonical DECOMPosition (CANDECOMP), also known as PARAllel FACtor (PARAFAC) analysis, or CANDECOMP-PARAFAC (CP) for short: non-orthogonal, unique under certain conditions.
- Tucker3: orthogonal without loss of generality, non-unique except for very special cases.
Both are outer-product decompositions, but with very different structural properties. Rule of thumb: use Tucker3 for subspace estimation and tensor approximation, e.g., compression applications; use PARAFAC for latent parameter estimation - recovering the hidden rank-one factors.
Tucker3
$I \times J \times K$ three-way array X
A: $I \times L$, B: $J \times M$, C: $K \times N$ mode loading matrices
G: $L \times M \times N$ Tucker3 core
Tucker3, continued
Consider an $I \times J \times K$ three-way array X comprising K matrix slabs $\{X_k\}_{k=1}^{K}$, arranged into the matrix $X := [\mathrm{vec}(X_1), \dots, \mathrm{vec}(X_K)]$. The Tucker3 model can be written as
$$X \approx (B \otimes A)\, G\, C^T,$$
where G is the Tucker3 core tensor recast in matrix form. The non-zero elements of the core tensor determine the interactions between columns of A, B, C. The associated model-fitting problem is
$$\min_{A,B,C,G} \left\| X - (B \otimes A)\, G\, C^T \right\|_F^2,$$
which is usually solved using an alternating least squares procedure. Equivalently,
$$\mathrm{vec}(X) \approx (C \otimes B \otimes A)\, \mathrm{vec}(G).$$
Highly non-unique - e.g., rotate C, counter-rotate G using a unitary matrix. Subspaces can be recovered; Tucker3 is good for tensor approximation, not latent parameter estimation.
PARAFAC
Figure: from Matlab Central / N-way toolbox (Rasmus Bro), http://www.mathworks.com/matlabcentral/fileexchange/1088-the-n-way-toolbox
Low-rank tensor decomposition / approximation:
$$X \approx \sum_{f=1}^{F} a_f \circ b_f \circ c_f$$
PARAFAC [Harshman 70-72], CANDECOMP [Carroll & Chang, 70], now CP; also cf. [Hitchcock, 27].
$X_k \approx A\, D_k(C)\, B^T$, where $D_k(C)$ is a diagonal matrix holding the k-th row of C on its diagonal. Combining slabs and using the Khatri-Rao product,
$$X \approx (B \odot A)\, C^T, \qquad \mathrm{vec}(X) \approx (C \odot B \odot A)\, 1_{F \times 1}$$
Uniqueness
Under certain conditions, PARAFAC is essentially unique, i.e., (A, B, C) can be identified from X up to permutation and scaling of columns - there is no rotational freedom; cf. [Kruskal 77, Sidiropoulos et al 00-07, de Lathauwer 04-, Stegeman 06-, Chiantini & Ottaviani 11-, ...].
Consider an $I \times J \times K$ tensor X of rank F, vectorized as the $IJK \times 1$ vector $x = (A \odot B \odot C)\, 1_{F \times 1}$, for some A ($I \times F$), B ($J \times F$), and C ($K \times F$) - a PARAFAC model of size $I \times J \times K$ and order F, parameterized by (A, B, C).
The Kruskal-rank of A, denoted $k_A$, is the maximum k such that any k columns of A are linearly independent ($k_A \le r_A := \mathrm{rank}(A)$).
Given X ($\leftrightarrow$ x), if $k_A + k_B + k_C \ge 2F + 2$, then (A, B, C) are unique up to a common column permutation and scaling/counter-scaling (e.g., multiply the first column of A by 5, divide the first column of B by 5: the outer product stays the same) - cf. [Kruskal, 1977].
N-way case: $\sum_{n=1}^{N} k_{A^{(n)}} \ge 2F + (N-1)$ [Sidiropoulos & Bro, 2000].
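The Kruskal-rank definition translates directly into a brute-force check over column subsets — exponential in F, so only useful for small matrices, but handy for building intuition (an illustrative sketch of my own, not from the slides):

```python
import numpy as np
from itertools import combinations

def kruskal_rank(A, tol=1e-9):
    """Largest k such that EVERY set of k columns of A is linearly
    independent. Brute force over column subsets; fine for small F."""
    F = A.shape[1]
    for k in range(1, F + 1):
        for cols in combinations(range(F), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k - 1
    return F

# A random Gaussian matrix has k_A = min(I, F) with probability 1
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
kA = kruskal_rank(A)

# A repeated column drops the Kruskal rank to 1 (while rank(B) is still 2)
B = A.copy()
B[:, 2] = B[:, 0]
kB = kruskal_rank(B)
```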
Special case: Two slabs
Assume square/tall, full column rank A ($I \times F$) and B ($J \times F$), and distinct nonzero elements along the diagonal of D:
$$X_1 = AB^T, \qquad X_2 = ADB^T$$
Introduce the compact F-component SVD of the stacked slabs:
$$\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} A \\ AD \end{bmatrix} B^T = U \Sigma V^H$$
$\mathrm{rank}(B) = F$ implies that $\mathrm{span}(U) = \mathrm{span}\left(\begin{bmatrix} A \\ AD \end{bmatrix}\right)$; hence there exists a nonsingular matrix P such that
$$U = \begin{bmatrix} U_1 \\ U_2 \end{bmatrix} = \begin{bmatrix} A \\ AD \end{bmatrix} P$$
Special case: Two slabs, continued
Next, construct the auto- and cross-product matrices
$$R_1 = U_1^H U_1 = P^H A^H A P =: QP, \qquad R_2 = U_1^H U_2 = P^H A^H A D P = QDP.$$
All matrices on the right-hand side are square and full-rank. Solving the first equation for Q and substituting into the second yields the eigenvalue problem
$$\left(R_1^{-1} R_2\right) P^{-1} = P^{-1} D$$
$P^{-1}$ (and P) recovered up to permutation and scaling.
From the bottom of the previous page, A is likewise recovered, then B.
Starts making sense...
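The whole two-slab recovery fits in a few lines of numpy. A minimal sketch (my own illustration of the derivation above, assuming noiseless data): build the slabs, take the compact SVD of the stack, and read $P^{-1}$ off the eigenvectors of $R_1^{-1}R_2$; the eigenvalues recover the diagonal of D, and $A = U_1 P^{-1}$ up to permutation/scaling.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, F = 6, 5, 3
A = rng.standard_normal((I, F))
B = rng.standard_normal((J, F))
d = np.array([1.0, 2.0, 3.0])            # distinct diagonal of D
X1 = A @ B.T
X2 = A @ np.diag(d) @ B.T

# Compact F-component SVD of the stacked slabs
U, s, Vh = np.linalg.svd(np.vstack([X1, X2]), full_matrices=False)
U = U[:, :F]
U1, U2 = U[:I], U[I:]

# Auto- and cross-products; eigenvectors of R1^{-1} R2 give P^{-1}
R1 = U1.conj().T @ U1
R2 = U1.conj().T @ U2
evals, Pinv = np.linalg.eig(np.linalg.solve(R1, R2))

A_hat = U1 @ Pinv      # recovers A up to column permutation and scaling
```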
Alternating Least Squares (ALS)
Based on the matrix view $X^{(KJ \times I)} = (B \odot C)\, A^T$, the multilinear LS problem is
$$\min_{A,B,C} \left\| X^{(KJ \times I)} - (B \odot C)\, A^T \right\|_F^2$$
NP-hard - even for a single component, i.e., vectors a, b, c. See [Hillar and Lim, "Most tensor problems are NP-hard," 2013].
But... given interim estimates of B, C, can easily solve for the conditional LS update of A:
$$A_{\mathrm{CLS}} = \left( (B \odot C)^{\dagger}\, X^{(KJ \times I)} \right)^T$$
Similarly for the CLS updates of B, C (symmetry); alternate until the cost function converges (monotonically).
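The three conditional updates make for a very short algorithm. A bare-bones numpy sketch of the ALS loop (my own illustration, not the tutorial's code; no initialization strategy, stopping rule, or constraints — just the plain alternation):

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: (I*J) x F
    I, F = A.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * B.shape[0], F)

def cp_als(X, F, n_iter=300, seed=0):
    """Plain ALS for a 3-way tensor X (I x J x K), rank F."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, F))
    B = rng.standard_normal((J, F))
    C = rng.standard_normal((K, F))
    X0 = X.reshape(I, J * K)                      # mode-1 unfolding
    X1 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
    X2 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        # Conditional LS update per mode: solve X_unf = A_mode * KR^T
        A = X0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Demo: recover a random rank-2 tensor
rng = np.random.default_rng(1)
A0 = rng.standard_normal((6, 2))
B0 = rng.standard_normal((5, 2))
C0 = rng.standard_normal((4, 2))
X = np.einsum('if,jf,kf->ijk', A0, B0, C0)
A, B, C = cp_als(X, F=2)
rel_err = np.linalg.norm(np.einsum('if,jf,kf->ijk', A, B, C) - X) / np.linalg.norm(X)
```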
Other algorithms?
Many! First-order (gradient-based) and second-order (Hessian-based): Gauss-Newton, line search, Levenberg-Marquardt, weighted least squares, majorization.
Algebraic initialization (matters). See Tomasi and Bro, 2006, for a good overview.
Second-order methods have an advantage when close to the optimum, but can (and do) diverge.
First-order methods are often prone to local minima and slow to converge.
Stochastic gradient descent (CS community) - simple, parallel, but very slow.
Difficult to incorporate additional constraints like sparsity, non-negativity, unimodality, etc.
ALS
No parameters to tune! Easy to program, uses standard linear LS.
Monotone convergence of the cost function.
Does not require any conditions beyond model identifiability.
Easy to incorporate additional constraints, due to multilinearity - e.g., replace linear LS with linear NNLS for non-negativity.
Even non-convex (e.g., FA) constraints can be handled with column-wise updates (optimal scaling lemma).
Cons: sequential algorithm, convergence can be slow.
Still the workhorse after all these years.
Outliers, sparse residuals
Instead of LS, use least absolute error:
$$\min \left\| X^{(KJ \times I)} - (B \odot C)\, A^T \right\|_1$$
Conditional update: LP.
Almost as good: coordinate-wise updates using weighted median filtering (very cheap!) [Vorobyov, Rong, Sidiropoulos, Gershman, 2005].
PARAFAC CRLB: [Liu & Sidiropoulos, 2001] (Gaussian); [Vorobyov, Rong, Sidiropoulos, Gershman, 2005] (Laplacian, etc. - differ only in a pdf-dependent scale factor).
Alternating optimization algorithms approach the CRLB when the problem is well-determined (meaning: not barely identifiable).
Big (tensor) data
Tensors can easily become really big! - size exponential in the number of dimensions ("ways", or "modes").
Datasets with millions of items per mode - examples in the second part of this tutorial.
Cannot load in main memory; may reside in cloud storage.
Sometimes very sparse - can store and process as an (i, j, k, value) list, nonzero column indices for each row, run-length coding, etc.
(Sparse) Tensor Toolbox for Matlab [Kolda et al]: avoids explicitly computing dense intermediate results.
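To make the "avoid dense intermediates" point concrete: the bottleneck of each ALS update is a matricized-tensor-times-Khatri-Rao product (MTTKRP), and for a sparse tensor stored as an (i, j, k, value) list it can be accumulated nonzero by nonzero, never forming the dense $KJ \times F$ Khatri-Rao matrix. A numpy sketch of my own (illustrative, not the Tensor Toolbox's implementation):

```python
import numpy as np

def mttkrp_coo(subs, vals, B, C, I):
    """First-mode MTTKRP, out[i, f] = sum over stored nonzeros of
    x(i,j,k) * B[j, f] * C[k, f], computed straight from an
    (i, j, k, value) list - no dense KJ x F intermediate is formed."""
    F = B.shape[1]
    out = np.zeros((I, F))
    i, j, k = subs.T
    np.add.at(out, i, vals[:, None] * B[j] * C[k])   # scatter-add per nonzero
    return out

# Toy sparse tensor: a handful of nonzeros in a 4 x 5 x 6 shoebox
I, J, K, F = 4, 5, 6, 2
subs = np.array([[0, 1, 2], [3, 4, 5], [1, 0, 0], [2, 2, 3]])
vals = np.array([1.0, -2.0, 0.5, 3.0])
rng = np.random.default_rng(0)
B = rng.standard_normal((J, F))
C = rng.standard_normal((K, F))
out = mttkrp_coo(subs, vals, B, C, I)
```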
Tensor partitioning?
Parallel algorithms for matrix algebra use data partitioning. Can we reuse some of these ideas? Low-hanging fruit?
First considered in [Phan, Cichocki, Neurocomputing, 2011]; later revisited in [Almeida, Kibangou, CAMSAP 2013, ICASSP 2014].
Tensor partitioning
In [Phan & Cichocki, Neurocomputing, 2011] each sub-block is independently decomposed, then the low-rank factors of the big tensor are stitched together:
- The decomposition of each sub-block must be identifiable - far more stringent identifiability conditions;
- Each sub-block has a different permutation and scaling of the factors - all of these must be reconciled before stitching (not discussed in the paper);
- Noise affects sub-block decomposition more than full-tensor decomposition.
More recently, [Almeida & Kibangou, CAMSAP 2013, ICASSP 2014]:
- Also rely on partitioning, so sub-block identifiability conditions are required;
- But more robust, based on collaborative consensus-averaging, and the matching of different permutations and scales is explicitly taken into account;
- Requires considerable communication overhead across clusters and nodes, i.e., the parallel threads are not independent.
Tensor compression
Commonly used compression method for moderate-size tensors: fit an orthogonal Tucker3 model, regress the data onto the fitted mode bases. Implemented in COMFAC, http://www.ece.umn.edu/~nikos/comfac.m
Lossless if exact mode bases are used [CANDELINC]; but Tucker3 fitting is itself cumbersome for big tensors (big matrix SVDs), and one cannot compress below the mode ranks without introducing errors.
Tensor compression
Consider compressing $x = \mathrm{vec}(X)$ into $y = Sx$, where S is $d \times IJK$, $d \ll IJK$. In particular, consider the specially structured compression matrix
$$S = U^T \otimes V^T \otimes W^T$$
This corresponds to multiplying (every slab of) X from the I-mode with $U^T$, from the J-mode with $V^T$, and from the K-mode with $W^T$, where U is $I \times L$, V is $J \times M$, and W is $K \times N$, with $L \le I$, $M \le J$, $N \le K$, and $LMN \ll IJK$.
Figure: the $I \times J \times K$ tensor X is multi-way compressed into the much smaller $L \times M \times N$ tensor Y.
Key
Due to a property of the Kronecker product,
$$\left(U^T \otimes V^T \otimes W^T\right) (A \odot B \odot C) = (U^T A) \odot (V^T B) \odot (W^T C),$$
from which it follows that
$$y = \left( (U^T A) \odot (V^T B) \odot (W^T C) \right) 1 = \left( \tilde{A} \odot \tilde{B} \odot \tilde{C} \right) 1,$$
i.e., the compressed data follow a PARAFAC model of size $L \times M \times N$ and order F, parameterized by $(\tilde{A}, \tilde{B}, \tilde{C})$, with $\tilde{A} := U^T A$, $\tilde{B} := V^T B$, $\tilde{C} := W^T C$.
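This mixed Kronecker/Khatri-Rao product property is easy to confirm numerically on small random matrices (a sanity-check sketch of my own, not from the slides):

```python
import numpy as np

def khatri_rao(A, B):
    I, F = A.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * B.shape[0], F)

rng = np.random.default_rng(1)
I, J, K, F = 4, 5, 6, 3
L, M, N = 2, 3, 3
A, B, C = (rng.standard_normal(s) for s in [(I, F), (J, F), (K, F)])
U, V, W = (rng.standard_normal(s) for s in [(I, L), (J, M), (K, N)])

# (LMN) x (IJK) structured compression matrix
S = np.kron(np.kron(U.T, V.T), W.T)

# (U^T kron V^T kron W^T)(A ⊙ B ⊙ C) = (U^T A) ⊙ (V^T B) ⊙ (W^T C)
lhs = S @ khatri_rao(khatri_rao(A, B), C)
rhs = khatri_rao(khatri_rao(U.T @ A, V.T @ B), W.T @ C)
```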
Random multi-way compression can be better!
Sidiropoulos & Kyrillidis, IEEE SPL Oct. 2012
Assume that the columns of A, B, C are sparse, and let $n_a$ ($n_b$, $n_c$) be an upper bound on the number of nonzero elements per column of A (respectively B, C). Let the mode-compression matrices U ($I \times L$, $L \le I$), V ($J \times M$, $M \le J$), and W ($K \times N$, $N \le K$) be randomly drawn from an absolutely continuous distribution with respect to the Lebesgue measure in $\mathbb{R}^{IL}$, $\mathbb{R}^{JM}$, and $\mathbb{R}^{KN}$, respectively. If
$$\min(L, k_A) + \min(M, k_B) + \min(N, k_C) \ge 2F + 2,$$
and $L \ge 2n_a$, $M \ge 2n_b$, $N \ge 2n_c$, then the original factor loadings A, B, C are almost surely identifiable from the compressed data.
Proof rests on two lemmas + Kruskal
Lemma 1: Consider $\tilde{A} := U^T A$, where A is $I \times F$, and let the $I \times L$ matrix U be randomly drawn from an absolutely continuous distribution with respect to the Lebesgue measure in $\mathbb{R}^{IL}$ (e.g., multivariate Gaussian with a non-singular covariance matrix). Then $k_{\tilde{A}} = \min(L, k_A)$ almost surely (with probability 1) [Sidiropoulos & Kyrillidis, 2012].
Lemma 2: Consider $\tilde{A} := U^T A$, where $\tilde{A}$ and U are given and A is sought. Suppose that every column of A has at most $n_a$ nonzero elements, and that $k_{U^T} \ge 2n_a$. (The latter holds with probability 1 if the $I \times L$ matrix U is randomly drawn from an absolutely continuous distribution with respect to the Lebesgue measure in $\mathbb{R}^{IL}$, and $\min(I, L) \ge 2n_a$.) Then A is the unique solution with at most $n_a$ nonzero elements per column [Donoho & Elad, 03].
Complexity
First fitting PARAFAC in the compressed space and then recovering the sparse A, B, C from the fitted compressed factors entails complexity $O(LMNF + (I^{3.5} + J^{3.5} + K^{3.5})F)$ in practice. Using sparsity first and then fitting PARAFAC in the raw space entails complexity $O(IJKF + (IJK)^{3.5})$ - the difference is huge.
Also note that the proposed approach does not require computations in the uncompressed data domain, which is important for big data that do not fit in memory for processing.
Further compression - down to $O(\sqrt{F})$ in 2/3 modes
Sidiropoulos & Kyrillidis, IEEE SPL Oct. 2012
Assume that the columns of A, B, C are sparse, and let $n_a$ ($n_b$, $n_c$) be an upper bound on the number of nonzero elements per column of A (respectively B, C). Let the mode-compression matrices U ($I \times L$, $L \le I$), V ($J \times M$, $M \le J$), and W ($K \times N$, $N \le K$) be randomly drawn from an absolutely continuous distribution with respect to the Lebesgue measure in $\mathbb{R}^{IL}$, $\mathbb{R}^{JM}$, and $\mathbb{R}^{KN}$, respectively. If $r_A = r_B = r_C = F$,
$$L(L-1)M(M-1) \ge 2F(F-1), \qquad N \ge F,$$
and $L \ge 2n_a$, $M \ge 2n_b$, $N \ge 2n_c$, then the original factor loadings A, B, C are almost surely identifiable from the compressed data up to a common column permutation and scaling.
Proof: Lemma 3 + results on a.s. ID of PARAFAC
Lemma 3: Consider $\tilde{A} = U^T A$, where A ($I \times F$) is deterministic, tall/square ($I \ge F$) and full column rank $r_A = F$, and the elements of U ($I \times L$) are i.i.d. Gaussian zero-mean, unit-variance random variables. Then the distribution of $\tilde{A}$ is nonsingular multivariate Gaussian.
From [Stegeman, ten Berge, de Lathauwer 2006] (see also [Jiang, Sidiropoulos 2004]), we know that PARAFAC is almost surely identifiable if the loading matrices $\tilde{A}$, $\tilde{B}$ are randomly drawn from an absolutely continuous distribution with respect to the Lebesgue measure in $\mathbb{R}^{(L+M)F}$, $\tilde{C}$ is full column rank, and $L(L-1)M(M-1) \ge 2F(F-1)$.
Generalization to higher-way arrays
Theorem 3: Let $x = (A_1 \odot \cdots \odot A_\delta)\, 1 \in \mathbb{R}^{\prod_{d=1}^{\delta} I_d}$, where $A_d$ is $I_d \times F$, and consider compressing it to
$$y = \left(U_1^T \otimes \cdots \otimes U_\delta^T\right) x = \left( (U_1^T A_1) \odot \cdots \odot (U_\delta^T A_\delta) \right) 1 = \left(\tilde{A}_1 \odot \cdots \odot \tilde{A}_\delta\right) 1 \in \mathbb{R}^{\prod_{d=1}^{\delta} L_d},$$
where the mode-compression matrices $U_d$ ($I_d \times L_d$, $L_d \le I_d$) are randomly drawn from an absolutely continuous distribution with respect to the Lebesgue measure in $\mathbb{R}^{I_d L_d}$. Assume that the columns of $A_d$ are sparse, and let $n_d$ be an upper bound on the number of nonzero elements per column of $A_d$, for each $d \in \{1,\dots,\delta\}$. If
$$\sum_{d=1}^{\delta} \min(L_d, k_{A_d}) \ge 2F + \delta - 1, \quad \text{and} \quad L_d \ge 2n_d, \;\; \forall d \in \{1,\dots,\delta\},$$
then the original factor loadings $\{A_d\}_{d=1}^{\delta}$ are almost surely identifiable from the compressed data y up to a common column permutation and scaling.
Various additional results possible.
Latest on PARAFAC uniqueness
Luca Chiantini and Giorgio Ottaviani, "On Generic Identifiability of 3-Tensors of Small Rank," SIAM J. Matrix Anal. & Appl., 33(3), 1018-1037:
Consider an $I \times J \times K$ tensor X of rank F, and order the dimensions so that $I \le J \le K$.
Let i be maximal such that $2^i \le I$, and likewise j maximal such that $2^j \le J$.
If $F \le 2^{i+j-2}$, then X has a unique decomposition almost surely.
For I, J powers of 2, the condition simplifies to $F \le \frac{IJ}{4}$.
More generally, the condition implies: if $F \le \frac{(I+1)(J+1)}{16}$, then X has a unique decomposition almost surely.
Even further compression
Assume that the columns of A, B, C are sparse, and let $n_a$ ($n_b$, $n_c$) be an upper bound on the number of nonzero elements per column of A (respectively B, C). Let the mode-compression matrices U ($I \times L$, $L \le I$), V ($J \times M$, $M \le J$), and W ($K \times N$, $N \le K$) be randomly drawn from an absolutely continuous distribution with respect to the Lebesgue measure in $\mathbb{R}^{IL}$, $\mathbb{R}^{JM}$, and $\mathbb{R}^{KN}$, respectively. Assume $L \le M \le N$, and L, M powers of 2, for simplicity. If $r_A = r_B = r_C = F$,
$$LM \ge 4F, \qquad N \ge M \ge L,$$
and $L \ge 2n_a$, $M \ge 2n_b$, $N \ge 2n_c$, then the original factor loadings A, B, C are almost surely identifiable from the compressed data up to a common column permutation and scaling.
Allows compression down to order $\sqrt{F}$ in all three modes.
What if A, B, C are not sparse?
If A, B, C are sparse with respect to known bases, i.e., $A = R\check{A}$, $B = S\check{B}$, and $C = T\check{C}$, with R, S, T the respective sparsifying bases and $\check{A}$, $\check{B}$, $\check{C}$ sparse, then the previous results carry over under appropriate conditions, e.g., when R, S, T are non-singular.
OK, but what if such bases cannot be found?
PARACOMP: PArallel RAndomly COMPressed Cubes
Figure: the $I \times J \times K$ tensor X is forked into P replicas; replica p is multi-way compressed by $(U_p, V_p, W_p)$ into the small $L_p \times M_p \times N_p$ tensor $Y_p$; each $Y_p$ is factored independently into $(\tilde{A}_p, \tilde{B}_p, \tilde{C}_p)$; and the results are joined via least squares to recover (A, B, C).
PARACOMP
Assume $\tilde{A}_p$, $\tilde{B}_p$, $\tilde{C}_p$ identifiable from $Y_p$ (up to permutation and scaling of columns). Upon factoring $Y_p$ into F rank-one components, we obtain
$$\tilde{A}_p = U_p^T A \Pi_p \Lambda_p. \quad (1)$$
Assume the first 2 columns of each $U_p$ are common, let $\bar{U}$ denote this common part, and let $\bar{A}_p :=$ the first two rows of $\tilde{A}_p$. Then
$$\bar{A}_p = \bar{U}^T A \Pi_p \Lambda_p.$$
Dividing each column of $\bar{A}_p$ by the element of maximum modulus in that column, and denoting the resulting $2 \times F$ matrix $\hat{A}_p$,
$$\hat{A}_p = \bar{U}^T A \Lambda \Pi_p.$$
$\Lambda$ does not affect the ratio of elements in each $2 \times 1$ column. If the ratios are distinct, then the permutations can be matched by sorting the ratios of the two coordinates of each $2 \times 1$ column of $\hat{A}_p$.
PARACOMP
In practice, using a few more anchor rows will improve permutation-matching. When S anchor rows are used, the optimal permutation matching is cast as
$$\min_{\Pi} \left\| \hat{A}_1 - \hat{A}_p \Pi \right\|_F^2$$
Optimization over the set of permutation matrices - hard? Expanding,
$$\left\| \hat{A}_1 - \hat{A}_p \Pi \right\|_F^2 = \mathrm{Tr}\left( (\hat{A}_1 - \hat{A}_p \Pi)^T (\hat{A}_1 - \hat{A}_p \Pi) \right) = \left\| \hat{A}_1 \right\|_F^2 + \left\| \hat{A}_p \Pi \right\|_F^2 - 2\,\mathrm{Tr}(\hat{A}_1^T \hat{A}_p \Pi) = \left\| \hat{A}_1 \right\|_F^2 + \left\| \hat{A}_p \right\|_F^2 - 2\,\mathrm{Tr}(\hat{A}_1^T \hat{A}_p \Pi),$$
so the problem reduces to
$$\max_{\Pi} \mathrm{Tr}(\hat{A}_1^T \hat{A}_p \Pi),$$
a Linear Assignment Problem (LAP), with efficient solution via the Hungarian Algorithm.
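The LAP step maps directly onto an off-the-shelf assignment solver. A minimal sketch (my own illustration; scipy's `linear_sum_assignment` implements a Hungarian-style algorithm and minimizes, so we negate the score matrix):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_permutation(A1, Ap):
    """Column permutation of Ap best aligning it with A1,
    i.e. max_Pi Tr(A1^T Ap Pi), solved as a linear assignment problem."""
    score = A1.T @ Ap                 # score[f, g] = <A1[:, f], Ap[:, g]>
    row, col = linear_sum_assignment(-score)   # maximize total score
    return col                        # Ap[:, col] lines up with A1

# Demo: scramble the columns of an orthonormal factor and re-match them
rng = np.random.default_rng(0)
A1 = np.linalg.qr(rng.standard_normal((5, 3)))[0]
perm = np.array([2, 0, 1])
Ap = A1[:, perm]
col = match_permutation(A1, Ap)
```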
PARACOMP
After permutation-matching, go back to (1) and permute columns, obtaining $\breve{A}_p$ satisfying
$$\breve{A}_p = U_p^T A \Pi \Lambda_p.$$
It remains to get rid of $\Lambda_p$. For this, we can again resort to the first two common rows, and divide each column of $\breve{A}_p$ by its top element:
$$\check{A}_p = U_p^T A \Pi \Lambda.$$
For recovery of A up to permutation/scaling of columns, we then require that the stacked system
$$\begin{bmatrix} \check{A}_1 \\ \vdots \\ \check{A}_P \end{bmatrix} = \begin{bmatrix} U_1^T \\ \vdots \\ U_P^T \end{bmatrix} A \Pi \Lambda \quad (2)$$
have a full-column-rank matrix $[U_1^T; \dots; U_P^T]$.
PARACOMP
This implies that
$$2 + \sum_{p=1}^{P} (L_p - 2) \ge I.$$
Every sub-matrix contains the two common anchor rows, and duplicate rows do not increase rank. Once the dimensionality requirement is met, full rank holds with probability 1, since the non-redundant entries are drawn from a jointly continuous distribution (by design). Assuming $L_p = L$, $\forall p \in \{1,\dots,P\}$:
$$P \ge \frac{I-2}{L-2}.$$
The column permutation of $\tilde{A}_p$, $\tilde{B}_p$, $\tilde{C}_p$ is common, so overall
$$P \ge \max\left( \frac{I-2}{L-2}, \frac{J}{M}, \frac{K}{N} \right)$$
PARACOMP
If the compression ratios in the different modes are similar, it makes sense to use the longest mode for anchoring; if this is the last mode, then
$$P \ge \max\left( \frac{I}{L}, \frac{J}{M}, \frac{K-2}{N-2} \right)$$
Theorem: Assume that $F \le I \le J \le K$, and A, B, C are full column rank (F). Further assume that $L_p = L$, $M_p = M$, $N_p = N$, $\forall p \in \{1,\dots,P\}$, $L \le M \le N$, $(L+1)(M+1) \ge 16F$, random $\{U_p\}_{p=1}^{P}$, $\{V_p\}_{p=1}^{P}$, and each $W_p$ containing two common anchor columns but otherwise random $\{W_p\}_{p=1}^{P}$. Then $(\tilde{A}_p, \tilde{B}_p, \tilde{C}_p)$ is unique up to column permutation and scaling. If, in addition, $P \ge \max\left( \frac{I}{L}, \frac{J}{M}, \frac{K-2}{N-2} \right)$, then (A, B, C) are almost surely identifiable from $\{(\tilde{A}_p, \tilde{B}_p, \tilde{C}_p)\}_{p=1}^{P}$ up to a common column permutation and scaling.
PARACOMP - Significance
Indicative of a family of results that can be derived. The Theorem shows that fully parallel computation of the big tensor decomposition is possible - the first result that guarantees identifiability of the big tensor decomposition from the small tensor decompositions, without stringent additional constraints.
Corollary: If $\frac{K-2}{N-2} = \max\left( \frac{I}{L}, \frac{J}{M}, \frac{K-2}{N-2} \right)$, then the memory/storage and computational complexity savings afforded by PARACOMP relative to brute-force computation are of order $\frac{IJ}{F}$.
Note on the complexity of solving the master join equation: after removing redundant rows, the system matrix in (2) will have approximately orthogonal columns for large I, so its left pseudo-inverse is approximately its transpose, for complexity of order $I^2 F$.
PARACOMP
Cases where $F > \min(I, J, K)$ can be treated using Kruskal's condition; total storage and complexity gains are then of order $\frac{IJ}{F^2}$.
Latent sparsity: Assume that every column of A (B, C) has at most $n_a$ (resp. $n_b$, $n_c$) nonzero elements. A column of A can be uniquely recovered from only $2n_a$ incoherent linear equations. Therefore, we may replace the condition
$$P \ge \max\left( \frac{I}{L}, \frac{J}{M}, \frac{K-2}{N-2} \right)$$
with
$$P \ge \max\left( \frac{2n_a}{L}, \frac{2n_b}{M}, \frac{2n_c - 2}{N - 2} \right).$$
Color of compressed noise
$Y = X + Z$, where Z is zero-mean additive white noise; $y = x + z$, with $y := \mathrm{vec}(Y)$, $x := \mathrm{vec}(X)$, $z := \mathrm{vec}(Z)$.
Multi-way compression $Y \to Y_c$:
$$y_c := \mathrm{vec}(Y_c) = \left( U^T \otimes V^T \otimes W^T \right) y = \left( U^T \otimes V^T \otimes W^T \right) x + \left( U^T \otimes V^T \otimes W^T \right) z.$$
Let $z_c := \left( U^T \otimes V^T \otimes W^T \right) z$. Clearly, $E[z_c] = 0$; it can be shown that
$$E\left[ z_c z_c^T \right] = \sigma^2 \left( (U^T U) \otimes (V^T V) \otimes (W^T W) \right).$$
If U, V, W are orthonormal, then the noise in the compressed domain is white.
Universal LS
$E[z_c] = 0$, and $E\left[ z_c z_c^T \right] = \sigma^2 \left( (U^T U) \otimes (V^T V) \otimes (W^T W) \right)$.
For large I and U drawn from a zero-mean, unit-variance, uncorrelated distribution, $U^T U \approx I$ (the identity) by the law of large numbers. Furthermore, even if z is not Gaussian, $z_c$ will be approximately Gaussian for large IJK, by the Central Limit Theorem.
It follows that least-squares fitting is approximately optimal in the compressed domain, even if it is not so in the uncompressed domain. Compression thus makes least-squares fitting universal!
Component energy preserved after compression
Consider randomly compressing a rank-one tensor $X = a \circ b \circ c$, written in vectorized form as $x = a \otimes b \otimes c$. The compressed tensor, in vectorized form, is
$$\left( U^T \otimes V^T \otimes W^T \right) (a \otimes b \otimes c) = (U^T a) \otimes (V^T b) \otimes (W^T c).$$
It can be shown that, for moderate L, M, N and beyond, the Frobenius norm of a compressed rank-one tensor is approximately proportional to the Frobenius norm of the corresponding uncompressed rank-one component of the original tensor. In other words: compression approximately preserves component energy order. Low-rank least-squares approximation of the compressed tensor thus approximately corresponds to low-rank least-squares approximation of the big tensor.
Match component permutations across replicas by sorting component energies.
PARACOMP: Numerical results
Nominal setup: I = J = K = 500; F = 5; A, B, C ~ randn(500,5); L = M = N = 50 (each replica = 0.1% of the big tensor); P = 12 replicas (overall cloud storage = 1.2% of the big tensor); S = 3 (vs. $S_{\min} = 2$) anchor rows. Satisfies identifiability without much slack. Additive WGN with std σ = 0.01.
COMFAC (www.ece.umn.edu/~nikos) used for all factorizations, big and small.
PARACOMP: MSE as a function of L = M = N
Fix P = 12, vary L = M = N (I = J = K = 500; F = 5; σ = 0.01; S = 3).
Figure: squared error $\|A - \hat{A}\|_F^2$ vs. L = M = N, for PARACOMP and for direct fitting with no compression. At L = M = N = 50, the P = 12 processors each hold 0.1% of the full data (1.2% overall); at L = M = N = 150, the cloud holds about 32% of the full data.
PARACOMP: MSE as a function of P
Fix L = M = N = 50, vary P (I = J = K = 500; F = 5; σ = 0.01; S = 3).
Figure: squared error $\|A - \hat{A}\|_F^2$ vs. P, for PARACOMP and for direct fitting with no compression - from 1.2% of the full data at P = 12 (0.1% per processor) to 12% of the full data at P = 120.
PARACOMP: MSE vs. AWGN variance $\sigma^2$
Fix L = M = N = 50, P = 12, vary $\sigma^2$ (I = J = K = 500; F = 5; S = 3).
Figure: squared error $\|A - \hat{A}\|_F^2$ vs. $\sigma^2$, for PARACOMP and for direct fitting with no compression.
Missing elements
Recommender systems, NELL, many other datasets: over 90% of the values are missing!
PARACOMP to the rescue: a fortuitous fringe benefit of compression (rather: of taking linear combinations)!
Let T denote the set of all elements, and $\Psi$ the set of available elements. Consider one element of the compressed tensor, as it would have been computed had all elements been available, and as it can be computed from the available elements (notice the normalization - important!):
$$Y_\nu(l, m, n) = \frac{1}{|T|} \sum_{(i,j,k) \in T} u_l(i)\, v_m(j)\, w_n(k)\, X(i,j,k)$$
$$\tilde{Y}_\nu(l, m, n) = \frac{1}{E[|\Psi|]} \sum_{(i,j,k) \in \Psi} u_l(i)\, v_m(j)\, w_n(k)\, X(i,j,k)$$
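The role of the normalization is easy to see in simulation: dividing the partial sum by $E[|\Psi|] = \rho|T|$ (under a Bernoulli miss model with parameter $\rho$) makes the masked estimate an unbiased proxy for the fully-observed compressed element. A small numpy sketch of my own (illustrative, one compressed element only):

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, K = 8, 7, 6
X = rng.standard_normal((I, J, K))
u = rng.standard_normal(I)   # one row each of U^T, V^T, W^T
v = rng.standard_normal(J)
w = rng.standard_normal(K)

rho = 0.5                                  # Bernoulli miss-model parameter
mask = rng.random((I, J, K)) < rho         # available entries (the set Psi)

T = I * J * K
weights = np.einsum('i,j,k->ijk', u, v, w)
Y_full = (weights * X).sum() / T                   # all elements available
Y_hat = (weights * X * mask).sum() / (rho * T)     # normalized by E[|Psi|]

# Sanity check: with rho = 1 (nothing missing) the two estimates coincide
Y_hat_all = (weights * X).sum() / (1.0 * T)

# Unbiasedness: averaging Y_hat over many random masks approaches Y_full
masks = rng.random((2000, I, J, K)) < rho
mc = (weights * X * masks).sum(axis=(1, 2, 3)).mean() / (rho * T)
```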
Missing elements
Theorem [Marcos & Sidiropoulos, IEEE ISCCSP 2014]: Assume a Bernoulli i.i.d. miss model with parameter $\rho = \mathrm{Prob}[(i,j,k) \in \Psi]$, and let $X(i,j,k) = \sum_{f=1}^{F} a_f(i)\, b_f(j)\, c_f(k)$, where the elements of $a_f$, $b_f$ and $c_f$ are all i.i.d. random variables drawn from $a_f(i) \sim P_a(\mu_a, \sigma_a)$, $b_f(j) \sim P_b(\mu_b, \sigma_b)$, and $c_f(k) \sim P_c(\mu_c, \sigma_c)$, with $p_a := \mu_a^2 + \sigma_a^2$, $p_b := \mu_b^2 + \sigma_b^2$, $p_c := \mu_c^2 + \sigma_c^2$, and $F' := (F-1)$. Then, for $\mu_a$, $\mu_b$, $\mu_c$ all $\ne 0$,
$$\frac{E\left[ \| E_\nu \|_F^2 \right]}{E\left[ \| Y_\nu \|_F^2 \right]} \le \frac{1-\rho}{\rho\, |T|} \left(1 + \frac{\sigma_U^2}{\mu_u^2}\right)\left(1 + \frac{\sigma_V^2}{\mu_v^2}\right)\left(1 + \frac{\sigma_W^2}{\mu_w^2}\right)\left( \frac{F'}{F} + \frac{p_a p_b p_c}{F \mu_a^2 \mu_b^2 \mu_c^2} \right)$$
Additional results in the paper.
Missing elements
Figure: SNR of the compressed tensor Y vs. the Bernoulli density parameter $\rho$, for rank(X) = 1 and $(\mu_X, \sigma_X) = (1, 1)$, at several sizes (I = J = K = 200; I = 400, J = K = 50; I = J = K = 50). Curves compare the actual SNR (using the expected and the actual number of available elements) with the theoretical SNR lower bound and the exact theoretical SNR.
Missing elements
Figure: SNR of the recovered loadings A vs. the Bernoulli density parameter $\rho$, for rank(X) = 1 and $(\mu_X, \sigma_X) = (1, 1)$, at sizes (I, J, K) = (200, 200, 200), (100, 100, 100), (50, 50, 50), (200, 100, 50), and (400, 50, 50).
Missing elements
Three-way (emission, excitation, sample) fluorescence spectroscopy data; slab randn-based imputation.
Figure: measured and imputed data; recovered latent spectra.
Works even with systematically missing data!