Understanding Big Data Spectral Clustering


Understanding Big Data Spectral Clustering

Romain Couillet, Florent Benaych-Georges

To cite this version: Romain Couillet, Florent Benaych-Georges. Understanding Big Data Spectral Clustering. 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Dec 2015, Cancun, Mexico. <hal >

HAL Id: hal
Submitted on 25 Sep 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Understanding Big Data Spectral Clustering

Romain Couillet (CentraleSupélec LSS, Université Paris-Sud, Gif-sur-Yvette, France), Florent Benaych-Georges (MAP5, UMR CNRS 8145, Université Paris Descartes, Paris, France)

Abstract: This article introduces an original approach to understanding the behavior of standard kernel spectral clustering algorithms (such as the Ng–Jordan–Weiss method) for large dimensional datasets. Precisely, using advanced methods from the field of random matrix theory and assuming Gaussian data vectors, we show that the Laplacian of the kernel matrix can asymptotically be well approximated by an analytically tractable equivalent random matrix. The study of the latter unveils the mechanisms at play and, in particular, the impact of the choice of the kernel function and some theoretical limits of the method. Despite our Gaussian assumption, we also observe that the predicted theoretical behavior is a close match to that experienced on real datasets (taken from the MNIST database).

I. INTRODUCTION

Letting x_1, ..., x_n ∈ R^p be n data vectors, kernel spectral clustering consists in a variety of algorithms designed to cluster these data in an unsupervised manner by retrieving information from the leading eigenvectors of (a possibly modified version of) the so-called kernel matrix K = {K_ij}_{i,j=1}^n with, e.g., K_ij = f(||x_i − x_j||^2/p) for some (usually decreasing) f : R^+ → R^+. There are multiple reasons (see, e.g., [1]) to expect that the aforementioned eigenvectors contain information about the optimal data clustering. One of the most prominent of those was put forward by Ng, Jordan, and Weiss in [2], who notice that, if the data are ideally well split in k classes C_1, ..., C_k that ensure f(||x_i − x_j||^2/p) = 0 if and only if x_i and x_j belong to distinct classes, then the eigenvectors associated with the k smallest eigenvalues of I_n − D^{−1/2} K D^{−1/2}, with D ≜ D(K 1_n), live in the span of the canonical class-wise basis vectors. In the non-trivial case where such a separating f does not exist, one would thus
expect the leading eigenvectors to be instead perturbed versions of such indicator vectors. We shall precisely study the matrix I_n − D^{−1/2} K D^{−1/2} in this article. Nonetheless, despite this conspicuous argument, very little is known about the performance of kernel spectral clustering in actual working conditions. In particular, to the authors' knowledge, there exists no contribution addressing the case of arbitrary p and n. In this article, we propose a new approach consisting in assuming that both p and n are large, and in exploiting recent results from random matrix theory. Our method is inspired by [3], which studies the asymptotic distribution of the eigenvalues of K for i.i.d. vectors x_i. We generalize [3] here by assuming that the x_i's are drawn from a mixture of k Gaussian vectors having means μ_1, ..., μ_k and covariances C_1, ..., C_k. We then go further by studying the resulting model and showing that L′ = nD^{−1/2} K D^{−1/2} can be approximated by a matrix of the so-called spiked model type [4], [5], that is, a matrix with clustered eigenvalues and a few isolated outliers. Among other results, our main findings are: in the large n, p regime, only a very local aspect of the kernel function really matters for clustering; there exists a critical growth regime (with p and n) of the μ_i's and C_i's for which spectral clustering leads to non-trivial misclustering probability; we precisely analyze elementary toy models, in which the number of exploitable eigenvectors and the influence of the kernel function may vary significantly. On top of these theoretical findings, we shall observe that, quite unexpectedly, the kernel spectral algorithms behave similarly to our theoretical findings on real datasets. We precisely see that clustering performed upon a subset of the MNIST (handwritten digits) database behaves as though the vectorized images were extracted from a Gaussian mixture.

Couillet's work is supported by RMT4GRAPH (ANR-14-CE ).

Notations: The norm ||·|| stands for the Euclidean norm for
matrices. The vector 1_m ∈ R^m stands for the vector filled with ones. The operator D(v) = D({v_a}_{a=1}^k) is the diagonal matrix having v_1, ..., v_k (scalars or vectors) down its diagonal. The Dirac mass at x is δ_x. Almost sure convergence is denoted →_{a.s.} and convergence in distribution →_D.

II. MODEL AND THEORETICAL RESULTS

Let x_1, ..., x_n ∈ R^p be independent vectors with x_{n_1+...+n_{l−1}+1}, ..., x_{n_1+...+n_l} ∈ C_l for each l ∈ {1, ..., k}, where n_0 = 0 and n_1 + ... + n_k = n. Class C_a encompasses data x_i = μ_a + w_i for some μ_a ∈ R^p and w_i ~ N(0, C_a), with C_a ∈ R^{p×p} nonnegative definite. We shall consider the large dimensional regime where both n and p grow simultaneously large. In this regime, we shall require the μ_i's and C_i's to behave in a precise manner. As a matter of fact, we may state as a first result that the following set of assumptions forms the exact regime under which spectral clustering is a non-trivial problem.

Assumption 1 (Growth Rate): As n → ∞, p/n → c_0 > 0 and n_a/n → c_a > 0 (we will write c = [c_1, ..., c_k]^T). Besides,
1) For μ° ≜ Σ_{a=1}^k (n_a/n) μ_a and μ_a° ≜ μ_a − μ°, ||μ_a°|| = O(1).
2) For C° ≜ Σ_{a=1}^k (n_a/n) C_a and C_a° ≜ C_a − C°, ||C_a|| = O(1) and tr C_a° = O(√n).
3) As p → ∞, (2/p) tr C° → τ > 0.

The value τ is important since (1/p)||x_i − x_j||^2 →_{a.s.} τ uniformly over i ≠ j in {1, ..., n}.
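This concentration is easy to visualize numerically. The following minimal sketch (ours, not from the paper; it assumes for illustration k = 2 classes with C_1 = C_2 = I_p, so that τ = (2/p) tr C° = 2, and means of norm O(1) as in Assumption 1) checks that all rescaled pairwise distances cluster around τ:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 400, 40                      # large dimension p, ratio p/n = c_0 = 10
mu2 = np.zeros(p); mu2[0] = 1.0     # ||mu_1 - mu_2|| = O(1), as in Assumption 1
X = np.concatenate([rng.normal(size=(n // 2, p)),        # class 1: N(0, I_p)
                    rng.normal(size=(n // 2, p)) + mu2]) # class 2: N(mu2, I_p)

tau = 2.0                           # tau = (2/p) tr(C°) with C° = I_p
# all pairwise rescaled distances (1/p)||x_i - x_j||^2
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / p
dev = np.abs(sq - tau)[~np.eye(n, dtype=bool)]  # deviations over i != j
print(dev.max(), np.median(dev))    # O(p^{-1/2}) fluctuations around tau
```

The key point, as in the text, is that class information (means and covariance differences) only enters these distances at vanishing order, which is why a Taylor expansion of f around τ drives the analysis below.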

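The role of the Laplacian normalization recalled in the introduction can also be illustrated on a toy example. The sketch below (ours; it uses two deliberately well-separated Gaussian classes, far from the critical regime studied in this paper, and the kernel f(x) = exp(−x/2) later used in Section IV) verifies that D^{1/2} 1_n is an exact eigenvector of n D^{−1/2} K D^{−1/2} with eigenvalue n, and that the sign pattern of the eigenvector attached to the second smallest eigenvalue of I_n − D^{−1/2} K D^{−1/2} recovers the two classes:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 50, 80
# two classes, deliberately well separated so that f(.) is near 0 across classes
X = np.concatenate([rng.normal(size=(n // 2, p)),
                    rng.normal(size=(n // 2, p)) + 4.0])
labels = np.repeat([0, 1], n // 2)

f = lambda x: np.exp(-x / 2)                       # kernel used in Section IV
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / p
K = f(sq)
d = K.sum(1)                                       # degree vector, D = D(K 1_n)
L = n * K / np.sqrt(np.outer(d, d))                # n D^{-1/2} K D^{-1/2}

# D^{1/2} 1_n is an exact eigenvector with eigenvalue n, since K 1_n = D 1_n
v = np.sqrt(d)
res = np.linalg.norm(L @ v - n * v) / np.linalg.norm(v)

# the next eigenvector of the normalized Laplacian splits the classes by sign
w, V = np.linalg.eigh(np.eye(n) - L / n)           # I_n - D^{-1/2} K D^{-1/2}
u = V[:, 1]                                        # second smallest eigenvalue
pred = (u > 0).astype(int)
acc = max(np.mean(pred == labels), np.mean(pred != labels))  # up to label flip
print(res, acc)
```

The eigenpair identity is exact algebra (K 1_n = D 1_n), while the class recovery relies on the near block structure of K for well-separated classes; the theory developed next quantifies what survives of this picture when classes are barely separated.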
We now define the kernel function as follows.

Assumption 2 (Kernel function): The function f is three-times continuously differentiable around τ and f(τ) > 0.

Having defined f, we introduce the kernel matrix

K ≜ { f( (1/p) ||x_i − x_j||^2 ) }_{i,j=1}^n.

From the previous remark on τ, note that all non-diagonal elements of K tend to f(τ), and thus K can be point-wise developed using a Taylor expansion. However, our interest is in (a slightly modified form of) the Laplacian matrix

L ≜ n D^{−1/2} K D^{−1/2}

where D = D(K 1_n) is usually referred to as the degree matrix. Under Assumption 1, L is essentially a rank-one matrix with D^{1/2} 1_n as leading eigenvector (with n as eigenvalue). To avoid technical difficulties, we shall study the equivalent matrix^2

L′ ≜ n D^{−1/2} K D^{−1/2} − n (D^{1/2} 1_n 1_n^T D^{1/2}) / (1_n^T D 1_n)    (1)

which we shall show to have all its eigenvalues of order O(1). Our main technical result shows that there is a matrix L̂′ such that ||L′ − L̂′|| →_{a.s.} 0, where L̂′ follows a tractable random matrix model. Before introducing the latter, we need the following fundamental deterministic element notations^3

M ≜ [μ_1°, ..., μ_k°] ∈ R^{p×k}
t ≜ { (1/√p) tr C_a° }_{a=1}^k ∈ R^k
T ≜ { (1/p) tr C_a° C_b° }_{a,b=1}^k ∈ R^{k×k}
J ≜ [j_1, ..., j_k] ∈ R^{n×k}
P ≜ I_n − (1/n) 1_n 1_n^T ∈ R^{n×n}

where j_a ∈ R^n is the canonical vector of class C_a, defined by (j_a)_i = δ_{x_i ∈ C_a}, as well as the random element notations

W ≜ [w_1, ..., w_n] ∈ R^{p×n}
Φ ≜ (1/√p) W^T M ∈ R^{n×k}
ψ ≜ (1/p) { ||w_i||^2 − E[||w_i||^2] }_{i=1}^n ∈ R^n.

Theorem 1 (Random Matrix Equivalent): Let Assumptions 1 and 2 hold and let L′ be defined by (1). Then, as n → ∞, ||L′ − L̂′|| →_{a.s.} 0, where L̂′ is given by

L̂′ ≜ (2 f′(τ)/f(τ)) [ (1/p) P W^T W P + U B U^T ] + (F(τ)/f(τ)) I_n

with F(τ) = f(0) − f(τ) + τ f′(τ), U ≜ [ (1/√p) J, Φ, ψ ], and

B ≜ [ B_11                      I_k − 1_k c^T    (5 f′(τ)/(8 f(τ))) t
      I_k − c 1_k^T             0_{k×k}          0_k
      (5 f′(τ)/(8 f(τ))) t^T    0_k^T            5 f′(τ)/(8 f(τ)) ]

B_11 ≜ M^T M + ( 5 f′(τ)/(8 f(τ)) − f″(τ)/(2 f′(τ)) ) t t^T − (f″(τ)/f′(τ)) T + (p/n) F(τ) 1_k 1_k^T

and the case f′(τ) = 0 is obtained by extension by continuity (the limit f′(τ)B being well defined as f′(τ) → 0).

From a mathematical standpoint, excluding the identity matrix, when f′(τ) ≠ 0, L̂′ follows a spiked random matrix model; that is, its eigenvalues congregate in bulks except for a few isolated eigenvalues, the eigenvectors of which align to some extent with the eigenvectors of U B U^T. When f′(τ) = 0, L̂′ is merely a small rank matrix. In both cases, the isolated eigenvalue-eigenvector pairs of L̂′ are amenable to analysis.

From a practical aspect, note that U is notably constituted by the vectors j_a, while B contains the information about the inter-class mean deviations through M, and about the inter-class covariance deviations through t and T. As such, the aforementioned isolated eigenvalue-eigenvector pairs are expected to correlate with the canonical class basis J, and all the more so that M, t, T have sufficiently strong norm. From the point of view of the kernel function f, note that, if f′(τ) = 0, then M vanishes from the expression of L̂′, thus not allowing spectral clustering to rely on differences in means. Similarly, if f″(τ) = 0, then T vanishes, and thus differences in shape between the covariance matrices cannot be discriminated upon. Finally, if 5 f′(τ)/(8 f(τ)) = f″(τ)/(2 f′(τ)), then differences in covariance traces are seemingly not exploitable.

Before introducing our main results, we need the following technical assumption, which ensures that (1/p) P W^T W P does not in general produce isolated eigenvalues itself (and thus, that the isolated eigenvalues of L̂′ are solely due to U B U^T).

Assumption 3 (Spike control): With λ_1(C_a) ≥ ... ≥ λ_p(C_a) the eigenvalues of C_a, for each a, as n → ∞, (1/p) Σ_{i=1}^p δ_{λ_i(C_a)} →_D ν_a, with support supp(ν_a), and max_{1≤i≤p} dist(λ_i(C_a), supp(ν_a)) → 0.

Theorem 2 (Isolated eigenvalues^4): Let Assumptions 1-3 hold and define, for z ∈ R, the k×k matrix

G_z = h(τ, z) I_k + D_{τ,z} − Γ_z

2: It is clearly equivalent to study L or L′, which have the same eigenvalue-eigenvector pairs except for the pair (n, D^{1/2} 1_n) of L turned into (0, D^{1/2} 1_n) for L′.
3: Capital M stands here for means, while t, T account for the vector and matrix of traces, and P for a projection matrix (onto the orthogonal complement of 1_n).
4: Again here, the case f′(τ) = 0 is obtained by extension by continuity.

where

h(τ, z) = 1 + ( 5 f′(τ)/(8 f(τ)) − f″(τ)/(2 f′(τ)) ) Σ_{i=1}^k c_i g_i(z)^2 (2/p) tr C_i^2
D_{τ,z} = h(τ, z) M^T ( I_p + Σ_{j=1}^k c_j g_j(z) C_j )^{−1} M − h(τ, z) (f″(τ)/f′(τ)) T
Γ_z = D( {c_a g_a(z)}_{a=1}^k ) − { c_a g_a(z) c_b g_b(z) / Σ_{i=1}^k c_i g_i(z) }_{a,b=1}^k

and g_1(z), ..., g_k(z) are, for well chosen z, the unique solutions to the system

c_0 g_a(z) = ( −z + (1/p) tr C_a ( I_p + Σ_{i=1}^k c_i g_i(z) C_i )^{−1} )^{−1}.

Let ρ, away from the eigenvalue support of (1/p) P W^T W P, be such that h(τ, ρ) ≠ 0 and G_ρ has a zero eigenvalue of multiplicity m_ρ. Then there exist m_ρ eigenvalues of L′ asymptotically close to

2 (f′(τ)/f(τ)) ρ + (f(0) − f(τ) + τ f′(τ))/f(τ).

We now turn to the more interesting results concerning the eigenvectors. These results are divided into two formulas, concerning (i) the eigenvector D^{1/2} 1_n associated with the eigenvalue n of L and (ii) the remaining eigenvectors associated with the eigenvalues exhibited in Theorem 2.

Proposition 1 (Eigenvector D^{1/2} 1_n): Let Assumptions 1 and 2 hold true. Then, for some ϕ ~ N(0, I_n), almost surely,

D^{1/2} 1_n / √(1_n^T D 1_n) = (1/√n) [ 1_n + (f′(τ)/(2 f(τ))) (1/√(c_0 n)) ( { t_a 1_{n_a} }_{a=1}^k + D( { √( (2/p) tr C_a^2 ) 1_{n_a} }_{a=1}^k ) ϕ ) ] + o(1).

Theorem 3 (Eigenvector projections): Let Assumptions 1-3 hold. Let also λ^p_{j+1}, ..., λ^p_{j+m_ρ} be isolated eigenvalues of L′, all converging to ρ as per Theorem 2, and Π_ρ the projector on the eigenspace associated with these eigenvalues. Then,

(1/n) J^T Π_ρ J = Σ_{i=1}^{m_ρ} h(τ, ρ) (V_{r,ρ})_i (V_{l,ρ})_i^T / ( (V_{l,ρ})_i^T G′_ρ (V_{r,ρ})_i ) + o(1)

almost surely, where V_{r,ρ}, V_{l,ρ} ∈ C^{k×m_ρ} are sets of right and left eigenvectors of G_ρ associated with the eigenvalue zero, and G′_ρ is the derivative of G_z along z taken at z = ρ.

From Proposition 1, we get that D^{1/2} 1_n is centered around the sum of the class-wise vectors t_a j_a, with fluctuations of amplitude √((2/p) tr C_a^2). As for Theorem 3, it states that, as p, n grow large, the alignment between the isolated eigenvectors of L′ and the canonical class basis j_1, ..., j_k tends to be deterministic in a theoretically tractable manner. In particular, the quantity

(1/n) tr ( D(c^{−1/2}) J^T Π_ρ J D(c^{−1/2}) ) ∈ [0, m_ρ]

evaluates the alignment between Π_ρ and the canonical class basis, thus providing a first hint at the expected performance of spectral clustering. A second interest of Theorem 3 is that, for eigenvectors û of L′ of multiplicity one (so that Π_ρ = û û^T), the diagonal elements of (1/n) D(c^{−1/2}) J^T Π_ρ J D(c^{−1/2}) provide the squared mean values of the first n_1, then next n_2, etc., elements of û. The off-diagonal elements of (1/n) D(c^{−1/2}) J^T Π_ρ J D(c^{−1/2}) then allow one to decide on the signs of û^T j_i for each i. These pieces of information are crucial to estimate the expected performance of spectral clustering. However, the statements of Theorems 2 and 3 are difficult to interpret as they stand. They become more explicit when applied to simpler scenarios and then allow one to draw interesting conclusions. This is the target of the next section.

III. SPECIAL CASES

In this section, we apply Theorems 2 and 3 to the cases where: (i) C_i = β I_p for all i, with β > 0; (ii) all μ_i's are equal and C_i = (1 + γ_i/√p) β I_p.

Assume first that C_i = β I_p for all i. Then, letting ℓ be an isolated eigenvalue of β I_p + M D(c) M^T, we get that, if

ℓ − β > β √c_0    (2)

then the matrix L′ has an eigenvalue (asymptotically) equal to

(2 f′(τ)/f(τ)) ( ℓ/c_0 + β ℓ/(ℓ − β) ) + (f(0) − f(τ) + τ f′(τ))/f(τ).    (3)

Besides, we find that

(1/n) J^T Π_ρ J = ( 1/ℓ − c_0 β^2 / (ℓ (β − ℓ)^2) ) D(c) M^T Υ_ρ Υ_ρ^T M D(c) + o(1)

almost surely, where Υ_ρ ∈ R^{p×m_ρ} are the eigenvectors of β I_p + M D(c) M^T associated with the eigenvalue ℓ. Aside from the simplicity of the result itself, note that the choice of f is (asymptotically) irrelevant here. Note also that M D(c) M^T plays an important role, as its eigenvectors rule the behavior of the eigenvectors of L′ used for clustering.

Assume now instead that, for each i, μ_i = μ and C_i = (1 + γ_i/√p) β I_p for some fixed γ_1, ..., γ_k ∈ R, and denote γ = [γ_1, ..., γ_k]^T. Then, if condition (2) is met, we now find after calculus that there exists at most one isolated eigenvalue in L′ (besides n), again equal in the limit to (3), but now for

ℓ = β^2 ( 5 f′(τ)/(8 f(τ)) − f″(τ)/(2 f′(τ)) ) ( 2 + Σ_{i=1}^k c_i γ_i^2 ).

Moreover,

(1/n) J^T Π_ρ J = ( (1 − c_0 β^2/(β − ℓ)^2) / Σ_{i=1}^k c_i γ_i^2 ) D(c) γ γ^T D(c) + o_P(1).

If (2) is not met, there is no isolated eigenvalue besides n. We note here the importance of an appropriate choice of f. Also observe that (1/n) D(c^{−1/2}) J^T Π_ρ J D(c^{−1/2}) is proportional to

D(c^{1/2}) γ γ^T D(c^{1/2}), and thus the eigenvector aligns strongly with D(c^{1/2}) γ itself. The entries of D(c^{1/2}) γ should therefore be quite distinct to achieve good clustering performance.

Fig. 1. Samples from the MNIST database, without and with 10 dB noise.

IV. SIMULATIONS

We complete this article by demonstrating that our results, which in theory apply only to Gaussian x_i's, show a surprisingly similar behavior when applied to real datasets. Here we consider the clustering of n = 3 × 64 = 192 vectorized images of size p = 784 from the MNIST training set database (digits 0, 1, and 2, as shown in Figure 1). Means and covariances are empirically obtained from the full set of MNIST images. The matrix L′ is constructed based on f(x) = exp(−x/2). Figure 2 shows that the eigenvalues of L′ and L̂′, both in the main bulk and outside, are quite close to one another (precisely, ||L′ − L̂′||/||L′|| ≈ 0.1). As for the eigenvectors (displayed in decreasing eigenvalue order), they are in an almost perfect match, as shown in Figure 3. The latter also shows, in thick (blue) lines, the theoretical approximate (signed) diagonal values of (1/n) D(c^{−1/2}) J^T Π_ρ J D(c^{−1/2}), which also show an extremely accurate match to the empirical class-wise means. Here, the k-means algorithm applied to the four displayed eigenvectors has a correct clustering rate of 86%. Introducing a 10 dB random additive noise on the same MNIST data (see images in Figure 1) brings the approximation error down to ||L′ − L̂′||/||L′|| ≈ 0.04 and the k-means correct clustering probability to 78% (with only two theoretically exploitable eigenvectors instead of the previous four).

Fig. 2. Eigenvalues of L′ and L̂′, MNIST data, p = 784, n = 192.

Fig. 3. Leading four eigenvectors of L′ (red) versus L̂′ (black) and theoretical class-wise means (blue); MNIST data.

V. CONCLUDING REMARKS

The random matrix analysis of kernel matrices constitutes a first step towards a precise understanding of the underlying mechanism of kernel spectral clustering. Our first
theoretical findings allow one to already have a partial understanding of the leading kernel matrix eigenvectors on which clustering is based. Notably, we precisely identified the (asymptotic) linear combination of the class-basis canonical vectors around which the eigenvectors are centered. Currently on-going work aims at additionally studying the fluctuations of the eigenvectors around the identified means. With all this information, it shall then be possible to precisely evaluate the performance of algorithms such as k-means on the studied datasets. This innovative approach to spectral clustering analysis, we believe, will subsequently allow experimenters to get a clearer picture of the differences between the various classical spectral clustering algorithms (beyond the present Ng–Jordan–Weiss algorithm), and shall eventually allow for the development of finer and better performing techniques, in particular when dealing with high dimensional datasets.

REFERENCES

[1] U. von Luxburg, "A tutorial on spectral clustering," Statistics and Computing, vol. 17, no. 4, 2007.
[2] A. Y. Ng, M. Jordan, and Y. Weiss, "On spectral clustering: analysis and an algorithm," in Advances in Neural Information Processing Systems, vol. 14. Cambridge, MA: MIT Press, 2001.
[3] N. El Karoui, "The spectrum of kernel random matrices," The Annals of Statistics, vol. 38, no. 1, pp. 1-50, 2010.
[4] F. Benaych-Georges and R. R. Nadakuditi, "The singular values and vectors of low rank perturbations of large rectangular random matrices," Journal of Multivariate Analysis, vol. 111, pp. 120-135, 2012.
[5] F. Chapon, R. Couillet, W. Hachem, and X. Mestre, "The outliers among the singular values of large rectangular random matrices with additive fixed rank deformation," Markov Processes and Related Fields, vol. 20, 2014.


Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set

EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set Amhmed A. Bhih School of Electrical and Electronic Engineering Princy Johnson School of Electrical and Electronic Engineering Martin

More information

Statistical Machine Learning

Statistical Machine Learning Statistical Machine Learning UoC Stats 37700, Winter quarter Lecture 4: classical linear and quadratic discriminants. 1 / 25 Linear separation For two classes in R d : simple idea: separate the classes

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Physicians balance billing, supplemental insurance and access to health care

Physicians balance billing, supplemental insurance and access to health care Physicians balance billing, supplemental insurance and access to health care Izabela Jelovac To cite this version: Izabela Jelovac. Physicians balance billing, supplemental insurance and access to health

More information

BANACH AND HILBERT SPACE REVIEW

BANACH AND HILBERT SPACE REVIEW BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

Review Jeopardy. Blue vs. Orange. Review Jeopardy

Review Jeopardy. Blue vs. Orange. Review Jeopardy Review Jeopardy Blue vs. Orange Review Jeopardy Jeopardy Round Lectures 0-3 Jeopardy Round $200 How could I measure how far apart (i.e. how different) two observations, y 1 and y 2, are from each other?

More information

Principle Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression

Principle Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression Principle Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression Saikat Maitra and Jun Yan Abstract: Dimension reduction is one of the major tasks for multivariate

More information

The truck scheduling problem at cross-docking terminals

The truck scheduling problem at cross-docking terminals The truck scheduling problem at cross-docking terminals Lotte Berghman,, Roel Leus, Pierre Lopez To cite this version: Lotte Berghman,, Roel Leus, Pierre Lopez. The truck scheduling problem at cross-docking

More information

The Effectiveness of non-focal exposure to web banner ads

The Effectiveness of non-focal exposure to web banner ads The Effectiveness of non-focal exposure to web banner ads Marc Vanhuele, Didier Courbet, Sylvain Denis, Frédéric Lavigne, Amélie Borde To cite this version: Marc Vanhuele, Didier Courbet, Sylvain Denis,

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively. Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

More information

Towards Unified Tag Data Translation for the Internet of Things

Towards Unified Tag Data Translation for the Internet of Things Towards Unified Tag Data Translation for the Internet of Things Loïc Schmidt, Nathalie Mitton, David Simplot-Ryl To cite this version: Loïc Schmidt, Nathalie Mitton, David Simplot-Ryl. Towards Unified

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

Performance Evaluation of Encryption Algorithms Key Length Size on Web Browsers

Performance Evaluation of Encryption Algorithms Key Length Size on Web Browsers Performance Evaluation of Encryption Algorithms Key Length Size on Web Browsers Syed Zulkarnain Syed Idrus, Syed Alwee Aljunid, Salina Mohd Asi, Suhizaz Sudin To cite this version: Syed Zulkarnain Syed

More information

1 Example of Time Series Analysis by SSA 1

1 Example of Time Series Analysis by SSA 1 1 Example of Time Series Analysis by SSA 1 Let us illustrate the 'Caterpillar'-SSA technique [1] by the example of time series analysis. Consider the time series FORT (monthly volumes of fortied wine sales

More information

QUALITY ENGINEERING PROGRAM

QUALITY ENGINEERING PROGRAM QUALITY ENGINEERING PROGRAM Production engineering deals with the practical engineering problems that occur in manufacturing planning, manufacturing processes and in the integration of the facilities and

More information

Math 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010

Math 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010 Math 550 Notes Chapter 7 Jesse Crawford Department of Mathematics Tarleton State University Fall 2010 (Tarleton State University) Math 550 Chapter 7 Fall 2010 1 / 34 Outline 1 Self-Adjoint and Normal Operators

More information

Rank one SVD: un algorithm pour la visualisation d une matrice non négative

Rank one SVD: un algorithm pour la visualisation d une matrice non négative Rank one SVD: un algorithm pour la visualisation d une matrice non négative L. Labiod and M. Nadif LIPADE - Universite ParisDescartes, France ECAIS 2013 November 7, 2013 Outline Outline 1 Data visualization

More information

Adaptive Online Gradient Descent

Adaptive Online Gradient Descent Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650

More information

State of Stress at Point

State of Stress at Point State of Stress at Point Einstein Notation The basic idea of Einstein notation is that a covector and a vector can form a scalar: This is typically written as an explicit sum: According to this convention,

More information

SACOC: A spectral-based ACO clustering algorithm

SACOC: A spectral-based ACO clustering algorithm SACOC: A spectral-based ACO clustering algorithm Héctor D. Menéndez, Fernando E. B. Otero, and David Camacho Abstract The application of ACO-based algorithms in data mining is growing over the last few

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

Subspace Analysis and Optimization for AAM Based Face Alignment

Subspace Analysis and Optimization for AAM Based Face Alignment Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China zhaoming1999@zju.edu.cn Stan Z. Li Microsoft

More information

Multivariate Normal Distribution

Multivariate Normal Distribution Multivariate Normal Distribution Lecture 4 July 21, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2 Lecture #4-7/21/2011 Slide 1 of 41 Last Time Matrices and vectors Eigenvalues

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

Study on Cloud Service Mode of Agricultural Information Institutions

Study on Cloud Service Mode of Agricultural Information Institutions Study on Cloud Service Mode of Agricultural Information Institutions Xiaorong Yang, Nengfu Xie, Dan Wang, Lihua Jiang To cite this version: Xiaorong Yang, Nengfu Xie, Dan Wang, Lihua Jiang. Study on Cloud

More information

Towards Collaborative Learning via Shared Artefacts over the Grid

Towards Collaborative Learning via Shared Artefacts over the Grid Towards Collaborative Learning via Shared Artefacts over the Grid Cornelia Boldyreff, Phyo Kyaw, Janet Lavery, David Nutter, Stephen Rank To cite this version: Cornelia Boldyreff, Phyo Kyaw, Janet Lavery,

More information

Decision-making with the AHP: Why is the principal eigenvector necessary

Decision-making with the AHP: Why is the principal eigenvector necessary European Journal of Operational Research 145 (2003) 85 91 Decision Aiding Decision-making with the AHP: Why is the principal eigenvector necessary Thomas L. Saaty * University of Pittsburgh, Pittsburgh,

More information

Global Identity Management of Virtual Machines Based on Remote Secure Elements

Global Identity Management of Virtual Machines Based on Remote Secure Elements Global Identity Management of Virtual Machines Based on Remote Secure Elements Hassane Aissaoui, P. Urien, Guy Pujolle To cite this version: Hassane Aissaoui, P. Urien, Guy Pujolle. Global Identity Management

More information

The Quantum Harmonic Oscillator Stephen Webb

The Quantum Harmonic Oscillator Stephen Webb The Quantum Harmonic Oscillator Stephen Webb The Importance of the Harmonic Oscillator The quantum harmonic oscillator holds a unique importance in quantum mechanics, as it is both one of the few problems

More information

SPECTRAL POLYNOMIAL ALGORITHMS FOR COMPUTING BI-DIAGONAL REPRESENTATIONS FOR PHASE TYPE DISTRIBUTIONS AND MATRIX-EXPONENTIAL DISTRIBUTIONS

SPECTRAL POLYNOMIAL ALGORITHMS FOR COMPUTING BI-DIAGONAL REPRESENTATIONS FOR PHASE TYPE DISTRIBUTIONS AND MATRIX-EXPONENTIAL DISTRIBUTIONS Stochastic Models, 22:289 317, 2006 Copyright Taylor & Francis Group, LLC ISSN: 1532-6349 print/1532-4214 online DOI: 10.1080/15326340600649045 SPECTRAL POLYNOMIAL ALGORITHMS FOR COMPUTING BI-DIAGONAL

More information

What Development for Bioenergy in Asia: A Long-term Analysis of the Effects of Policy Instruments using TIAM-FR model

What Development for Bioenergy in Asia: A Long-term Analysis of the Effects of Policy Instruments using TIAM-FR model What Development for Bioenergy in Asia: A Long-term Analysis of the Effects of Policy Instruments using TIAM-FR model Seungwoo Kang, Sandrine Selosse, Nadia Maïzi To cite this version: Seungwoo Kang, Sandrine

More information

Surgical Tools Recognition and Pupil Segmentation for Cataract Surgical Process Modeling

Surgical Tools Recognition and Pupil Segmentation for Cataract Surgical Process Modeling Surgical Tools Recognition and Pupil Segmentation for Cataract Surgical Process Modeling David Bouget, Florent Lalys, Pierre Jannin To cite this version: David Bouget, Florent Lalys, Pierre Jannin. Surgical

More information

Machine Learning and Pattern Recognition Logistic Regression

Machine Learning and Pattern Recognition Logistic Regression Machine Learning and Pattern Recognition Logistic Regression Course Lecturer:Amos J Storkey Institute for Adaptive and Neural Computation School of Informatics University of Edinburgh Crichton Street,

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

LOOKING FOR A GOOD TIME TO BET

LOOKING FOR A GOOD TIME TO BET LOOKING FOR A GOOD TIME TO BET LAURENT SERLET Abstract. Suppose that the cards of a well shuffled deck of cards are turned up one after another. At any time-but once only- you may bet that the next card

More information

Mehtap Ergüven Abstract of Ph.D. Dissertation for the degree of PhD of Engineering in Informatics

Mehtap Ergüven Abstract of Ph.D. Dissertation for the degree of PhD of Engineering in Informatics INTERNATIONAL BLACK SEA UNIVERSITY COMPUTER TECHNOLOGIES AND ENGINEERING FACULTY ELABORATION OF AN ALGORITHM OF DETECTING TESTS DIMENSIONALITY Mehtap Ergüven Abstract of Ph.D. Dissertation for the degree

More information

Supervised Feature Selection & Unsupervised Dimensionality Reduction

Supervised Feature Selection & Unsupervised Dimensionality Reduction Supervised Feature Selection & Unsupervised Dimensionality Reduction Feature Subset Selection Supervised: class labels are given Select a subset of the problem features Why? Redundant features much or

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

More information

Nonlinear Iterative Partial Least Squares Method

Nonlinear Iterative Partial Least Squares Method Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 6 Three Approaches to Classification Construct

More information

LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA. September 23, 2010 LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

More information

Similar matrices and Jordan form

Similar matrices and Jordan form Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information

Solution of Linear Systems

Solution of Linear Systems Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

Ships Magnetic Anomaly Computation With Integral Equation and Fast Multipole Method

Ships Magnetic Anomaly Computation With Integral Equation and Fast Multipole Method Ships Magnetic Anomaly Computation With Integral Equation and Fast Multipole Method T. S. Nguyen, Jean-Michel Guichon, Olivier Chadebec, Patrice Labie, Jean-Louis Coulomb To cite this version: T. S. Nguyen,

More information

Application-Aware Protection in DWDM Optical Networks

Application-Aware Protection in DWDM Optical Networks Application-Aware Protection in DWDM Optical Networks Hamza Drid, Bernard Cousin, Nasir Ghani To cite this version: Hamza Drid, Bernard Cousin, Nasir Ghani. Application-Aware Protection in DWDM Optical

More information

Undulators and wigglers for the new generation of synchrotron sources

Undulators and wigglers for the new generation of synchrotron sources Undulators and wigglers for the new generation of synchrotron sources P. Elleaume To cite this version: P. Elleaume. Undulators and wigglers for the new generation of synchrotron sources. Journal de Physique

More information

Master s Theory Exam Spring 2006

Master s Theory Exam Spring 2006 Spring 2006 This exam contains 7 questions. You should attempt them all. Each question is divided into parts to help lead you through the material. You should attempt to complete as much of each problem

More information

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Peter Richtárik Week 3 Randomized Coordinate Descent With Arbitrary Sampling January 27, 2016 1 / 30 The Problem

More information

A modeling approach for locating logistics platforms for fast parcels delivery in urban areas

A modeling approach for locating logistics platforms for fast parcels delivery in urban areas A modeling approach for locating logistics platforms for fast parcels delivery in urban areas Olivier Guyon, Nabil Absi, Dominique Feillet, Thierry Garaix To cite this version: Olivier Guyon, Nabil Absi,

More information

Partial Least Squares (PLS) Regression.

Partial Least Squares (PLS) Regression. Partial Least Squares (PLS) Regression. Hervé Abdi 1 The University of Texas at Dallas Introduction Pls regression is a recent technique that generalizes and combines features from principal component

More information