AMICA: An Adaptive Mixture of Independent Component Analyzers with Shared Components


Jason A. Palmer, Ken Kreutz-Delgado, and Scott Makeig

Abstract — We derive an asymptotic Newton algorithm for Quasi Maximum Likelihood estimation of the ICA mixture model, using the ordinary gradient and Hessian. The probabilistic mixture framework can accommodate non-stationary environments and arbitrary source densities. We prove asymptotic stability when the source models match the true sources. An application to EEG segmentation is given.

Index Terms — Independent Component Analysis, Bayesian linear model, mixture model, Newton method, EEG

I. INTRODUCTION

A. Related Work

The Gaussian linear model approach is described in [1]–[3]. Non-Gaussian sources in the form of Gaussian scale mixtures, in particular Student's t distribution, were developed in [4]–[6]. A mixture of Gaussians source model was employed in [7]–[11]. Similar approaches were proposed in [12], [13]. These models generally include noise and involve computationally intensive optimization algorithms. The focus in these models is generally on variational methods of automatically determining the number of mixtures in a mixture model during the optimization procedure. There is also overlap between the variational technique used in these methods and the Gaussian scale mixture approach to representing non-Gaussian densities. A model similar to that proposed here was presented in [14]. The main distinguishing features of the proposed model are:

J. A. Palmer and S. Makeig are with the Swartz Center for Computational Neuroscience, La Jolla, CA, {jason,scott}@sccn.ucsd.edu. K. Kreutz-Delgado is with the ECE Department, Univ. of California San Diego, La Jolla, CA, kreutz@ece.ucsd.edu.

1) Mixtures of Gaussian scale mixture sources provide more flexibility than the Gaussian mixture models of [7], [11], or the fixed density models used in [14]. Accurate source density modeling is important to take advantage of Newton convergence for the true source model, as well as to distinguish between partially overlapping ICA models by posterior likelihood.

2) Implementation of the Amari Newton method described in [15] greatly improves the convergence, particularly in the multiple model case, in which pre-whitening is not possible (in general a different whitening matrix will be required for each unknown model).

3) The second derivative source density quantities are converted to first derivative quantities using integration by parts and related properties of the score function and Fisher Information Matrix. Again, accurate modeling of the source densities makes this conversion possible, and makes it robust in the presence of other (interfering) models.

The proposed model is readily extendable to MAP estimation or Variational Bayes or Ensemble Learning approaches, which put conjugate hyperpriors on the parameters. We are interested primarily in the large sample case, so we do not pursue these extensions here. The probabilistic framework can also be extended to incorporate Markov dependence of state parameters in the ICA and source mixtures. We have also extended the model to include mixtures of linear processes [16], where blind deconvolution is treated in a manner similar to [17]–[20], as well as complex ICA [21] and dependent sources [21]–[23]. In all of these contexts the adaptive source densities, asymptotic Newton method, and mixture model features can all be maintained.

II. ICA MIXTURE MODEL

In the standard linear model, observations $x(t) \in \mathbb{R}^m$, $t = 1, \ldots, N$, are modeled as linear combinations of a set of basis vectors $A \equiv [\,a_1 \cdots a_n\,]$ with random and independent coefficients $s_i(t)$, $i = 1, \ldots, n$,

$$x(t) = A s(t)$$

We assume for simplicity the noiseless case, or that the data has been pre-processed, e.g. by PCA, filtering, etc., to remove noise. The data is assumed, however, to be non-stationary, so that different linear models may be in effect at different times. Thus for each observation $x(t)$ there is a model index $h_t \in \{1, \ldots, M\}$, with corresponding complete basis set $A_h$ with center $c_h$, and a random vector of zero mean, independent sources $s(t) \sim q_h(s)$, where,

$$q_h(s) = \prod_{i=1}^n q_{ih}(s_i)$$

such that,

$$x(t) = A_h s(t) + c_h$$

with $h = h_t$. We shall assume that only one model is active at each time, and that model $h$ is active with probability $\gamma_h$. For simplicity we assume temporal independence of the model indices $h_t$, $t = 1, \ldots, N$. Since the model is conditionally linear, the conditional density of the observations is given by,

$$p(x(t) \mid h) = |\det W_h|\; q_h\big(W_h (x(t) - c_h)\big)$$

where $W_h \equiv A_h^{-1}$. The sources are taken to be mixtures of (generally non-Gaussian) Gaussian Scale Mixtures (GSMs), as in [24],

$$q_{ih}\big(s_i(t)\big) = \sum_{j=1}^m \alpha_{ijh} \sqrt{\beta_{ijh}}\; q_{ij}\big(\sqrt{\beta_{ijh}}\,(s_i(t) - \mu_{ijh});\, \rho_{ijh}\big)$$

where each $q_{ij}$ is a GSM parameterized by $\rho_{ij}$. Thus the density of the observations $X \equiv \{x(t)\}$, $t = 1, \ldots, N$, is given by,

$$p(X; \Theta) = \prod_{t=1}^N \sum_{h=1}^M \gamma_h\, p(x(t) \mid h), \qquad \gamma_h \ge 0, \quad \sum_{h=1}^M \gamma_h = 1$$

The parameters to be estimated are,

$$\Theta = \{W_h, c_h, \gamma_h, \alpha_{ijh}, \mu_{ijh}, \beta_{ijh}, \rho_{ijh}\}$$

$h = 1, \ldots, M$, $i = 1, \ldots, n$, and $j = 1, \ldots, m$.

A. Invariances in the model

Besides the accepted invariance to permutation of the component indices, invariance or redundancy in the model also exists in two other respects. The first concerns the model centers $c_h$ and the source density location parameters $\mu_{ijh}$. Specifically, we have $p(x; \Theta) = p(x; \Theta')$, $\Theta = \{\ldots, c_h, \mu_{ijh}, \ldots\}$, $\Theta' = \{\ldots, c_h', \mu_{ijh}', \ldots\}$, if,

$$c_h' = c_h + \Delta c_h, \qquad \mu_{ijh}' = \mu_{ijh} - [W_h \Delta c_h]_i, \quad j = 1, \ldots, m$$

for any $\Delta c_h$. Putting $c_h = E\{x(t) \mid h\}$, we make the sources $s(t)$ zero mean given the model. The zero mean assumption is used in the calculation of the expected Hessian for the Newton algorithm.
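To make the generative model concrete, the following is a minimal NumPy sketch (ours, for illustration; not the authors' code) that samples from the ICA mixture model above. For simplicity, the GSM components $q_{ij}$ are stood in for by Gaussian components with locations $\mu_{ijh}$ and precisions $\beta_{ijh}$; all array names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ica_mixture(N, A, c, gamma, alpha, mu, beta):
    """A: (M, m, n) bases; c: (M, m) centers; gamma: (M,) model priors;
    alpha, mu, beta: (M, n, J) mixture weights, locations, precisions."""
    M, m, n = A.shape
    h = rng.choice(M, size=N, p=gamma)        # model index h_t
    X = np.empty((N, m))
    for t in range(N):
        ht = h[t]
        s = np.empty(n)
        for i in range(n):
            # pick mixture component j for source i, then draw s_i
            j = rng.choice(alpha.shape[2], p=alpha[ht, i])
            s[i] = mu[ht, i, j] + rng.standard_normal() / np.sqrt(beta[ht, i, j])
        X[t] = A[ht] @ s + c[ht]              # x(t) = A_h s(t) + c_h
    return X, h
```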

There is also scale redundancy in the row norms of $W_h$ and the scale parameters of the source densities. Specifically, $p(x; \Theta) = p(x; \Theta')$, where $\Theta = \{W_h, \mu_{ijh}, \beta_{ijh}, \ldots\}$, $\Theta' = \{W_h', \mu_{ijh}', \beta_{ijh}', \ldots\}$, if for any $\tau_i > 0$,

$$[W_h']_{i:} = [W_h]_{i:}/\tau_i, \qquad \mu_{ijh}' = \mu_{ijh}/\tau_i, \qquad \beta_{ijh}' = \beta_{ijh}\,\tau_i^2, \quad j = 1, \ldots, m$$

where $[W_h]_{i:}$ is the $i$th row of $W_h$. We use this redundancy to enforce at each iteration that the rows of $W_h$ are unit norm by putting $\tau_i = \|[W_h]_{i:}\|$. These reparameterizations constitute the only updates for the model centers $c_h$. The centers are redundant parameters given the source means, and are used only to maintain zero posterior source mean given the model.

III. MAXIMUM LIKELIHOOD

In this section we assume that the model $h$ is given and suppress the $h$ subscript. Given i.i.d. data $X = \{x_1, \ldots, x_N\}$, we consider the ML estimate of $W = A^{-1}$. For the density of $X$, we have,

$$p(X) = \prod_{t=1}^N |\det W|\; p_s(W x_t)$$

Let $y_t = W x_t$ be the estimate of the sources $s_t$, and let $q_i(y_i)$ be the density model for the $i$th source, with $q(y_t) = \prod_i q_i(y_{it})$. We define $f_i(y_{it}) \equiv -\log q_i(y_{it})$ and $f(y_t) \equiv \sum_i f_i(y_{it})$. For the negative log likelihood of the data (which is to be minimized), we then have,

$$L(W) = -N \log|\det W| + \sum_{t=1}^N f(y_t) \tag{1}$$

The gradient of this function is proportional to,

$$\nabla L(W) \;\propto\; -W^{-T} + \frac{1}{N}\sum_{t=1}^N f'(y_t)\, x_t^T \tag{2}$$

Note that if we multiply (2) by $W^T W$ on the right (and negate), we get,

$$\Delta W = \Big(I - \frac{1}{N}\sum_{t=1}^N g_t\, y_t^T\Big) W \tag{3}$$

where $g_t \equiv f'(y_t)$.
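As an illustration, here is a minimal sketch (an assumption on our part, not the paper's implementation) of one natural gradient step (3), using a fixed Laplacian source model $f(y) = |y|$, for which $g_t = \mathrm{sign}(y_t)$:

```python
import numpy as np

def natural_gradient_step(W, X, lrate=0.1):
    """One step of equation (3). W: (n, n) unmixing matrix; X: (N, n) zero-mean data."""
    N = X.shape[0]
    Y = X @ W.T                      # rows are y_t = W x_t
    G = np.sign(Y)                   # g_t = f'(y_t) for the Laplace model
    Phi = (G.T @ Y) / N              # (1/N) sum_t g_t y_t^T
    dW = (np.eye(W.shape[0]) - Phi) @ W   # equation (3)
    return W + lrate * dW
```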

This transformation is in fact a positive definite linear transformation of the matrix gradient. Specifically, using the standard matrix inner product in $\mathbb{R}^{n \times n}$, $\langle A, B\rangle = \mathrm{tr}(AB^T)$, we have for nonzero $V$,

$$\langle V,\, V W^T W \rangle = \langle V W^T,\, V W^T \rangle > 0 \tag{4}$$

when $W$ is full rank. The direction (3) is known as the natural gradient [25].

A. Hessian

Denote the gradient (2) by $G$ with elements $g_{ij}$, each a function of $W$. Taking the derivative of (2), we find,

$$\frac{\partial g_{ij}}{\partial w_{kl}} = [W^{-1}]_{li}\,[W^{-1}]_{jk} + \Big\langle f_i''\big([W x_t]_k\big)\, x_{jt}\, x_{lt} \Big\rangle_N\, \delta_{ik}$$

where $\delta_{ik}$ is the Kronecker delta symbol, and $\langle\cdot\rangle_N$ denotes the empirical average $\frac{1}{N}\sum_{t=1}^N$. To see how this linear Hessian operator transforms an argument $B$, let $C = H(B)$ be the transformed matrix. Then we calculate,

$$c_{ij} = \sum_{k,l} [W^{-1}]_{li}\,[W^{-1}]_{jk}\, b_{kl} + \Big\langle f_i''(y_{it})\, x_{jt} \sum_l b_{il}\, x_{lt} \Big\rangle_N$$

The first term of $c_{ij}$ can be written,

$$\sum_{k,l} [W^{-1}]_{li}\,[W^{-1}]_{jk}\, b_{kl} = [W^{-T}]_{i:}\,\big[B^T W^{-T}\big]_{:j} = \big[W^{-T} B^T W^{-T}\big]_{ij}$$

Writing the second term in matrix form as well, we have for the linear transformation $C = H(B)$,

$$C = W^{-T} B^T W^{-T} + \big\langle \mathrm{diag}\big(f''(y_t)\big)\, B\, x_t x_t^T \big\rangle_N \tag{5}$$

where $\mathrm{diag}(f''(y_t))$ is the diagonal matrix with diagonal elements $f_i''(y_{it})$. This equation can be simplified as follows. First, let us rewrite the transformation (5) in terms of the source estimates $y$. We first write,

$$C = (B W^{-1})^T W^{-T} + \big\langle \mathrm{diag}\big(f''(y_t)\big)\, B W^{-1} y_t y_t^T \big\rangle_N\, W^{-T}$$

Now if we define $\tilde{C} \equiv C W^T$ and $\tilde{B} \equiv B W^{-1}$, then we have,

$$\tilde{C} = \tilde{B}^T + \big\langle \mathrm{diag}\big(f''(y_t)\big)\, \tilde{B}\, y_t y_t^T \big\rangle_N \tag{6}$$

At the optimum, we may assume that the source density models $q_i(y_i)$ are equivalent to the true source densities $p_i(s_i)$. Writing (6) in component form and letting $N$ go to infinity, we find for the diagonal elements,

$$[\tilde{C}]_{ii} = [\tilde{B}]_{ii} + E\Big\{f_i''(y_i) \sum_k [\tilde{B}]_{ik}\, y_k y_i\Big\} = (1 + \eta_i)\,[\tilde{B}]_{ii} \tag{7}$$

where we define $\eta_i \equiv E\{f_i''(y_i)\, y_i^2\}$. The cross terms drop out since the expected value of $f_i''(y_i)\, y_i y_k$ is zero for $k \neq i$ by the independence and zero mean assumption on the sources. Now we note, as in [15], [26], [27], that the off-diagonal elements of the equation (6) can be paired as follows,

$$[\tilde{C}]_{ij} = [\tilde{B}]_{ji} + E\Big\{f_i''(y_i) \sum_k [\tilde{B}]_{ik}\, y_k y_j\Big\} = [\tilde{B}]_{ji} + \kappa_i \sigma_j^2\, [\tilde{B}]_{ij}$$
$$[\tilde{C}]_{ji} = [\tilde{B}]_{ij} + E\Big\{f_j''(y_j) \sum_k [\tilde{B}]_{jk}\, y_k y_i\Big\} = [\tilde{B}]_{ij} + \kappa_j \sigma_i^2\, [\tilde{B}]_{ji}$$

where we define $\kappa_i \equiv E\{f_i''(y_i)\}$ and $\sigma_i^2 \equiv E\{y_i^2\}$. Again the cross terms drop out due to the expectation of independent zero mean random variables. Putting these equations in matrix form, we have,

$$\begin{bmatrix} [\tilde{C}]_{ij} \\ [\tilde{C}]_{ji} \end{bmatrix} = \begin{bmatrix} \kappa_i \sigma_j^2 & 1 \\ 1 & \kappa_j \sigma_i^2 \end{bmatrix} \begin{bmatrix} [\tilde{B}]_{ij} \\ [\tilde{B}]_{ji} \end{bmatrix} \tag{8}$$

If we denote the linear transformation defined by equations (7) and (8) by $\tilde{C} = \tilde{H}(\tilde{B})$, then we have,

$$C = H(B) = \tilde{H}\big(B W^{-1}\big)\, W^{-T} \tag{9}$$

Thus by an argument similar to (4), we see that $H$ is asymptotically positive definite if and only if $\tilde{H}$ is asymptotically positive definite and $W$ is full rank. The conditions for positive definiteness of $\tilde{H}$ can be found by inspection of equations (7) and (8). With the definitions,

$$\eta_i \equiv E\{y_i^2 f_i''(y_i)\}, \qquad \kappa_i \equiv E\{f_i''(y_i)\}, \qquad \sigma_i^2 \equiv E\{y_i^2\}$$

the conditions can be stated [15] as,

1) $1 + \eta_i > 0$ for all $i$,
2) $\kappa_i > 0$ for all $i$, and,
3) $\kappa_i \kappa_j \sigma_i^2 \sigma_j^2 - 1 > 0$ for all $i \neq j$.
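These conditions can be checked empirically from the source estimates. The sketch below (ours, not from the paper) estimates $\eta_i$, $\kappa_i$, and $\sigma_i^2$ by sample averages for the smooth source model $f(y) = \log\cosh(y)$, for which $f''(y) = 1 - \tanh^2(y)$:

```python
import numpy as np

def stability_conditions(Y):
    """Y: (N, n) source estimates. Returns booleans for conditions 1)-3)."""
    fpp = 1.0 - np.tanh(Y) ** 2              # f''(y) for f(y) = log cosh(y)
    eta = np.mean(Y ** 2 * fpp, axis=0)      # eta_i = E{y_i^2 f_i''(y_i)}
    kappa = np.mean(fpp, axis=0)             # kappa_i = E{f_i''(y_i)}
    sigma2 = np.mean(Y ** 2, axis=0)         # sigma_i^2 = E{y_i^2}
    ks = kappa * sigma2
    pair = np.outer(ks, ks) - 1.0            # kappa_i kappa_j sigma_i^2 sigma_j^2 - 1
    off = ~np.eye(len(ks), dtype=bool)       # condition 3) applies for i != j only
    return np.all(1.0 + eta > 0), np.all(kappa > 0), np.all(pair[off] > 0)
```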

B. Asymptotic stability

Using integration by parts, it can be shown that the stability conditions are always satisfied when $f(y) = -\log p(y)$, i.e. when $q(y)$ matches the true source density $p(y)$. Specifically, we have the following.

Theorem 1: If $f_i(y_i) \equiv -\log q_i(y_i) = -\log p_i(y_i)$, with $\int p_i(y)\,dy = 1$, $i = 1, \ldots, n$, i.e. the source density models match the true source densities, and $p_i(y)$ is twice differentiable with $E\{f_i''(y)\}$ and $E\{y_i^2\}$ finite, $i = 1, \ldots, n$, and at most one source is Gaussian, then the stability conditions hold.

Proof: For the first condition, we use integration by parts to evaluate,

$$E\{y^2 f''(y)\} = \int y^2 f''(y)\, p(y)\, dy$$

with $u = y^2 p(y)$ and $dv = f''(y)\,dy$. Using the fact that $v = f'(y) = -p'(y)/p(y)$, we get,

$$-y^2 p'(y)\Big|_{-\infty}^{\infty} - \int f'(y)\big(2y - y^2 f'(y)\big)\, p(y)\, dy \tag{10}$$

The first term in (10) is zero if $p'(y) = o(1/y^2)$ as $y \to \pm\infty$. This must be the case for integrable $p(y)$, since otherwise we would have $p'(y) \sim C/y^2$, and $p(y) = O(1/y)$ and non-integrable. Then, since $\int p(y)\,dy = 1$, we have,

$$1 + E\{y^2 f''(y)\} = \int \big(y^2 f'(y)^2 - 2y f'(y) + 1\big)\, p(y)\, dy = E\Big\{\big(y f'(y) - 1\big)^2\Big\} \ge 0$$

where equality holds only if $p(y) \propto 1/y$, so strict inequality must hold for integrable $p(y)$.

For the second condition, $E\{f''(y)\} > 0$: using integration by parts with $u = p(y)$, $dv = f''(y)\,dy$, and the fact that $p'(y)$ must tend to $0$ as $y \to \pm\infty$ for integrable $p(y)$, we get,

$$E\{f''(y)\} = \int f'(y)^2\, p(y)\, dy = E\{f'(y)^2\} > 0$$

Finally, for the third condition, we have,

$$E\{y^2\}\, E\{f''(y)\} = E\{y^2\}\, E\{f'(y)^2\} \ge \big(E\{y f'(y)\}\big)^2 = 1$$

by the Cauchy–Schwarz inequality, using $E\{y f'(y)\} = -\int y\, p'(y)\, dy = 1$ (again by integration by parts), with equality only for $f'(y) \propto y$, i.e. $p(y)$ Gaussian. Thus,

$$E\{y_i^2\}\, E\{f_i''(y_i)\}\; E\{y_j^2\}\, E\{f_j''(y_j)\} > 1$$

whenever at least one of $y_i$ and $y_j$ is non-Gaussian.

C. Newton method

The inverse of the Hessian operator, from (9), will be given by,

$$B = H^{-1}(C) = \tilde{H}^{-1}\big(C W^T\big)\, W \tag{11}$$

The calculation of $\tilde{B} = \tilde{H}^{-1}(\tilde{C})$ is easily carried out by inverting the transformations (7) and (8),

$$[\tilde{B}]_{ii} = \frac{[\tilde{C}]_{ii}}{1 + \eta_i}, \quad i = 1, \ldots, n \tag{12}$$

$$[\tilde{B}]_{ij} = \frac{\kappa_j \sigma_i^2\, [\tilde{C}]_{ij} - [\tilde{C}]_{ji}}{\kappa_i \kappa_j \sigma_i^2 \sigma_j^2 - 1}, \quad i \neq j \tag{13}$$

The Newton direction is given by taking $C = -G$, the negative of the gradient (2),

$$\Delta W = -\tilde{H}^{-1}\big(G W^T\big)\, W \tag{14}$$

Let,

$$\Phi \equiv \frac{1}{N} \sum_{t=1}^N g_t\, y_t^T \tag{15}$$

We have $G W^T = \Phi - I$. If we let $\tilde{B} = \tilde{H}^{-1}\big({-G W^T}\big) = \tilde{H}^{-1}(I - \Phi)$, then,

$$[\tilde{B}]_{ii} = \frac{1 - [\Phi]_{ii}}{1 + \eta_i}, \quad i = 1, \ldots, n \tag{16}$$

$$[\tilde{B}]_{ij} = \frac{[\Phi]_{ji} - \kappa_j \sigma_i^2\, [\Phi]_{ij}}{\kappa_i \kappa_j \sigma_i^2 \sigma_j^2 - 1}, \quad i \neq j \tag{17}$$

Then,

$$\Delta W = \tilde{B}\, W \tag{18}$$
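Putting (15)–(18) together, here is a minimal single-model sketch of the asymptotic Newton direction (an illustration under the same tanh source-model assumption as above, not the authors' code):

```python
import numpy as np

def newton_direction(W, X):
    """Asymptotic Newton direction (15)-(18) for the log-cosh source model."""
    N, n = X.shape
    Y = X @ W.T
    G = np.tanh(Y)                        # g_t = f'(y_t)
    fpp = 1.0 - G ** 2                    # f''(y_t)
    Phi = (G.T @ Y) / N                   # equation (15)
    eta = np.mean(fpp * Y ** 2, axis=0)
    kappa = np.mean(fpp, axis=0)
    sigma2 = np.mean(Y ** 2, axis=0)
    B = np.zeros((n, n))
    B[np.diag_indices(n)] = (1.0 - np.diag(Phi)) / (1.0 + eta)      # (16)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = kappa[i] * kappa[j] * sigma2[i] * sigma2[j] - 1.0
                B[i, j] = (Phi[j, i] - kappa[j] * sigma2[i] * Phi[i, j]) / d  # (17)
    return B @ W                          # Delta W, equation (18)
```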

IV. CRAMÉR–RAO LOWER BOUND

By the theorem on asymptotic efficiency of Maximum Likelihood estimation, the asymptotic and minimum error covariance of the estimate of $W$ is given by the inverse Fisher Information matrix. Also, since the asymptotic distribution is Gaussian, we can determine the asymptotic distribution of linear transformations of $\hat{W}$. In particular, we consider the asymptotic distribution of $C \equiv \hat{W} A$, where $A = W^{-1}$ is the true (unknown) mixing matrix. Remarkably, we find that this asymptotic distribution does not depend on $A$, but only on the source density statistics. Specifically, the inverse Fisher information matrix is $\tilde{H}^{-1}$, and the asymptotic minimum error covariance in the estimate is $N^{-1} \tilde{H}^{-1}$. The minimum error covariance matrix for the pair of off-diagonals $\hat{c}_{ij}$ and $\hat{c}_{ji}$ with a sample size $N$ is given by,

$$\frac{1}{N}\; \frac{1}{\kappa_i \kappa_j \sigma_i^2 \sigma_j^2 - 1} \begin{bmatrix} \kappa_j \sigma_i^2 & 1 \\ 1 & \kappa_i \sigma_j^2 \end{bmatrix}$$

In particular, the asymptotic distribution of $C$ is Normal with mean $I$ and variances,

$$E\{\hat{c}_{ij}^2\} \approx \frac{1}{N}\; \frac{\kappa_j \sigma_i^2}{\kappa_i \kappa_j \sigma_i^2 \sigma_j^2 - 1}, \qquad E\{(\hat{c}_{ii} - 1)^2\} \approx \frac{1}{N}\; \frac{1}{1 + \eta_i}$$

For the Generalized Gaussian density, we have,

$$\kappa_i = \frac{\rho_i^2\, \Gamma(2 - 1/\rho_i)}{\Gamma(1/\rho_i)}, \qquad \sigma_i^2 = \frac{\Gamma(3/\rho_i)}{\Gamma(1/\rho_i)}, \qquad \eta_i = \rho_i - 1$$

V. EM PARAMETER UPDATES

We define $h_t$ to be the random variable denoting the index of the model chosen at time $t$, producing the observation $x(t)$, and define the random variables $v_{ht}$,

$$v_{ht} = \begin{cases} 1, & h_t = h \\ 0, & \text{otherwise} \end{cases}$$

We define $j_{hti}$ to be the random variable indicating the source density mixture component index that is chosen at time $t$ (independently of $h_t$) for the $i$th source of the $h$th model, and we define the random variables $u_{htij}$ by,

$$u_{htij} = \begin{cases} 1, & j_{hti} = j \text{ and } h_t = h \\ 0, & \text{otherwise} \end{cases}$$

We employ the EM algorithm by writing the density of $X$ as a marginal sum over the complete data, which includes $U$ and $V$,

$$p(X; \Theta) = \sum_V \prod_{t=1}^N \prod_{h=1}^M P_{ht}^{\,v_{ht}} = \sum_V \prod_{t=1}^N \exp\Big(\sum_{h=1}^M v_{ht} L_{ht}\Big) = \sum_{U,V} \prod_{t=1}^N \exp\bigg(\sum_{h=1}^M v_{ht} \Big(\log \gamma_h + \log|\det W_h| + \sum_{i=1}^n \sum_{j=1}^m u_{htij}\, Q_{htij}\Big)\bigg)$$

where we make the definitions,

$$b_{ht} \equiv W_h (x_t - c_h), \qquad y_{htij} \equiv \sqrt{\beta_{k_i j}}\,\big([b_{ht}]_i - \mu_{k_i j}\big)$$

$$\exp(Q_{htij}) \equiv \alpha_{k_i j} \sqrt{\beta_{k_i j}}\; q_{k_i j}\big(y_{htij}\big), \qquad P_{ht} \equiv \gamma_h\, |\det W_h| \prod_{i=1}^n \sum_{j=1}^m \exp(Q_{htij}) \equiv \exp(L_{ht})$$

where $k_i$ denotes the index of the (possibly shared) source density component assigned to source $i$ of model $h$. The posterior expectation $\hat{v}_{ht}$ is given by,

$$\hat{v}_{ht} = E\{v_{ht} \mid x_t; \Theta\} = P\big[v_{ht} = 1 \mid x_t; \Theta\big] = \frac{P_{ht}}{\sum_{h'=1}^M P_{h't}} = \frac{\exp(L_{ht})}{\sum_{h'=1}^M \exp(L_{h't})} \tag{19}$$

For the $\hat{u}_{htij}$, we have,

$$\hat{u}_{htij} = P\big[u_{htij} = 1 \mid x_t; \Theta\big] = P\big[u_{htij} = 1 \mid v_{ht} = 1, x_t; \Theta\big]\; P\big[v_{ht} = 1 \mid x_t; \Theta\big] = \hat{z}_{htij}\, \hat{v}_{ht} \tag{20}$$

where $\hat{z}_{htij} \equiv E\{u_{htij} \mid v_{ht} = 1, x_t; \Theta\}$:

$$\hat{z}_{htij} = \frac{\exp(Q_{htij})}{\sum_{j'=1}^m \exp(Q_{htij'})} \tag{21}$$
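The E-step quantities (19)–(21) are conveniently computed in the log domain. A minimal sketch (ours; the array layouts are assumptions) using SciPy's log-sum-exp, with `L[h, t]` holding $L_{ht}$ and `Q[h, t, i, j]` holding $Q_{htij}$ as defined above:

```python
import numpy as np
from scipy.special import logsumexp

def e_step(L, Q):
    """L: (M, N) per-model log joints; Q: (M, N, n, J) per-component logs."""
    v_hat = np.exp(L - logsumexp(L, axis=0, keepdims=True))      # (19)
    z_hat = np.exp(Q - logsumexp(Q, axis=3, keepdims=True))      # (21)
    u_hat = z_hat * v_hat[:, :, None, None]                      # (20)
    return v_hat, u_hat, z_hat
```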

The function to be maximized in the EM algorithm is then,

$$\sum_{t=1}^N \sum_{h=1}^M \bigg[ \hat{v}_{ht}\big(\log \gamma_h + \log|\det W_h|\big) + \sum_{i=1}^n \sum_{j=1}^m \hat{u}_{htij}\Big(\log \alpha_{k_i j} + \tfrac{1}{2}\log \beta_{k_i j} - f_{k_i j}(y_{htij})\Big) \bigg]$$

where $f_{kj} \equiv -\log q_{kj}$. Maximizing with respect to $\gamma_h$ subject to $\gamma_h \ge 0$, $\sum_h \gamma_h = 1$, we get,

$$\gamma_h^{\ell+1} = \frac{1}{N} \sum_{t=1}^N \hat{v}_{ht} \tag{22}$$

We can then rearrange the likelihood as,

$$\sum_{h=1}^M \gamma_h^{\ell+1} \log|\det W_h| + \sum_{k=1}^n \sum_{h\,:\, i_{kh} > 0} \sum_{t=1}^N \sum_{j=1}^m \hat{u}_{h t i_{kh} j}\Big(\log \alpha_{kj} + \tfrac{1}{2}\log \beta_{kj} - f_{kj}\big(y_{h t i_{kh} j}\big)\Big)$$

where $i_{kh}$ is the index of the source of model $h$ that uses shared component $k$ (with $i_{kh} = 0$ if no source of model $h$ uses it). Maximizing with respect to $\alpha_{kj}$ subject to $\alpha_{kj} \ge 0$, $\sum_j \alpha_{kj} = 1$, we get,

$$\alpha_{kj}^{\ell+1} = \frac{\sum_{h : i_{kh} > 0} \sum_{t=1}^N \hat{v}_{ht}\, \hat{z}_{h t i_{kh} j}}{\sum_{h : i_{kh} > 0} \sum_{t=1}^N \hat{v}_{ht}}$$

We define the following expectations conditioned on the model $h$, in which $G_t$ are arbitrary functions of $x_t$,

$$\hat{E}_v\{G_t \mid h\} \equiv \frac{\sum_t \hat{v}_{ht}\, G_t}{\sum_t \hat{v}_{ht}}, \qquad \hat{E}_u\{G_t \mid h, j\} \equiv \frac{\sum_t \hat{u}_{htij}\, G_t}{\sum_t \hat{u}_{htij}}$$

We also define the following, conditioned on the component $k$, in which $G_{ht}$ are arbitrary functions of $x_t$ and the parameters of an arbitrary model $h$,

$$\hat{E}_v\{G_{ht} \mid k\} \equiv \frac{\sum_{h : i_{kh} > 0} \sum_t \hat{v}_{ht}\, G_{ht}}{\sum_{h : i_{kh} > 0} \sum_t \hat{v}_{ht}}, \qquad \hat{E}_u\{G_{ht} \mid k, j\} \equiv \frac{\sum_{h : i_{kh} > 0} \sum_t \hat{u}_{h t i_{kh} j}\, G_{ht}}{\sum_{h : i_{kh} > 0} \sum_t \hat{u}_{h t i_{kh} j}}$$

Now we can rearrange the likelihood as,

$$\sum_{h=1}^M \gamma_h^{\ell+1} \log|\det W_h| + \sum_{k=1}^n \zeta_k^{\ell+1} \sum_{j=1}^m \alpha_{kj}^{\ell+1} \Big( \tfrac{1}{2}\log \beta_{kj} - \hat{E}_u\Big\{ f_{kj}\big(\sqrt{\beta_{kj}}\,([b_{ht}]_{i_{kh}} - \mu_{kj})\big) \,\Big|\, k, j \Big\} \Big) \tag{23}$$
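The weight updates have simple closed forms given the posteriors. A minimal sketch (ours) of (22) and the per-model mixture weight update, ignoring component sharing (i.e., each component belongs to a single model, so the sums over $h : i_{kh} > 0$ collapse to one term):

```python
import numpy as np

def m_step_weights(v_hat, u_hat):
    """v_hat: (M, N); u_hat: (M, N, n, J). Returns gamma: (M,), alpha: (M, n, J)."""
    gamma = v_hat.mean(axis=1)                                    # (22)
    alpha = u_hat.sum(axis=1) / v_hat.sum(axis=1)[:, None, None]  # sum_t u / sum_t v
    return gamma, alpha
```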

where we define $\zeta_k \equiv \sum_{h : i_{kh} > 0} \gamma_h$ to be the total probability of a context containing component $k$,

$$\zeta_k^{\ell+1} \equiv \sum_{h : i_{kh} > 0} \gamma_h^{\ell+1} \tag{24}$$

If the source mixture component densities are strongly super-Gaussian, then we can maximize the surrogate likelihood,

$$\sum_{h=1}^M \gamma_h^{\ell+1} \log|\det W_h| + \sum_{k=1}^n \zeta_k^{\ell+1} \sum_{j=1}^m \alpha_{kj}^{\ell+1} \Big( \tfrac{1}{2}\log \beta_{kj} - \tfrac{1}{2}\beta_{kj}\, \hat{E}_u\Big\{ \xi_{h t i_{kh} j}\, \big([b_{ht}]_{i_{kh}} - \mu_{kj}\big)^2 \,\Big|\, k, j \Big\} \Big)$$

where,

$$\xi_{h t i_{kh} j} \equiv \frac{f_{kj}'\big(y_{h t i_{kh} j}\big)}{y_{h t i_{kh} j}}$$

The location and scale parameter updates are then given by,

$$\mu_{kj}^{\ell+1} = \frac{\hat{E}_u\big\{\xi_{h t i_{kh} j}\, [b_{ht}]_{i_{kh}} \,\big|\, k, j\big\}}{\hat{E}_u\big\{\xi_{h t i_{kh} j} \,\big|\, k, j\big\}} = \mu_{kj} + \frac{1}{\sqrt{\beta_{kj}}}\; \frac{\hat{E}_u\big\{f_{kj}'(y_{h t i_{kh} j}) \,\big|\, k, j\big\}}{\hat{E}_u\big\{f_{kj}'(y_{h t i_{kh} j})/y_{h t i_{kh} j} \,\big|\, k, j\big\}} \tag{25}$$

and,

$$\beta_{kj}^{\ell+1} = \frac{1}{\hat{E}_u\big\{\xi_{h t i_{kh} j}\, ([b_{ht}]_{i_{kh}} - \mu_{kj})^2 \,\big|\, k, j\big\}} = \frac{\beta_{kj}}{\hat{E}_u\big\{f_{kj}'(y_{h t i_{kh} j})\, y_{h t i_{kh} j} \,\big|\, k, j\big\}} \tag{26}$$

The Generalized Gaussian shape parameters are updated by,

$$\rho_{kj}^{\ell+1} = \frac{1}{\big(\rho_{kj}/\Psi(1 + 1/\rho_{kj})\big)\; \hat{E}_u\big\{ |y_{h t i_{kh} j}|^{\rho_{kj}} \log |y_{h t i_{kh} j}|^{\rho_{kj}} \,\big|\, k, j \big\}} \tag{27}$$

or, if $\rho_{kj} > 2$,

$$\rho_{kj}^{\ell+1} = \frac{\Psi(1 + 1/\rho_{kj})}{\rho_{kj}\; \hat{E}_u\big\{ |y_{h t i_{kh} j}|^{\rho_{kj}} \log |y_{h t i_{kh} j}|^{\rho_{kj}} \,\big|\, k, j \big\}} \tag{28}$$
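For a concrete instance of the location and scale updates (25)–(26), the sketch below (ours, not the authors' code) uses Laplacian components, for which $\xi = f'(y)/y = 1/|y|$, and treats a single model without sharing; `eps` is a hypothetical guard against division by zero:

```python
import numpy as np

def update_mu_beta(b, u_hat, mu, beta, eps=1e-8):
    """b: (N, n) centered sources of one model; u_hat: (N, n, J); mu, beta: (n, J)."""
    y = np.sqrt(beta)[None] * (b[:, :, None] - mu[None])    # y_tij
    xi = 1.0 / (np.abs(y) + eps)                            # f'(y)/y for Laplace
    w = u_hat / (u_hat.sum(axis=0, keepdims=True) + eps)    # normalized E_u weights
    mu_new = (w * xi * b[:, :, None]).sum(0) / ((w * xi).sum(0) + eps)          # (25)
    beta_new = 1.0 / ((w * xi * (b[:, :, None] - mu[None]) ** 2).sum(0) + eps)  # (26)
    return mu_new, beta_new
```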

A. ICA mixture model Newton updates

Since $F$ is an additive function of the $W_h$, the Newton updates can be considered separately for each model. The cost function for $W_h$ is,

$$-\log|\det W_h| + \hat{E}_v\Big\{ \sum_{i=1}^n \sum_{j=1}^m \hat{z}_{htij}\, f_{k_i j}(y_{htij}) \,\Big|\, h \Big\}$$

The gradient of this function is,

$$-W_h^{-T} + \hat{E}_v\big\{ g_{ht}\, (x_t - c_h)^T \,\big|\, h \big\} \tag{29}$$

where $g_{ht}$ is defined by,

$$[g_{ht}]_i \equiv \sum_{j=1}^m \hat{z}_{htij}\, \sqrt{\beta_{k_i j}}\; f_{k_i j}'\big(y_{htij}\big) \tag{30}$$

Denote the matrix gradient (29) by $G_h$. Taking the derivative of $[G_h]_{i\nu}$ with respect to $[W_h]_{k\lambda}$, we get,

$$\frac{\partial [G_h]_{i\nu}}{\partial [W_h]_{k\lambda}} = \big[W_h^{-1}\big]_{\lambda i}\big[W_h^{-1}\big]_{\nu k} + \delta_{ik} \sum_{j=1}^m \beta_{k_i j}\, \hat{E}_v\big\{ \hat{z}_{htij}\, f_{k_i j}''(y_{htij})\, (x_{\nu t} - [c_h]_\nu)(x_{\lambda t} - [c_h]_\lambda) \,\big|\, h \big\}$$

For the linear transformation $C = H(B)$, we have,

$$C = W_h^{-T} B^T W_h^{-T} + \hat{E}_v\big\{ D_{ht}\, B\, (x_t - c_h)(x_t - c_h)^T \,\big|\, h \big\} \tag{31}$$

where $D_{ht}$ is the diagonal matrix with diagonal elements,

$$[D_{ht}]_{ii} = \sum_{j=1}^m \hat{z}_{htij}\, \beta_{k_i j}\, f_{k_i j}''\big(y_{htij}\big) \tag{32}$$

To simplify the calculation of the asymptotic value of the Hessian, we rewrite the second term on the right hand side of (31) as,

$$\hat{E}_v\big\{ D_{ht}\, B W_h^{-1}\, b_{ht} b_{ht}^T \,\big|\, h \big\}\, W_h^{-T}$$

If we define $\tilde{C} \equiv C W_h^T$ and $\tilde{B} \equiv B W_h^{-1}$, then we have,

$$\tilde{C} = \tilde{B}^T + \hat{E}_v\big\{ D_{ht}\, \tilde{B}\, b_{ht} b_{ht}^T \,\big|\, h \big\} \tag{33}$$

Now we write the $i$th row of the second term in (33) as,

$$\sum_{j=1}^m \beta_{k_i j}\, \hat{E}_v\big\{ \hat{z}_{htij}\, f_{k_i j}''(y_{htij})\, [\tilde{B} b_{ht}]_i\, b_{ht}^T \,\big|\, h \big\}$$

Since $b_{ht}$ is zero mean given model $h$, the Hessian matrix reduces to a $2 \times 2$ block diagonal form as in the single model case. In the multiple model case we get,

$$\eta_{hi} \equiv \sum_{j=1}^m \bar{\alpha}_{hij}^{\ell+1}\, \beta_{k_i j}\, \hat{E}_u\big\{ f_{k_i j}''(y_{htij})\, [b_{ht}]_i^2 \,\big|\, h, j \big\}$$
$$\kappa_{hi} \equiv \sum_{j=1}^m \bar{\alpha}_{hij}^{\ell+1}\, \beta_{k_i j}\, \hat{E}_u\big\{ f_{k_i j}''(y_{htij}) \,\big|\, h, j \big\}$$
$$\sigma_{hi}^2 \equiv \hat{E}_v\big\{ [b_{ht}]_i^2 \,\big|\, h \big\}$$

where $\bar{\alpha}_{hij}^{\ell+1} = \hat{E}_v\{\hat{z}_{htij} \mid h\}$ (summed weighted only by the model likelihood). If we define,

$$\eta_{hij} \equiv \hat{E}_u\big\{ f_{k_i j}''(y_{htij})\, y_{htij}^2 \,\big|\, h, j \big\}, \qquad \kappa_{hij} \equiv \hat{E}_u\big\{ f_{k_i j}''(y_{htij}) \,\big|\, h, j \big\} \tag{34}$$

or, using integration by parts to rewrite the integrals,

$$\lambda_{hij} \equiv 1 + \eta_{hij} = \hat{E}_u\big\{ \big(f_{k_i j}'(y_{htij})\, y_{htij} - 1\big)^2 \,\big|\, h, j \big\}, \qquad \kappa_{hij} = \hat{E}_u\big\{ f_{k_i j}'(y_{htij})^2 \,\big|\, h, j \big\}$$

then the expressions can be simplified to the following,

$$\lambda_{hi} = \sum_{j=1}^m \bar{\alpha}_{hij}^{\ell+1} \big( \lambda_{hij} + \beta_{k_i j}\, \kappa_{hij}\, \mu_{k_i j}^2 \big), \qquad \kappa_{hi} = \sum_{j=1}^m \bar{\alpha}_{hij}^{\ell+1}\, \beta_{k_i j}\, \kappa_{hij}, \qquad \sigma_{hi}^2 = \hat{E}_v\big\{ [b_{ht}]_i^2 \,\big|\, h \big\}$$

Define,

$$\Phi_h \equiv \hat{E}_v\big\{ g_{ht}\, b_{ht}^T \,\big|\, h \big\} \tag{35}$$

We have $G_h W_h^T = \Phi_h - I$. If we let,

$$\tilde{B} = \tilde{H}^{-1}\big({-G_h W_h^T}\big) = \tilde{H}^{-1}\big(I - \Phi_h\big)$$

then we have,

$$[\tilde{B}]_{ii} = \frac{1 - [\Phi_h]_{ii}}{\lambda_{hi}}, \quad i = 1, \ldots, n \tag{36}$$

$$[\tilde{B}]_{ij} = \frac{[\Phi_h]_{ji} - \kappa_{hj} \sigma_{hi}^2\, [\Phi_h]_{ij}}{\kappa_{hi} \kappa_{hj} \sigma_{hi}^2 \sigma_{hj}^2 - 1}, \quad i \neq j \tag{37}$$

Then,

$$\Delta W_h = \tilde{B}\, W_h \tag{38}$$

The log likelihood of $\Theta$ given $X$ is calculated as,

$$L(\Theta \mid X) = \sum_{t=1}^N \log\bigg( \sum_{h=1}^M \exp(L_{ht}) \bigg) \tag{39}$$

VI. EXPERIMENTS

VII. CONCLUSION

REFERENCES

[1] M. E. Tipping and C. M. Bishop, "Probabilistic principal component analysis," J. R. Stat. Soc. Ser. B Stat. Methodol., vol. 61, no. 3, 1999.
[2] M. E. Tipping and C. M. Bishop, "Mixtures of probabilistic principal component analyzers," Neural Computation, vol. 11, 1999.

[3] S. Roweis and Z. Ghahramani, "A unifying review of linear Gaussian models," Neural Computation, vol. 11, no. 2, 1999.
[4] D. J. C. MacKay, "Comparison of approximate methods for handling hyperparameters," Neural Computation, vol. 11, no. 5, 1999.
[5] M. E. Tipping, "Sparse Bayesian learning and the Relevance Vector Machine," Journal of Machine Learning Research, vol. 1, 2001.
[6] M. E. Tipping and N. D. Lawrence, "Variational inference for Student's t models: Robust Bayesian interpolation and generalised component analysis," Neurocomputing, vol. 69, 2005.
[7] H. Attias, "Independent factor analysis," Neural Computation, vol. 11, 1999.
[8] H. Attias, "A variational Bayesian framework for graphical models," in Advances in Neural Information Processing Systems 12, MIT Press, 2000.
[9] Z. Ghahramani and M. J. Beal, "Variational inference for Bayesian mixtures of factor analysers," in Advances in Neural Information Processing Systems 12, MIT Press, 2000.
[10] H. Lappalainen, "Ensemble learning for independent component analysis," in Proceedings of the First International Workshop on Independent Component Analysis, 1999.
[11] R. A. Choudrey and S. J. Roberts, "Variational mixture of Bayesian independent component analysers," Neural Computation, vol. 15, no. 1, 2003.
[12] J. W. Miskin, Ensemble Learning for Independent Component Analysis, Ph.D. thesis, University of Cambridge, 2000.
[13] K. Chan, T.-W. Lee, and T. J. Sejnowski, "Variational learning of clusters of undercomplete nonsymmetric independent components," Journal of Machine Learning Research, vol. 3, 2002.
[14] T.-W. Lee, M. S. Lewicki, and T. J. Sejnowski, "ICA mixture models for unsupervised classification of non-Gaussian classes and automatic context switching in blind signal separation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 10, 2000.
[15] S.-I. Amari, T.-P. Chen, and A. Cichocki, "Stability analysis of learning algorithms for blind source separation," Neural Networks, vol. 10, no. 8, 1997.
[16] J. A. Palmer, Variational and Scale Mixture Representations of Non-Gaussian Densities for Estimation in the Bayesian Linear Model, Ph.D. thesis, University of California San Diego, 2006. Available at http://sccn.ucsd.edu/~jason.
[17] H. Attias and C. E. Schreiner, "Blind source separation and deconvolution: The dynamic component analysis algorithm," Neural Computation, vol. 10, 1998.
[18] S. C. Douglas, A. Cichocki, and S. Amari, "Multichannel blind separation and deconvolution of sources with arbitrary distributions," in Proc. IEEE Workshop on Neural Networks for Signal Processing, Amelia Island Plantation, FL, 1997.
[19] D. T. Pham, "Mutual information approach to blind separation of stationary sources," IEEE Trans. Information Theory, vol. 48, no. 7, 2002.
[20] A. M. Bronstein, M. M. Bronstein, and M. Zibulevsky, "Relative optimization for blind deconvolution," IEEE Transactions on Signal Processing, vol. 53, no. 6, 2005.
[21] T. Kim, H. Attias, S.-Y. Lee, and T.-W. Lee, "Blind source separation exploiting higher-order frequency dependencies," IEEE Transactions on Speech and Audio Processing, vol. 15, no. 1, 2007.

[22] A. Hyvärinen, P. O. Hoyer, and M. Inki, "Topographic independent component analysis," Neural Computation, vol. 13, no. 7, 2001.
[23] T. Eltoft, T. Kim, and T.-W. Lee, "Multivariate scale mixture of Gaussians modeling," in Proceedings of the 6th International Conference on Independent Component Analysis, J. Rosca et al., Eds., Lecture Notes in Computer Science, Springer-Verlag, 2006.
[24] J. A. Palmer, K. Kreutz-Delgado, and S. Makeig, "Super-Gaussian mixture source model for ICA," in Proceedings of the 6th International Conference on Independent Component Analysis, J. Rosca et al., Eds., Lecture Notes in Computer Science, Springer-Verlag, 2006.
[25] S.-I. Amari, "Natural gradient works efficiently in learning," Neural Computation, vol. 10, no. 2, 1998.
[26] J.-F. Cardoso and B. H. Laheld, "Equivariant adaptive source separation," IEEE Trans. Sig. Proc., vol. 44, no. 12, 1996.
[27] D. T. Pham and P. Garat, "Blind separation of mixture of independent sources through a quasi-maximum likelihood approach," IEEE Trans. Signal Processing, vol. 45, no. 7, 1997.
