Algorithms for inverse big data problems


1 Algorithms for inverse big data problems Serge Gratton University of Toulouse CERFACS-IRIT Joint work with E. Bergou, S. Gurol, E. Simon, D. Titley-Peloquin, Ph.L. Toint, J. Tshimanga, L.N. Vicente, Z. Zhang BFG Conference, London, June 2015

2 A dynamical system is characterized by state variables, e.g. velocity components, pressure, density, temperature, gravitational potential. Goal: predict the state of the system at a future time from a dynamical integration model and observational data. Applications: climate, meteorology, oceanography, neutronics, finance, ... → forecasting problems

5 A dynamical integration model predicts the state of the system given the state at an earlier time. → Integrating may lead to very large prediction errors (inexact physics, discretization errors, approximated parameters). Observational data are used to improve the accuracy of the forecasts. → But the data are inaccurate (measurement noise, under-sampling). 10^7 observations (10^9 variables) processed every day: inverse big data. DA uses a weighted combination of data and dynamics.

9 Known situation 2 days ago and background prediction; record data for the past 2 days; minimize the difference between the model and observations; make predictions for the following day.

10 Modeling and toy example Dual iterative solvers Using subspace ideas, subspace reduction Ensemble methods with a stochastic dynamical system Variational methods with a stochastic dynamical system Summary, caveat and ongoing related work

12 A vibrating string We consider a vibrating string, held fixed at both ends. It is released with zero initial speed, from an unknown position. The string remains in the vertical plane. The string is observed with a set of physical devices measuring the string position at regularly spaced points during a given time span. We would like to make a forecast of the string position outside the observation time span. [Figure: string position u(x) on 0 ≤ x ≤ 1, with observation points]

13 A vibrating string: the model The string position u(x,t) is the solution of the partial differential equation
$$\begin{cases} \partial_{tt} u(x,t) - \partial_{xx} u(x,t) = 0 & \text{in } ]0,1[\,\times\,]0,T[ \\ u(0,t) = u(1,t) = 0, & t \in\, ]0,T[ \\ u(x,0) = u_0(x),\ \partial_t u(x,0) = 0, & x \in\, ]0,1[ \end{cases}$$
Under regularity assumptions on $u_0$, this system has a unique solution. We suppose that the system is observed at times $t_n$, $n \geq 0$. The problem reads
$$\min_{u_0} \sum_{n=0}^{N_{obs}} \| u(\cdot, t_n) - y_n \|^2$$
This is an infinite-dimensional linear least-squares problem, which has to be discretized to be solved on a computer. Discretize, then minimize.

14 The observations We now consider that the string is observed regularly in time and space. No noise, more observations than unknowns. The discretized version of the linear least-squares problem $\min_{u_0} \sum_{n=0}^{N_{obs}} \| u(\cdot,t_n) - y_n \|^2$ is solved with a conjugate gradient technique → test(over). Very good agreement between truth and analysis.

15 Realistic difficult case In practice, observing a 3D field at all space points is out of reach → test(under). The observations are noisy, which introduces high frequencies in the analysis → test(noisy). Both effects (always) come together → test(under-noisy).

16 Exploiting a priori information We do not consider the previous solution acceptable, because we doubt a string might take such positions. We expect the solution to be smooth enough. We would like to introduce the fact that the string position should not vary too much when considering points that are close in physical space. Two options: a purely algebraic approach, e.g.
$$\min_{u_0} \sum_{j=0}^{N_{obs,x}-1} \left( u_0^{j+1} - u_0^{j} \right)^2 + \sum_{n=0}^{N_{obs,t}} \| u(\cdot,t_n) - y_n \|^2 \quad \text{s.t. the discretized dynamics,}$$
or using a pseudo-physical smoothing process. Sum of a background (a priori) term and an observation term.

17 Smoothing in the discretized space with the heat equation We consider the discretized heat equation
$$\begin{cases} \partial_t u(x,t) - \partial_{xx} u(x,t) = 0 & \text{in } ]0,1[\,\times\,]0,T[ \\ u(0,t) = u(1,t) = 0, & t \in\, ]0,T[ \\ u(x,0) = u_0(x), & x \in\, ]0,1[ \end{cases}$$
For a given T, $u(\cdot,T)$ is smoother than $u_0$, because high-frequency terms get strongly damped.
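To make the damping statement concrete, here is a minimal numerical sketch; the grid size, time step and number of steps are illustrative assumptions, not the talk's settings. A few implicit Euler steps of the discretized 1D heat equation shrink a high-frequency Fourier mode of $u_0$ by orders of magnitude while leaving the lowest mode almost untouched.

```python
import numpy as np

n = 200                         # interior grid points on ]0,1[ (assumed)
h = 1.0 / (n + 1)
dt = 1e-4                       # pseudo-time step (assumed)

# Standard second-difference Laplacian with Dirichlet boundary conditions.
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

x = np.linspace(h, 1 - h, n)
u0 = np.sin(np.pi * x) + 0.5 * np.sin(40 * np.pi * x)  # smooth + oscillatory mode

# Implicit Euler: (I - dt*A) u_{k+1} = u_k; the mode sin(k*pi*x) is damped by
# roughly 1/(1 + dt*(k*pi)^2) per step, so high k decays fastest.
u = u0.copy()
for _ in range(10):
    u = np.linalg.solve(np.eye(n) - dt * A, u)

for k in (1, 40):
    mode = np.sin(k * np.pi * x)
    coef = lambda v: 2 * h * (v @ mode)   # Fourier-sine coefficient of v
    print(f"mode k={k}: {coef(u0):.3f} -> {coef(u):.2e}")
```

Running this shows the k = 40 coefficient collapsing by several orders of magnitude while the k = 1 coefficient barely moves, which is exactly the smoothing effect the slide describes.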

18 Eigenbasis of a few steps of the heat equation

19 Solve a large-scale nonlinear weighted least-squares problem:
$$\min_{x \in \mathbb{R}^n} \frac{1}{2} \| x - x_b \|_{B^{-1}}^2 + \frac{1}{2} \sum_{j=0}^{N} \| H_j(M_j(x)) - y_j \|_{R_j^{-1}}^2$$
where $x \approx x(t_0)$ is the control variable; the $M_j$ are model operators: $x(t_j) = M_j(x(t_0))$; the $H_j$ are observation operators: $y_j \approx H_j(x(t_j))$; the observations $y_j$ and the background $x_b$ are noisy; B and $R_j$ are covariance matrices.
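As an illustration of this cost function, the following sketch evaluates J(x) with toy stand-ins for the operators: a fixed linear map for each model step $M_j$, subsampling for $H_j$, and identity covariances. All of these are assumptions for the demonstration, not the operational models.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 50, 10, 4
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # toy linear model step M
obs_idx = rng.choice(n, size=m, replace=False)       # H_j: observe m components
B_inv = np.eye(n)                                    # assumed identity weights
R_inv = np.eye(m)
x_b = rng.standard_normal(n)
ys = [rng.standard_normal(m) for _ in range(N + 1)]  # synthetic observations

def cost(x):
    """J(x) = 1/2||x - x_b||^2_{B^-1} + 1/2 sum_j ||H_j(M_j(x)) - y_j||^2_{R_j^-1}."""
    J = 0.5 * (x - x_b) @ B_inv @ (x - x_b)
    xj = x.copy()
    for j in range(N + 1):
        r = xj[obs_idx] - ys[j]          # H_j(M_j(x)) - y_j at time t_j
        J += 0.5 * r @ R_inv @ r
        xj = A @ xj                      # advance the state to t_{j+1}
    return J

print(cost(x_b))                         # cost at the background state
```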

20 Back to the realistic difficult case Underdetermined case → test(under-reg). Noisy case → test(noisy-reg). Underdetermined and noisy case → test(under-noisy-reg).

21 Issues on background regularization Background matrix mat-vec in CG: another differential equation has to be solved. In case of modelling, when a direct solution is not applicable, an inner-outer iteration scheme has to be controlled. Determining a reasonable background matrix: based on physical considerations, possibly on statistics over past assimilation periods. Introduction of balance relations in the background: when variables are related to each other by relations that are not accounted for in the model and not properly observed, an additional (weak) penalty term is added.

22 Algorithmic considerations for
$$\min_{x \in \mathbb{R}^n} \frac{1}{2} \| x - x_b \|_{B^{-1}}^2 + \frac{1}{2} \sum_{j=0}^{N} \| H_j(M_j(x)) - y_j \|_{R_j^{-1}}^2$$
very large problem size: $m \approx 10^7$ observations, $n \approx 10^8$ unknowns; iterative methods for nonlinear problems: primal/dual formulations? [S. Gratton and J. Tshimanga (2009); S. Gratton, P. Toint, and J. Tshimanga] preconditioning? [S. Gratton, A. Sartenaer, J. Tshimanga, A.T. Weaver (2008); S. Gratton, P. Laloyaux, A. Sartenaer, and J. Tshimanga (2001)] use of subspace information? handling of stochastic dynamical systems? very few iterations performed

23 Modeling and toy example Dual iterative solvers Using subspace ideas, subspace reduction Ensemble methods with a stochastic dynamical system Variational methods with a stochastic dynamical system Summary, caveat and ongoing related work

24 Solve a large-scale nonlinear weighted least-squares problem:
$$\min_{x \in \mathbb{R}^n} \frac{1}{2} \| x - x_b \|_{B^{-1}}^2 + \frac{1}{2} \sum_{j=0}^{N} \| H_j(M_j(x)) - y_j \|_{R_j^{-1}}^2$$
Typically solved using a truncated Gauss-Newton algorithm (known as 4D-Var in the DA community):
→ linearize: $H_j(M_j(x^{(k)} + \delta x^{(k)})) \approx H_j(M_j(x^{(k)})) + H_j^{(k)} \delta x^{(k)}$
→ solve the linearized subproblem
$$\min_{\delta x^{(k)} \in \mathbb{R}^n} \frac{1}{2} \| \delta x^{(k)} - (x_b - x^{(k)}) \|_{B^{-1}}^2 + \frac{1}{2} \| H^{(k)} \delta x^{(k)} - d^{(k)} \|_{R^{-1}}^2$$
→ update $x^{(k+1)} = x^{(k)} + \delta x^{(k)}$
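A minimal truncated Gauss-Newton loop on a toy problem, mirroring the three steps above. The nonlinear observation operator h, the finite-difference jacobian helper, and the fixed inner iteration count are all hypothetical choices made for the sketch.

```python
import numpy as np

def h(x):                      # toy nonlinear observation of the model state
    return np.tanh(x)[::2]

def jacobian(f, x, eps=1e-7):  # simple finite-difference Jacobian (assumption)
    f0 = f(x)
    J = np.empty((f0.size, x.size))
    for i in range(x.size):
        xe = x.copy(); xe[i] += eps
        J[:, i] = (f(xe) - f0) / eps
    return J

rng = np.random.default_rng(1)
n = 20
x_b = np.zeros(n)
x_true = rng.standard_normal(n)
y = h(x_true)
B_inv = np.eye(n); R_inv = np.eye(y.size)

x = x_b.copy()
for k in range(3):                          # outer (nonlinear) loop
    Hk = jacobian(h, x)                     # linearize around x^(k)
    d = y - h(x)
    # normal equations of the linearized subproblem
    Amat = B_inv + Hk.T @ R_inv @ Hk
    rhs = B_inv @ (x_b - x) + Hk.T @ R_inv @ d
    # truncated inner loop: a few CG steps only
    dx = np.zeros(n); r = rhs.copy(); p = r.copy()
    for _ in range(5):
        q = Amat @ p
        a = (r @ r) / (p @ q)
        dx += a * p
        r_new = r - a * q
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    x = x + dx                              # update the outer iterate
    print(f"outer {k}: ||h(x)-y|| = {np.linalg.norm(h(x) - y):.2e}")
```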

25 Defining the constraint as $\varepsilon = H\,\delta x - d$ for the quadratic problem, we can write the Lagrangian function
$$\mathcal{L}(\delta x, \varepsilon, \lambda) = \frac{1}{2} \| x + \delta x - x_b \|_{B^{-1}}^2 + \frac{1}{2} \| \varepsilon \|_{R^{-1}}^2 + \lambda^T (\varepsilon - H\,\delta x + d)$$
From the Karush-Kuhn-Tucker conditions, the following stationarity conditions are satisfied at any optimum:
$$\nabla_{\delta x} \mathcal{L}(\delta x, \varepsilon, \lambda) = B^{-1}(x + \delta x - x_b) - H^T \lambda = 0$$
$$\nabla_{\varepsilon} \mathcal{L}(\delta x, \varepsilon, \lambda) = R^{-1} \varepsilon + \lambda = 0$$
$$\nabla_{\lambda} \mathcal{L}(\delta x, \varepsilon, \lambda) = \varepsilon - H\,\delta x + d = 0$$

26 Exploiting the structure: Dual Approach The exact solution is either
$$\delta x = x_b - x_k + B H_k^T \underbrace{(H_k B H_k^T + R)^{-1} \left( d_k - H_k(x_b - x_k) \right)}_{\text{Lagrange mult.: requires solving a linear system iteratively in } \mathbb{R}^m}$$
or, from the normal equations,
$$\delta x = x_b - x_k + \underbrace{(B^{-1} + H_k^T R^{-1} H_k)^{-1} H_k^T R^{-1} \left( d_k - H_k(x_b - x_k) \right)}_{\text{linear system solved iteratively in } \mathbb{R}^n}$$
If m ≪ n, performing the minimization in $\mathbb{R}^m$ can reduce both memory and computational cost. Nonlinear problem: we want to solve iteratively and truncate after a few steps in a trust-region truncated CG solver. Does it hurt to use this dual approach?
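The equivalence of the two exact formulas is the Sherman-Morrison-Woodbury identity. A quick numerical check with random SPD toy matrices (an assumption) confirms that the dual (m x m) and primal (n x n) expressions produce the same increment:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 30, 8
H = rng.standard_normal((m, n))
Bh = rng.standard_normal((n, n)); B = Bh @ Bh.T + n * np.eye(n)   # SPD toy B
Rh = rng.standard_normal((m, m)); R = Rh @ Rh.T + m * np.eye(m)   # SPD toy R
d = rng.standard_normal(m)

dual   = B @ H.T @ np.linalg.solve(H @ B @ H.T + R, d)            # m x m solve
primal = np.linalg.solve(np.linalg.inv(B) + H.T @ np.linalg.solve(R, H),
                         H.T @ np.linalg.solve(R, d))             # n x n solve

print(np.linalg.norm(dual - primal))   # ~1e-12: identical up to round-off
```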

27 Dual/Primal approach compared Cost function:
$$J(\delta x_k) = \frac{1}{2} \| \delta x_k - (x_b - x_k) \|_{B^{-1}}^2 + \frac{1}{2} \| H_k\, \delta x_k - d_k \|_{R^{-1}}^2$$
[Figure: cost function along iterations, blue primal, red dual] The relation between the primal and dual formulas is lost when an iterative solver is truncated prematurely. This is bad news for the nonlinear iterations.

28 Preconditioned CG algorithm For i = 0, 1, ...
1. $q_i = (B^{-1} + H^T R^{-1} H)\, p_i$
2. $\alpha_i = \langle r_i, z_i \rangle / \langle q_i, p_i \rangle$ (compute the step length)
3. $x_{i+1} = x_i + \alpha_i p_i$ (update the iterate)
4. $r_{i+1} = r_i - \alpha_i q_i$ (update the residual)
5. $z_{i+1} = F\, r_{i+1}$ (update the preconditioned residual)
6. $\beta_i = \langle r_{i+1}, z_{i+1} \rangle / \langle r_i, z_i \rangle$ (ensure A-conjugate directions)
7. $p_{i+1} = z_{i+1} + \beta_i p_i$ (update the descent direction)
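A direct transcription of this loop into Python, with the system matrix and the preconditioner F passed as functions; the toy SPD test matrix and the Jacobi preconditioner are assumptions for the demonstration.

```python
import numpy as np

def pcg(apply_A, apply_F, b, iters=50, tol=1e-10):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = apply_F(r)
    p = z.copy()
    for _ in range(iters):
        q = apply_A(p)
        alpha = (r @ z) / (q @ p)         # step length
        x = x + alpha * p                 # update the iterate
        r_new = r - alpha * q             # update the residual
        if np.linalg.norm(r_new) < tol:
            break
        z_new = apply_F(r_new)            # preconditioned residual
        beta = (r_new @ z_new) / (r @ z)  # A-conjugacy coefficient
        p = z_new + beta * p              # new descent direction
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(3)
n = 40
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)   # SPD toy matrix
b = rng.standard_normal(n)
x = pcg(lambda v: A @ v, lambda v: v / np.diag(A), b)  # Jacobi preconditioner
print(np.linalg.norm(A @ x - b))
```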

29 Derivation of the dual iteration: every quantity in the primal PCG recursion is carried as $H^T$ or $BH^T$ times a hat variable living in $\mathbb{R}^m$. Initialization steps: given $v_0$; $r_0 = (H^T R^{-1} H + B^{-1})\, v_0 - b$, ... Loop: WHILE ...
1. $H^T \hat q_{i-1} = H^T (R^{-1} H B H^T + I_m)\, \hat p_{i-1}$
2. $\alpha_{i-1} = r_{i-1}^T z_{i-1} / \hat q_{i-1}^T \hat p_{i-1}$
3. $B H^T \hat v_i = B H^T (\hat v_{i-1} + \alpha_{i-1} \hat p_{i-1})$
4. $H^T \hat r_i = H^T (\hat r_{i-1} - \alpha_{i-1} \hat q_{i-1})$
5. $B H^T \hat z_i = F H^T \hat r_i = B H^T G \hat r_i$ (using $F H^T = B H^T G$)
6. $\beta_i = r_i^T z_i / r_{i-1}^T z_{i-1}$
7. $B H^T \hat p_i = B H^T (\hat z_i + \beta_i \hat p_{i-1})$

30 RPCG algorithm Initialization: $\lambda_0 = 0$, $\hat r_0 = R^{-1}(d - H(x_b - x))$, $\hat z_0 = G \hat r_0$, $\hat p_0 = \hat z_0$. Loop on i:
1. $\hat q_i = \hat A \hat p_i$
2. $\alpha_i = \langle \hat r_{i-1}, \hat z_{i-1} \rangle_M / \langle \hat q_i, \hat p_i \rangle_M$
3. $\lambda_i = \lambda_{i-1} + \alpha_i \hat p_i$
4. $\hat r_i = \hat r_{i-1} - \alpha_i \hat q_i$
5. $\hat z_i = G \hat r_i$
6. $\beta_i = \langle \hat r_i, \hat z_i \rangle_M / \langle \hat r_{i-1}, \hat z_{i-1} \rangle_M$
7. $\hat p_{i+1} = \hat z_i + \beta_i \hat p_i$
where $\hat A = R^{-1} H B H^T + I_m$, G is the preconditioner and M is the inner product. RPCG algorithm: $M = H B H^T$ preserves the monotonic decrease of the quadratic cost; G should be symmetric w.r.t. M; we need $B H^T G = F H^T$.

31 Observations: SST (Sea Surface Temperature) and SSH (Sea Surface Height) observations from satellites; sub-surface hydrographic observations from floats. Number of observations (m): 10^5. Number of state variables (n): 10^6 for strong constraint and 10^7 for weak constraint. Computation: 64 CPUs.

32 Let $F_k$ be s.p.d. Is the relation $F_k H_k^T = B H_k^T G_k$ reasonable? Perfect preconditioner: $(B^{-1} + H_k^T R^{-1} H_k)^{-1} H_k^T = B H_k^T (H_k B H_k^T + R)^{-1}$. If $H_k B H_k^T$ is nonsingular and $\mathrm{Im}(F H_k^T) \subseteq \mathrm{Im}(B H^T)$, there is always a unique $G_k$ that corresponds to $F_k$. But we don't want to compute $G_k = (H_k B H_k^T + R)^{-1} H_k F_k H_k^T$: too expensive. We focus on quasi-Newton Limited Memory Preconditioners [Morales and Nocedal 2000; Gratton, Sartenaer and Tshimanga 2011] that are widely used in applications. [Gratton, Gurol and Toint 2012] derive the quasi-Newton LMP in dual space, which generates iterates mathematically equivalent to those of the primal approach.

33 F as a Quasi-Newton Limited Memory Preconditioner Quasi-Newton LMPs are simply based on the idea of generating preconditioners by the L-BFGS updating formula
$$\rho_k = 1/(q_k^T p_k), \qquad F_{k+1} = (I_n - \rho_k\, p_k q_k^T)\, F_k\, (I_n - \rho_k\, q_k p_k^T) + \rho_k\, p_k p_k^T, \qquad q_k = (B^{-1} + H^T R^{-1} H)\, p_k$$
Given the pairs $(p_k, q_k)$, $F_{k+1}$ is the solution of
$$\min_{F} \left\| W^{1/2} (F - F_k)\, W^{1/2} \right\|_F \quad \text{subject to} \quad F = F^T, \quad F q_k = p_k$$
where W is a positive definite matrix satisfying $W p_k = q_k$. The matrix F approximates the inverse of the system matrix: a preconditioner.
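A minimal sketch of this update: given pairs $(p_k, q_k)$ with $q_k = A p_k$, apply the BFGS-style formula to a preconditioner F and verify the secant condition $F_{k+1} q_k = p_k$. The toy SPD matrix A and the random directions (standing in for CG descent directions) are assumptions.

```python
import numpy as np

def lmp_update(F, p, q):
    """One quasi-Newton LMP update: F <- (I - rho p q^T) F (I - rho q p^T) + rho p p^T."""
    rho = 1.0 / (q @ p)
    n = p.size
    V = np.eye(n) - rho * np.outer(p, q)
    return V @ F @ V.T + rho * np.outer(p, p)

rng = np.random.default_rng(4)
n = 25
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # plays the role of B^-1 + H^T R^-1 H

F = np.eye(n)
for _ in range(5):                 # limited memory: only a handful of pairs
    p = rng.standard_normal(n)     # in practice, CG descent directions
    q = A @ p
    F = lmp_update(F, p, q)

print(np.linalg.norm(F @ q - p))   # secant condition holds for the last pair
```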

34 G as a Quasi-Newton Limited Memory Preconditioner Given F, we can find G that satisfies $F H^T = B H^T G$. G can be derived from the formula for F using the relations $p_i = B H^T \hat p_i$, $q_i = H^T \hat q_i$:
$$G_{k+1} = \left( I_m - \hat\rho_k\, \hat p_k (M \hat q_k)^T \right) G_k \left( I_m - \hat\rho_k\, \hat q_k \hat p_k^T M \right) + \hat\rho_k\, \hat p_k \hat p_k^T M$$
where $M = H B H^T$, $\hat p_k$ is the search direction, $\hat q_k = (I_m + R^{-1} H B H^T)\, \hat p_k$, and the dual pairs use $\hat\rho_k = 1/(\hat q_k^T H B H^T \hat p_k)$.

35 The quasi-Newton LMP in dual space (nonlinear case) In the nonlinear iterations, inheriting the previous preconditioner $G_{k-1}$ may not be possible, because we need the existence of an s.p.d. $F_k$ satisfying $F_k H_k^T = B H_k^T G_{k-1}$. The inherited matrix $G_{k-1}$ may not satisfy $H_k F_k H_k^T = H_k B H_k^T G_{k-1}$ for any s.p.d. $F_k$. The method can be improved by using a safeguarding strategy, whose convergence relies on the Cauchy decrease and a trust-region magical step.

36 Numerical experiment on the heat equation (1/3) The dynamical model is taken to be the nonlinear heat equation
$$\frac{\partial x}{\partial t} - \frac{\partial^2 x}{\partial u^2} - \frac{\partial^2 x}{\partial v^2} + \lambda f[x] = 0 \ \text{ in } (0,1)^2, \qquad x[u,v,t] = 0 \ \text{ on } \partial(0,1)^2,$$
where the temperature variable $x[u,v,t]$ depends on both the time t and the position given by the spatial coordinates u and v. The function f[x] is defined by $f[x] = \exp[x]$.

37 Numerical experiment on the heat equation (2/3) [Figure: results for $f[x] = \exp[x]$, $\lambda = 2$]

38 Numerical experiment on the heat equation (3/3) [Figure: results for $f[x] = \exp[x]$, $\lambda = 4.2$]

39 Modeling and toy example Dual iterative solvers Using subspace ideas, subspace reduction Ensemble methods with a stochastic dynamical system Variational methods with a stochastic dynamical system Summary, caveat and ongoing related work

40 Nemo, GYRE configuration North Atlantic circulation with its Gulf Stream current. The basin has 3180 km length, 2120 km width and 4.5 km depth. [Figure: state fields, sea level anomaly (top left), velocities (top right), temperature (bottom left) and salinity (bottom right)]

41 A subspace from physical considerations Physical considerations: the ocean and the atmosphere exhibit an attractor, a slowly varying geometrical object that contains the state. Most of the variability can be explained in the attractor subspace (of low dimension r) → minimize first in this subspace (of basis L).

42 Empirical Orthogonal Functions (EOFs) Construction of L: Let $x_1, \ldots, x_p \in \mathbb{R}^n$ be a set of state vectors. Build $C = \frac{1}{p-1} \sum_{i=1}^{p} (x_i - \bar x)(x_i - \bar x)^T$. Compute the eigenvectors of C (the EOFs). Store the r eigenvectors corresponding to the largest eigenvalues. → Already used in reduced Kalman filters (the SEEK filter).

43 Choice for r Select r such that
$$\frac{\sum_{i=1}^{r} \lambda_i}{\sum_{i=1}^{n} \lambda_i} \geq 0.9 \qquad (\lambda_i \text{ sorted in decreasing order})$$
→ r = 100 yields more than 90%.
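The EOF recipe of the last two slides in a few lines of numpy; the synthetic state vectors with decaying variance are an assumption made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 60
# Toy states whose variance decays across components, mimicking an attractor.
X = rng.standard_normal((n, p)) * np.linspace(5, 0.1, n)[:, None]

xbar = X.mean(axis=1, keepdims=True)
C = (X - xbar) @ (X - xbar).T / (p - 1)      # sample covariance (n x n)

lam, V = np.linalg.eigh(C)                   # eigenvalues in ascending order
lam, V = lam[::-1], V[:, ::-1]               # sort in decreasing order

r = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.9) + 1  # 90% of the variance
L = V[:, :r]                                 # basis of the attractor subspace
print(r, L.shape)
```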

44 Using the subspace [Gratton, Laloyaux, Sartenaer, Tshimanga 2011] The solution of the first system in the subspace spanned by L can be obtained with a Ritz-Galerkin formula:
$$x_0' = x_b + L (L^T A_0 L)^{-1} L^T b_0$$
It is also possible to approximate the inverse Hessian with a Limited Memory Preconditioner
$$H = \left[ I_n - L (L^T A_0 L)^{-1} L^T A_0 \right] \left[ I_n - A_0 L (L^T A_0 L)^{-1} L^T \right] + L (L^T A_0 L)^{-1} L^T$$
We also experimented with the use of a smoother (t-based) on the vectors in L. It is possible to keep $A_0 L$ from a past iteration or to recompute it (linearization).
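A minimal check of the Ritz-Galerkin formula: the subspace solution only requires an r-by-r solve, and its residual is orthogonal to Im(L) (the Galerkin condition). The toy SPD matrix $A_0$ and the random orthonormal basis L are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 100, 10
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)                       # SPD toy Hessian
L = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal subspace basis
x_b = rng.standard_normal(n)
b0 = rng.standard_normal(n)

w = np.linalg.solve(L.T @ A0 @ L, L.T @ b0)        # r x r system only
x0 = x_b + L @ w                                   # Ritz-Galerkin solution

# Galerkin condition: the residual b0 - A0*(x0 - x_b) is orthogonal to Im(L).
print(np.linalg.norm(L.T @ (b0 - A0 @ (x0 - x_b))))   # ~1e-12
```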

45 Subspace solution [Figure]

46 Subspace solution with linearization [Figure]

47 Using the subspace [Gratton, Laloyaux, Sartenaer 2014] The Ritz-Galerkin starting point corresponds to solving the quadratic minimisation problem inside the subspace Im(L). Since this subspace is typically of dimension less than 1000, using derivative-free optimization techniques is conceivable, even on the nonlinear function. Possible scheme, processing vectors by e.g. batches of 10 members: for k = 1, ...
1. Build a new subspace $L_k$
2. Minimize the function in the affine space $x_k + \mathrm{Im}(L_k)$ to get $x_k + L_k w_k$ (SID-PSM [Custodio, Vicente 2007], see also [Vaz, Vicente 2009])
3. Set $x_{k+1} = x_k + L_k w_k$

48 Subspace solution A criterion $\gamma_i$ is used to promote subspace vectors $l_i$ for which a large variation of the cost function occurs (singular value $\sigma_i$):
$$\gamma_i = \frac{\delta_i}{\max_j \delta_j} + \frac{\sigma_i}{\max_j \sigma_j}, \qquad \text{where } \delta_i = \| h(x_0) - h(x_0 + l_i) \|_{R^{-1}}$$
Vectors $l_i$ can be smoothed. See [Gratton, Vicente, Zhang 2014] for an extension to full domain decomposition and stochastic methods. Also see [Gratton, Royer, Vicente, Zhang 2014] for a complexity analysis of stochastic direct search, and [Diouane, Gratton, Vicente 2014] for similar algorithms in globally convergent evolution algorithms.

49 Modeling and toy example Dual iterative solvers Using subspace ideas, subspace reduction Ensemble methods with a stochastic dynamical system Variational methods with a stochastic dynamical system Summary, caveat and ongoing related work

50 Weak-constraint 4D-Var
$$\min_{x \in \mathbb{R}^n} \frac{1}{2} \| x_0 - x_b \|_{B^{-1}}^2 + \frac{1}{2} \sum_{j=0}^{N} \| H_j x_j - y_j \|_{R_j^{-1}}^2 + \frac{1}{2} \sum_{j=1}^{N} \| \underbrace{x_j - M_j(x_{j-1})}_{q_j} \|_{Q_j^{-1}}^2$$
$x = (x_0; x_1; \ldots; x_N) \in \mathbb{R}^n$ is the control variable (with $x_j = x(t_j)$); $x_b$ is the background given at the initial time $t_0$; $y_j \in \mathbb{R}^{m_j}$ is the observation vector over a given time interval; $H_j$ maps the state vector $x_j$ from model space to observation space; $M_j$ represents an integration of the numerical model from time $t_{j-1}$ to $t_j$; B, $R_j$ and $Q_j$ are the covariance matrices of background, observation and model error.

51 A data assimilation approach The objective function to be minimized is
$$\frac{1}{2} \| x_0 - x_b \|_{B^{-1}}^2 + \frac{1}{2} \sum_{i=1}^{k} \| x_i - M_i(x_{i-1}) \|_{Q_i^{-1}}^2 + \frac{1}{2} \sum_{i=1}^{k} \| d_i - H_i(x_i) \|_{R_i^{-1}}^2$$
The corresponding LMM linearized least-squares problem is
$$\frac{1}{2} \| x_0 + \delta x_0 - x_b \|_{B^{-1}}^2 + \frac{1}{2} \sum_{i=1}^{k} \| y_i - H_i\, \delta x_i \|_{R_i^{-1}}^2 + \frac{1}{2} \sum_{i=1}^{k} \| \delta x_i - m_i - M_i\, \delta x_{i-1} \|_{Q_i^{-1}}^2 + \frac{\gamma_k}{2} \sum_{i=0}^{k} \| \delta x_i \|^2,$$
where $y_i = d_i - H_i(x_i)$, $H_i = H_i'(x_i)$, $m_i = M_i(x_{i-1}) - x_i$, $M_i = M_i'(x_{i-1})$. The penalty term can be considered as further observations.

52 Kalman filter (KF) Initialization. Prediction step:
$$x_{i|i-1} = M_i\, x_{i-1|i-1} + m_i$$
Correction step:
$$x_{i|i} = x_{i|i-1} - P_{i|i-1} H_i^T \left( H_i P_{i|i-1} H_i^T + R_i \right)^{-1} \left( H_i\, x_{i|i-1} - y_i \right), \qquad P_{i|i-1} = M_i P_{i-1|i-1} M_i^T + Q_i$$
$P_{i|i-1}$ is expensive to compute and store ⇒ ensemble Kalman filter.
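One prediction/correction cycle of the Kalman filter above, written out with toy linear operators (assumptions) to fix the notation:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 6, 3
Mi = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # model M_i (toy)
Hi = rng.standard_normal((m, n))                     # observation operator H_i
Qi = 0.01 * np.eye(n); Ri = 0.1 * np.eye(m)

x, P = np.zeros(n), np.eye(n)                        # analysis at t_{i-1}
y = rng.standard_normal(m)                           # synthetic observation

# Prediction step (the forcing term m_i is omitted in this toy setting)
x_f = Mi @ x
P_f = Mi @ P @ Mi.T + Qi

# Correction step
S = Hi @ P_f @ Hi.T + Ri
x_a = x_f - P_f @ Hi.T @ np.linalg.solve(S, Hi @ x_f - y)
P_a = P_f - P_f @ Hi.T @ np.linalg.solve(S, Hi @ P_f)
print(x_a)
```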

53 Ensemble Kalman filter Initialization. Prediction step:
$$x^n_{i|i-1} = M_i\, x^n_{i-1|i-1} + m_i + V_i^n, \qquad V_i^n \sim N(0, Q_i), \quad n = 1, \ldots, N$$
Correction step:
$$x^n_{i|i} = x^n_{i|i-1} - P^N_{i|i-1} H_i^T \left( H_i P^N_{i|i-1} H_i^T + R_i \right)^{-1} \left( H_i\, x^n_{i|i-1} - W_i^n - y_i \right), \qquad W_i^n \sim N(0, R_i), \quad n = 1, \ldots, N$$
$$P^N_{i|i-1} = \frac{1}{N-1} \sum_{n=1}^{N} \left( x^n_{i|i-1} - \bar x_{i|i-1} \right) \left( x^n_{i|i-1} - \bar x_{i|i-1} \right)^T$$
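The corresponding ensemble cycle: $P_{i|i-1}$ is replaced by the sample covariance of N forecast members, and each member assimilates a perturbed observation. The same toy operators as in the KF sketch above are assumed.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, N = 6, 3, 50
Mi = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # model M_i (toy)
Hi = rng.standard_normal((m, n))                     # observation operator H_i
Qi = 0.01 * np.eye(n); Ri = 0.1 * np.eye(m)
y = rng.standard_normal(m)

X = rng.standard_normal((n, N))                      # analysis ensemble at t_{i-1}

# Prediction: advance each member with model-error perturbation V_i^n ~ N(0, Q_i)
Xf = Mi @ X + rng.multivariate_normal(np.zeros(n), Qi, N).T

# Sample covariance of the forecast ensemble
Xc = Xf - Xf.mean(axis=1, keepdims=True)
PN = Xc @ Xc.T / (N - 1)

# Correction: each member assimilates a perturbed observation W_i^n ~ N(0, R_i)
S = Hi @ PN @ Hi.T + Ri
K = PN @ Hi.T @ np.linalg.inv(S)
W = rng.multivariate_normal(np.zeros(m), Ri, N).T
Xa = Xf - K @ (Hi @ Xf - W - y[:, None])
print(Xa.mean(axis=1))                               # analysis ensemble mean
```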

54 Derivative-free implementation of the EnKS $M_i = M_i'(x_{i-1})$ occurs only in advancing the time, as an action on an ensemble member:
$$M_i \left( x^n_{i-1|i-1} - x_{i-1} \right) \approx M_i(x^n_{i-1|i-1}) - M_i(x_{i-1})$$
$H_i = H_i'(x_i)$ occurs only in the action on an ensemble member:
$$H_i \left( x^n_{i|i} - x_i \right) \approx H_i(x^n_{i|i}) - H_i(x_i)$$
The algorithm has to be transformed to get the least-squares solution: ensemble smoother.

55 Globalization of LMM-EnKS Initialization: choose $\eta_1 \in (0,1)$, $\eta_2 \in (0,1)$, $\gamma_{min} > 0$, $\gamma_{max} > 0$, $\lambda > 1$, $x_0$ and $\gamma_0 \in [\gamma_{min}, \gamma_{max}]$. For j = 0, 1, 2, ...
1. Approximate the solution of the linearized LMM subproblem by EnKS.
2. Add the regularization term as further observations.
3. Compute $\rho = \dfrac{f(x_j) - f(x_j + \delta x_j)}{m_j(x_j) - m_j(x_j + \delta x_j)}$, where $\delta x_j$ is the EnKS mean. If $\rho \geq \eta_1$, then set $x_{j+1} = x_j + \delta x_j$ and
$$\gamma_{j+1} = \begin{cases} \lambda \gamma_j & \text{if } \| g_{m_j} \| < \eta_2 / \gamma_j, \\ \in \left[ \max(\gamma_{min}, \gamma_j / \lambda),\, \gamma_j \right] & \text{if } \| g_{m_j} \| \geq \eta_2 / \gamma_j. \end{cases}$$
Otherwise, the step is rejected and $\gamma_{j+1} = \lambda \gamma_j$. [E. Bergou and L.N. Vicente]

56 Experiments set-up Dynamical model = Lorenz 63 model:
$$\frac{dx}{dt} = \sigma (y - x), \qquad \frac{dy}{dt} = \rho x - y - xz, \qquad \frac{dz}{dt} = xy - \beta z$$
A simplified mathematical model for atmospheric convection [Lorenz 63], describing idealized thermal convection in the Earth's atmosphere. The system is fully observed at each time step, dt = 0.11, k = 40, N = 400.
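The Lorenz 63 system integrated with a standard RK4 scheme; the classical parameters $\sigma = 10$, $\rho = 28$, $\beta = 8/3$ and the substepping are assumptions (the slide only specifies the dt = 0.11 interval between observations).

```python
import numpy as np

def lorenz63(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), rho * x - y - x * z, x * y - beta * z])

def rk4_step(f, u, dt):
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.array([1.0, 1.0, 1.0])
for step in range(40):             # k = 40 observation times, 0.11 apart
    for _ in range(11):            # substeps of 0.01 for stable integration
        u = rk4_step(lorenz63, u, 0.01)
print(u)
```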

57 Numerical experiments [Figures: objective function values; root mean square error] [E. Bergou, S. Gratton, L.N. Vicente, submitted to SIAM/JUQ]

58 Modeling and toy example Dual iterative solvers Using subspace ideas, subspace reduction Ensemble methods with a stochastic dynamical system Variational methods with a stochastic dynamical system Summary, caveat and ongoing related work

59 Variational approach: the linearized subproblem (inner loop) The linearized problem at the k-th outer loop is given by
$$\min_{\delta x} \frac{1}{2} \| \delta x_0 - b^{(k)} \|_{B^{-1}}^2 + \frac{1}{2} \sum_{j=0}^{N} \| H_j^{(k)} \delta x_j - d_j^{(k)} \|_{R_j^{-1}}^2 + \frac{1}{2} \sum_{j=1}^{N} \| \underbrace{\delta x_j - M_j^{(k)} \delta x_{j-1} - c_j^{(k)}}_{\delta q_j} \|_{Q_j^{-1}}^2$$
$\delta x = (\delta x_0; \delta x_1; \ldots; \delta x_N) \in \mathbb{R}^n$ is the increment. The vectors $b^{(k)}$, $c_j^{(k)}$ and $d_j^{(k)}$ are defined by $b^{(k)} = x_b - x_0^{(k)}$, $d_j^{(k)} = y_j - H_j(x_j^{(k)})$, and they are calculated in the outer loop.

60 Rewriting the linearized subproblem
$$\min_{\delta x \in \mathbb{R}^n} \frac{1}{2} \| L\, \delta x - b \|_{D^{-1}}^2 + \frac{1}{2} \| H\, \delta x - d \|_{R^{-1}}^2$$
where
$$L = \begin{pmatrix} I & & & \\ -M_1 & I & & \\ & \ddots & \ddots & \\ & & -M_N & I \end{pmatrix}, \qquad d = \begin{pmatrix} d_0 \\ d_1 \\ \vdots \\ d_N \end{pmatrix}, \qquad b = \begin{pmatrix} b \\ c_1 \\ \vdots \\ c_N \end{pmatrix}$$
$H = \mathrm{diag}(H_0, H_1, \ldots, H_N)$, $D = \mathrm{diag}(B, Q_1, \ldots, Q_N)$ and $R = \mathrm{diag}(R_0, R_1, \ldots, R_N)$.

61 Rewriting the linearized subproblem
$$L\, \delta x = \begin{pmatrix} I & & & \\ -M_1 & I & & \\ & \ddots & \ddots & \\ & & -M_N & I \end{pmatrix} \begin{pmatrix} \delta x_0 \\ \delta x_1 \\ \vdots \\ \delta x_N \end{pmatrix} = \begin{pmatrix} \delta x_0 \\ \delta x_1 - M_1 \delta x_0 \\ \vdots \\ \delta x_N - M_N \delta x_{N-1} \end{pmatrix}$$
Matrix-vector products with L can be parallelized in the time dimension.

62 Rewriting the linearized subproblem Making the change of variables $\delta p = L\, \delta x$, the subproblem can also be rewritten as
$$\min_{\delta p \in \mathbb{R}^n} \frac{1}{2} \| \delta p - b \|_{D^{-1}}^2 + \frac{1}{2} \| H L^{-1} \delta p - d \|_{R^{-1}}^2$$
$\delta x = L^{-1} \delta p$ is sequential: $\delta x_j = M_j\, \delta x_{j-1} + \delta p_j$.
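A minimal sketch of the two operator actions: applying the block-bidiagonal L is an independent loop over subwindows (hence parallelizable in time), while applying $L^{-1}$ is an inherently sequential forward substitution. The per-window models $M_j$ are toy matrices (assumptions).

```python
import numpy as np

rng = np.random.default_rng(9)
N, ns = 8, 5                            # subwindows, state size per window
Ms = [np.eye(ns) + 0.1 * rng.standard_normal((ns, ns)) for _ in range(N)]

def apply_L(dx):
    """p_0 = dx_0, p_j = dx_j - M_j dx_{j-1}: each j is independent."""
    return [dx[0]] + [dx[j] - Ms[j - 1] @ dx[j - 1] for j in range(1, N + 1)]

def solve_L(p):
    """dx_0 = p_0, dx_j = M_j dx_{j-1} + p_j: inherently sequential."""
    dx = [p[0]]
    for j in range(1, N + 1):
        dx.append(Ms[j - 1] @ dx[-1] + p[j])
    return dx

dx = [rng.standard_normal(ns) for _ in range(N + 1)]
p = apply_L(dx)
err = max(np.linalg.norm(a - b) for a, b in zip(solve_L(p), dx))
print(err)   # ~1e-15: L^{-1}(L dx) = dx
```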

63 The linearized subproblems State formulation:
$$\min_{\delta x} \frac{1}{2} \| L\, \delta x - b \|_{D^{-1}}^2 + \frac{1}{2} \| H\, \delta x - d \|_{R^{-1}}^2$$
Matrix-vector products with L can be parallelized in the time dimension. Solution algorithm: preconditioned Lanczos or PCG type methods. Preconditioning is difficult, since with an approximation $\tilde L$ of L one must make $D^{1/2} \tilde L^{-T} (L^T D^{-1} L)\, \tilde L^{-1} D^{1/2}$ close to the identity.
Forcing formulation:
$$\min_{\delta p} \frac{1}{2} \| \delta p - b \|_{D^{-1}}^2 + \frac{1}{2} \| H L^{-1} \delta p - d \|_{R^{-1}}^2$$
Matrix-vector products with $L^{-1}$ are sequential. Solution algorithm: preconditioned Lanczos or PCG type methods. Preconditioning by D is often effective. The structure is tri-diagonal by blocks → can be ill-conditioned depending on the accuracy of $\tilde L^{-1}$.

64 Saddle Point Approach → This approach is time-parallel. → GMRES method with a preconditioner. In matrix form:
$$\underbrace{\begin{pmatrix} D & 0 & L \\ 0 & R & H \\ L^T & H^T & 0 \end{pmatrix}}_{\mathcal{A}} \begin{pmatrix} \lambda \\ \mu \\ \delta x \end{pmatrix} = \begin{pmatrix} b \\ d \\ 0 \end{pmatrix}$$
where $\mathcal{A}$ is a (2n+m)-by-(2n+m) indefinite symmetric matrix. The solution of this problem is a saddle point.
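A dense toy assembly of this saddle-point system, solved with scipy's GMRES; the sizes and the matrices D, R, L, H below are assumptions chosen to keep the system nonsingular.

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(10)
n, m = 20, 8
D = np.eye(n)                       # stands for D = diag(B, Q_1, ..., Q_N)
R = np.eye(m)
L = np.eye(n) - np.diag(0.5 * np.ones(n - 1), -1)   # toy block-bidiagonal L
H = rng.standard_normal((m, n))
b = rng.standard_normal(n); d = rng.standard_normal(m)

A = np.block([
    [D,                np.zeros((n, m)), L                ],
    [np.zeros((m, n)), R,                H                ],
    [L.T,              H.T,              np.zeros((n, n)) ],
])
rhs = np.concatenate([b, d, np.zeros(n)])

sol, info = gmres(A, rhs)            # (2n+m) indefinite symmetric system
dx = sol[n + m:]                     # the increment is the last block
print(info, np.linalg.norm(A @ sol - rhs))
```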

65 Preconditioning the Saddle Point Formulation of 4D-Var
$$\mathcal{A} = \begin{pmatrix} D & 0 & L \\ 0 & R & H \\ L^T & H^T & 0 \end{pmatrix} = \begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}, \qquad A = \begin{pmatrix} D & 0 \\ 0 & R \end{pmatrix}, \quad B = \begin{pmatrix} L^T & H^T \end{pmatrix}$$
B is the most computationally expensive block, and calculations involving A are relatively cheap. The inexact constraint preconditioner proposed by (Bergamaschi et al. 2005) is promising for our application. The preconditioner can be chosen as
$$P = \begin{pmatrix} A & \tilde B^T \\ \tilde B & 0 \end{pmatrix} = \begin{pmatrix} D & 0 & \tilde L \\ 0 & R & 0 \\ \tilde L^T & 0 & 0 \end{pmatrix}$$
where $\tilde L$ is an approximation to the matrix L and $\tilde B = (\tilde L^T\ \ 0)$ is a full row-rank approximation of the matrix $B \in \mathbb{R}^{n \times (m+n)}$.

66 Numerical Results: implementation platform We used the Object Oriented Prediction System (OOPS). OOPS consists of simplified models of a real system. The model: a two-layer quasi-geostrophic model with 1600 grid points. Implementation details: there are 100 observations of the stream function every 3 hours, plus 100 wind observations and 100 wind-speed observations every 6 hours. The error covariance matrices are assumed to be horizontally isotropic and homogeneous, with Gaussian spatial structure. The analysis window is 24 hours and is divided into 8 subwindows. 3 outer loops with 10 inner loops each are performed.

67 Numerical Results: methods
1. Forcing formulation with D preconditioning → solution method is preconditioned conjugate gradients.
2. Saddle point formulation with an updated inexact constraint preconditioner → solution method is GMRES. The initial preconditioner is chosen as
$$P_0 = \begin{pmatrix} D & 0 & \tilde L \\ 0 & R & 0 \\ \tilde L^T & 0 & 0 \end{pmatrix}, \qquad P_0^{-1} = \begin{pmatrix} 0 & 0 & \tilde L^{-T} \\ 0 & R^{-1} & 0 \\ \tilde L^{-1} & 0 & -\tilde L^{-1} D \tilde L^{-T} \end{pmatrix}$$
with $\tilde L$ an approximation to the matrix L.

68 Numerical Results with the OOPS qg-model [Figure: nonlinear cost function values along iterations] → Last 8 pairs were used to construct the preconditioner → preconditioner built from rank-one pairs $v w^T$ (where $v = r_c \in \mathbb{R}^n$, $w = r_b \in \mathbb{R}^{n+m}$, with scaling $1/v^T u$)

69 Numerical Results with the OOPS qg-model [Figure: nonlinear cost function values along iterations] → Last 8 pairs were used to construct the preconditioner

70 Numerical Results with the OOPS qg-model [Figure: nonlinear cost function values along sequential subwindow integrations] → At each iteration, the forcing formulation requires one application of $L^{-1}$, followed by one application of $L^{-T}$ (16 sequential subwindow integrations).

71 Modeling and toy example Dual iterative solvers Using subspace ideas, subspace reduction Ensemble methods with a stochastic dynamical system Variational methods with a stochastic dynamical system Summary, caveat and ongoing related work

72 Summary Very large-scale, highly nonlinear forecasting problems can be solved using dual methods, projection methods and statistical sampling, adapted to the nonlinearity. In some severe cases, having more than a local minimum might be needed: model reduction and exploration techniques are useful. Most systems are a priori designed for solving the prediction problem; using DA techniques in other settings is not necessarily successful. Further work: errors between time steps are correlated; having a parallel code is a challenge; use nonlinear multigrid and domain decomposition ideas for scalability.
