Model order reduction via Proper Orthogonal Decomposition
Reduced Basis Summer School 2015
Martin Gubisch, University of Konstanz
September 17, 2015

Motivation
Introduction

Let X be a Hilbert space and T be a time interval. For a given function y ∈ L²(T, X) we want to find a reduced basis Ψℓ = {ψ₁, …, ψℓ} ⊂ X of X-orthonormal elements ψ₁, …, ψℓ such that y admits an optimal representation in L²(T, span Ψℓ), in the sense that ψ₁, …, ψℓ solve

    \min_{\psi_1,\dots,\psi_\ell \in X} \int_T \Big\| y(t) - \sum_{i=1}^{\ell} \langle y(t), \psi_i \rangle_X \, \psi_i \Big\|_X^2 \, dt
    \quad \text{s.t.} \quad \langle \psi_k, \psi_i \rangle_X = \delta_{ki},    (1)

where δ denotes the Kronecker delta, δ_{ki} = 1 for k = i and 0 otherwise. A solution to (1) is called a rank-ℓ POD basis.

The minimization problem (1) is equivalent to maximizing the averaged projection of the snapshots y(t), t ∈ T, onto the reduced space span Ψℓ:

    \max_{\psi_1,\dots,\psi_\ell \in X} \sum_{i=1}^{\ell} \int_T \langle y(t), \psi_i \rangle_X^2 \, dt
    \quad \text{s.t.} \quad \langle \psi_k, \psi_i \rangle_X = \delta_{ki};    (2)

in particular, a given rank-ℓ POD basis Ψℓ can be expanded to a rank-(ℓ+1) basis by a solution ψ_{ℓ+1} of

    \max_{\psi \in (\operatorname{span} \Psi^\ell)^\perp,\ \|\psi\|_X = 1} \int_T \langle y(t), \psi \rangle_X^2 \, dt,    (3)

choosing Ψ^{ℓ+1} = Ψℓ ∪ {ψ_{ℓ+1}}; it is not required to calculate ℓ + 1 new basis elements.
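In the discrete analogue (X = Rⁿ with the Euclidean inner product, finitely many snapshots collected as columns of a matrix Y), problem (1) is solved by the leading left singular vectors of Y. A minimal NumPy sketch, assuming uniform time weights; the matrix Y below is illustrative toy data, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal((6, 40))   # n = 6 spatial dofs, m = 40 snapshots as columns

# The leading left singular vectors of Y solve the discrete analogue of (1)/(2).
U, S, _ = np.linalg.svd(Y, full_matrices=False)

def reconstruction_error(ell):
    """sum_i ||y_i - sum_k <y_i, psi_k> psi_k||^2 for the rank-ell basis."""
    Psi = U[:, :ell]
    return np.sum((Y - Psi @ (Psi.T @ Y)) ** 2)

# Orthonormality constraint of (1): Psi^T Psi = I.
assert np.allclose(U[:, :4].T @ U[:, :4], np.eye(4))
# Enlarging the basis as in (3) can only decrease the reconstruction error.
assert reconstruction_error(3) <= reconstruction_error(2) + 1e-12
```

The nesting property (3) appears here as the fact that the rank-(ℓ+1) basis is the rank-ℓ basis plus one further column of U.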
Basis construction

Since problem (2) is a constrained optimization problem in a Banach space, we apply the Lagrange calculus: we define the Lagrange function L : X^ℓ × R^{ℓ×ℓ} → R by

    L(\psi, \lambda) = \sum_{i=1}^{\ell} \int_T \langle y(t), \psi_i \rangle_X^2 \, dt
    + \sum_{k,i=1}^{\ell} \lambda_{ki} \big( \langle \psi_k, \psi_i \rangle_X - \delta_{ki} \big)    (4)

and obtain the necessary first-order optimality conditions

    \int_T \langle y(t), \psi_i \rangle_X \, y(t) \, dt = \lambda_{ii} \, \psi_i \qquad (i = 1, \dots, \ell):    (5)

the POD basis elements solve an eigenvalue problem [4].

Furthermore, writing λ_i = λ_{ii}, the minimal value of (1), i.e. the complement of the maximal value of (2), is

    \int_T \Big\| y(t) - \sum_{i=1}^{\ell} \langle y(t), \psi_i \rangle_X \psi_i \Big\|_X^2 \, dt
    = \int_T \|y(t)\|_X^2 \, dt - \sum_{i=1}^{\ell} \int_T \langle y(t), \psi_i \rangle_X^2 \, dt
    = \sum_{i > \ell} \lambda_i,    (6)

so we look for eigenvectors corresponding to the ℓ largest eigenvalues in (5). The eigenvalue problem (5) admits a solution in X^ℓ × R^ℓ since the operator

    R(y) : X \to X, \qquad R(y)\varphi = \int_T \langle y(t), \varphi \rangle_X \, y(t) \, dt \qquad (\varphi \in X)    (7)

is nonnegative, selfadjoint and compact.
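In the same discrete setting, R(y) from (7) becomes the matrix R = Y Yᵀ, and (5) states that the POD basis consists of its dominant eigenvectors; the retained part of objective (2) is then the sum of the ℓ largest eigenvalues. A NumPy check on toy data (uniform weights assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((5, 30))        # snapshots as columns
R = Y @ Y.T                             # discrete analogue of the operator R(y) in (7)

lam, Psi = np.linalg.eigh(R)            # eigh returns ascending eigenvalues
lam, Psi = lam[::-1], Psi[:, ::-1]      # reorder: largest eigenvalues first

ell = 2
# (5): each POD vector is an eigenvector, R psi_i = lambda_i psi_i.
assert np.allclose(R @ Psi[:, :ell], Psi[:, :ell] * lam[:ell])
# Objective (2) at the optimum: sum_i sum_t <y(t), psi_i>^2 = sum of top eigenvalues.
retained = sum(np.sum((Y.T @ Psi[:, i]) ** 2) for i in range(ell))
assert np.isclose(retained, np.sum(lam[:ell]))
```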
Projection error

Let P^ℓ_X : X → span Ψℓ denote the orthogonal projection onto the POD space, i.e.

    P^\ell_X \varphi = \arg\min_{\tilde\psi \in \operatorname{span} \Psi^\ell} \|\varphi - \tilde\psi\|_X^2
    = \sum_{i=1}^{\ell} \langle \varphi, \psi_i \rangle_X \, \psi_i    (8)

for each φ ∈ X. Then we have a formula for the POD projection error [10]:

    \int_T \| y(t) - P^\ell_X y(t) \|_X^2 \, dt = \sum_{i=\ell+1}^{\infty} \lambda_i \to 0 \quad (\ell \to \infty).    (9)

Let V, H be Hilbert spaces with dense and continuous embedding V ↪ H and let y ∈ L²(T, V). Choose X = H; then the V-orthogonal projection P^ℓ_V : V → span Ψℓ has the following representation in the H-orthonormal basis Ψℓ:

    P^\ell_V \varphi = \arg\min_{\tilde\varphi \in \operatorname{span} \Psi^\ell} \|\varphi - \tilde\varphi\|_V^2
    = \sum_{k,i=1}^{\ell} \big( M_V(\psi)^{-1} \big)_{ik} \langle \varphi, \psi_k \rangle_V \, \psi_i,    (10)

where M_V(ψ) ∈ R^{ℓ×ℓ} is the weights matrix M_V(ψ)_{ik} = ⟨ψ_i, ψ_k⟩_V. We get [15]

    \int_T \| y(t) - P^\ell_V y(t) \|_V^2 \, dt = \sum_{i=\ell+1}^{\infty} \lambda_i \, \| \psi_i - P^\ell_V \psi_i \|_V^2 \to 0 \quad (\ell \to \infty).    (11)
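Representation (10) can be verified in coordinates. In the sketch below the SPD matrix MV_full is an illustrative stand-in for the V-inner product, the columns of Psi are H-orthonormal (standard inner product), and the residual of the computed projection is V-orthogonal to span Ψℓ, as it must be:

```python
import numpy as np

rng = np.random.default_rng(2)
n, ell = 5, 2
B = rng.standard_normal((n, n))
MV_full = B @ B.T + n * np.eye(n)                      # SPD stand-in for the V-inner product
Psi, _ = np.linalg.qr(rng.standard_normal((n, ell)))   # H-orthonormal basis columns

phi = rng.standard_normal(n)
M_V = Psi.T @ MV_full @ Psi                            # weights matrix M_V(psi)_{ik} = <psi_i, psi_k>_V
coeff = np.linalg.solve(M_V, Psi.T @ MV_full @ phi)
proj = Psi @ coeff                                     # P^l_V phi as in (10)

# The residual is V-orthogonal to span(Psi): <phi - proj, psi_k>_V = 0 for all k.
assert np.allclose(Psi.T @ MV_full @ (phi - proj), 0)
```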
Projection error

Let again X = H. Then we have the error formula [1]

    \int_T \| y(t) - P^\ell_H y(t) \|_V^2 \, dt = \sum_{i=\ell+1}^{\infty} \lambda_i \, \|\psi_i\|_V^2 \to 0 \quad (\ell \to \infty).    (12)

The estimates (11) and (12) allow us to bound the projection errors in V although the POD basis elements are orthonormal in H. This will be useful when POD is applied to partial differential equations. If y ∈ H¹(T, V), then not only y but also its time derivative ẏ can be approximated, as we will see later.

Method of snapshots

We define the multiplication operator Q(y) : L²(T, R) → X and its adjoint Q(y)* : X → L²(T, R),

    Q(y)\varphi = \int_T \varphi(t) \, y(t) \, dt, \qquad (Q(y)^* \varphi)(t) = \langle \varphi, y(t) \rangle_X.    (13)

Then the composition satisfies R(y) = Q(y)Q(y)* : X → X. Let K(y) = Q(y)* Q(y) : L²(T, R) → L²(T, R). Since R(y) and K(y) have the same eigenvalues with the same multiplicities (except possibly 0), a POD basis is also given by a spectral decomposition (λ_i, φ_i)_{i=1,…,ℓ} ⊂ L²(T, R) × R of

    (K(y)\varphi)(t) = \int_T \varphi(s) \, \langle y(s), y(t) \rangle_X \, ds,    (14)

choosing ψ_i = (1/√λ_i) Q(y)φ_i ∈ X.
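The point of the method of snapshots is that K(y) acts on a space whose discrete dimension is the number of snapshots m, which may be far smaller than the spatial dimension n. A NumPy sketch with toy data, uniform weights assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
Y = rng.standard_normal((50, 8))        # n = 50 dofs, only m = 8 snapshots

R = Y @ Y.T                             # 50 x 50 operator matrix
K = Y.T @ Y                             # 8 x 8: the cheap "method of snapshots" matrix

lamR = np.sort(np.linalg.eigvalsh(R))[::-1][:8]
lamK, Phi = np.linalg.eigh(K)
lamK, Phi = lamK[::-1], Phi[:, ::-1]

# R and K share their nonzero eigenvalues (with multiplicities).
assert np.allclose(lamR, lamK)
# psi = Q(y) phi / sqrt(lambda) is a unit eigenvector of R.
psi = Y @ Phi[:, 0] / np.sqrt(lamK[0])
assert np.isclose(np.linalg.norm(psi), 1.0)
assert np.allclose(R @ psi, lamK[0] * psi)
```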
Discretization

Let Xⁿ ⊂ X be a finite-dimensional subspace of X spanned by linearly independent elements ϕ₁, …, ϕ_n ∈ V with mass matrix M(ϕ) ∈ R^{n×n}, M(ϕ)_{ij} = ⟨ϕ_i, ϕ_j⟩_X. Let {t₁, …, t_m} ⊂ T be a time grid. We replace y ∈ L²(T, V) by a snapshot matrix Y ∈ R^{n×m} such that y(t_i) ≈ ∑_{j=1}^n Y_{ji} ϕ_j and consider

    \min_{\psi_1,\dots,\psi_\ell \in \mathbb{R}^n} \sum_{i=1}^{m} \alpha_i \Big\| y_i - \sum_{k=1}^{\ell} \langle y_i, \psi_k \rangle_{\mathbb{R}^n_\varphi} \psi_k \Big\|_{\mathbb{R}^n_\varphi}^2
    \quad \text{s.t.} \quad \langle \psi_k, \psi_j \rangle_{\mathbb{R}^n_\varphi} = \delta_{kj}    (15)

with the weighted scalar product ⟨u, v⟩_{Rⁿ_ϕ} = ⟨u, M(ϕ)v⟩_{Rⁿ} and time weights α_i > 0, where y_i denotes the i-th column of Y. Then a POD basis can be calculated by a decomposition of one of the matrices

    R(Y) = Y M(\alpha) Y^\top M(\varphi) \in \mathbb{R}^{n \times n}, \qquad
    K(Y) = Y^\top M(\varphi) Y M(\alpha) \in \mathbb{R}^{m \times m},    (16)

where M(α) = diag(α₁, …, α_m) [4].

1. Let (ψ̃_i, λ_i) be an eigendecomposition of the symmetrized matrix R̃(Y) = M(ϕ)^{1/2} Y M(α) Yᵀ M(ϕ)^{1/2}; then appropriate POD elements are ψ_i = M(ϕ)^{−1/2} ψ̃_i ∈ Rⁿ. We require the decomposition of an n×n matrix, the root of the mass matrix and solution steps for the transformation ψ̃ ↦ ψ.

2. Let (ψ̃_i, λ_i) be an eigendecomposition of the symmetrized matrix K̃(Y) = M(α)^{1/2} Yᵀ M(ϕ) Y M(α)^{1/2}; then appropriate POD elements are ψ_i = λ_i^{−1/2} Y M(α)^{1/2} ψ̃_i: an m×m matrix has to be decomposed, but no mass-matrix roots or solving steps are needed.

3. A singular value decomposition (u_i, σ_i, ṽ_i) of M(ϕ)^{1/2} Y M(α)^{1/2} is costly, but more robust; appropriate POD elements are ψ_i = M(ϕ)^{−1/2} u_i, and the eigenvalues corresponding to the POD elements ψ_i are λ_i = σ_i².
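Variants 2 and 3 can be cross-checked numerically. In the sketch below the SPD matrix W is an illustrative stand-in for the mass matrix M(ϕ) and alpha for the time weights (toy data, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, ell = 12, 7, 3
Y = rng.standard_normal((n, m))
B = rng.standard_normal((n, n))
W = B @ B.T + n * np.eye(n)             # SPD stand-in for the mass matrix M(phi)
alpha = rng.uniform(0.5, 1.5, m)        # positive time weights

# Variant 2: decompose the symmetrized m x m matrix K~(Y).
D = np.diag(np.sqrt(alpha))             # M(alpha)^{1/2}
K = D @ Y.T @ W @ Y @ D
lam, Phi = np.linalg.eigh(K)
lam, Phi = lam[::-1], Phi[:, ::-1]      # largest eigenvalues first
Psi = Y @ D @ Phi[:, :ell] / np.sqrt(lam[:ell])

# The POD vectors are orthonormal in the weighted inner product <u, W v>.
assert np.allclose(Psi.T @ W @ Psi, np.eye(ell))

# Variant 3: SVD of W^{1/2} Y M(alpha)^{1/2}; eigenvalues are squared singular values.
wvals, wvecs = np.linalg.eigh(W)
Wroot = wvecs @ np.diag(np.sqrt(wvals)) @ wvecs.T
S = np.linalg.svd(Wroot @ Y @ D, compute_uv=False)
assert np.allclose(S ** 2, lam)
```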
Discretization

    %
    % POD determines a weighted POD basis by solving the eigenvalue problem
    %
    %   R(Y)*Psi = Y*diag(T)*Y'*W*Psi = lambda*Psi.
    %
    % Input:
    %   Y      ... nxm snapshot matrix
    %   l      ... 1x1 pod basis rank
    %   W      ... nxn space weights
    %   T      ... mx1 time weights
    %
    % Output:
    %   Psi    ... nxl pod basis
    %   Lambda ... lx1 eigenvalues
    %
    function [Psi,Lambda] = pod(Y,l,W,T)
      % Build up the pod operator R(Y)
      Alpha = spdiags(T,0,size(T,1),size(T,1));      % Alpha  ... mxm
      R = Y*Alpha*Y'*W;                              % R      ... nxn
      % Solve the eigenvalue problem by EIG
      [Psi,Lambda] = eig(R);
      [Lambda,Ord] = sort(diag(Lambda),'descend');   % Lambda ... nx1
      Lambda = Lambda(1:l,1);                        % Lambda ... lx1
      Psi = Psi(:,Ord);                              % Psi    ... nxn
      Psi = Psi(:,1:l);                              % Psi    ... nxl
    end

Application: Filtering and compression of frogs

[image slides]
Application: Filtering and compression of random data

[image slides]
Application: Filtering and compression of dynamical flows

[image slides]
Application: Frogs, randoms and dynamics

1. Photo compression with POD works quite well: from 10% of the original data onwards, the visible approximation errors vanish.
2. For random data, POD is completely useless (and it should be: it recognizes structures only if they are there). Taking 95% of the original data still does not lead to appropriate results, although the original data storage is exceeded by 150%.
3. Dynamical flows in nice settings (such as diffusion-dominated processes with smooth data) are reconstructed perfectly: in the example above, we used just two POD basis functions.

Reduced order modeling

Let Ψℓ = {ψ₁, …, ψℓ} be an orthonormal system in X ∈ {V, H}. In the following, we interpret the parabolic partial differential equation

    \dot y(t) + A y(t) = f(t) \ \text{in } V', \qquad y(0) = y_0 \ \text{in } H    (17)

as a variational problem in the reduced space span Ψℓ ⊂ V:

    \langle \dot y(t), \varphi \rangle_{V',V} + a(y(t), \varphi) = \langle f(t), \varphi \rangle_{V',V}, \qquad
    \langle y(0), \varphi \rangle_H = \langle y_0, \varphi \rangle_H    (18)

for all φ ∈ span Ψℓ. The solution to (18) has the form

    y^\ell \in H^1(T, V), \qquad y^\ell(t) = \sum_{i=1}^{\ell} \mathrm{y}_i(t) \, \psi_i.    (19)
Reduced order modeling

The coefficient function y ∈ H¹(T, R^ℓ) is given by the system of ordinary differential equations

    M(\psi) \dot{\mathrm{y}}(t) + A(\psi) \mathrm{y}(t) = \mathrm{f}(\psi; t), \qquad M(\psi) \mathrm{y}(0) = \mathrm{y}_0(\psi);    (20)

M(ψ), A(ψ) ∈ R^{ℓ×ℓ} are defined as M(ψ)_{ki} = ⟨ψ_k, ψ_i⟩_H, A(ψ)_{ki} = a(ψ_k, ψ_i), and the reduced data functions f(ψ) ∈ L²(T, R^ℓ) and y₀(ψ) ∈ R^ℓ are given by f(ψ; t)_i = ⟨f(t), ψ_i⟩_{V′,V}, y₀(ψ)_i = ⟨y₀, ψ_i⟩_H. (20) admits the unique solution

    \mathrm{y}(t) = e^{-t M(\psi)^{-1} A(\psi)} \, \mathrm{y}(0)
    + \int_0^t e^{(\tau - t) M(\psi)^{-1} A(\psi)} \, M(\psi)^{-1} \mathrm{f}(\psi; \tau) \, d\tau.    (21)

Model reduction error

Assume y₀ = 0. Let X = V, let (ψ₁, …, ψℓ) be a POD basis satisfying the eigenvalue problem

    \int_T \langle \psi_i, y(t) \rangle_V \, y(t) \, dt = \lambda_i \psi_i

and let y ∈ H¹(T, R^ℓ) be the solution to the reduced problem (20). Then there exists a constant C > 0, depending only on the final time and the geometric data, such that the ROM error can be estimated by

    \Big\| y - \sum_{i=1}^{\ell} \mathrm{y}_i \psi_i \Big\|^2_{L^2(T,V) \cap H^1(T,V')}
    \le C \Big( \sum_{i=\ell+1}^{\infty} \lambda_i + \| \dot y - P^\ell_V \dot y \|^2_{L^2(T,V)} \Big).    (22)
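For the special case f ≡ 0 and M(ψ) = I, formula (21) reduces to y(t) = e^{−tA(ψ)} y(0). With a symmetric positive definite toy matrix A the matrix exponential is available from an eigendecomposition; the NumPy sketch below (illustrative data only) cross-checks it against a fine explicit Euler integration of (20):

```python
import numpy as np

rng = np.random.default_rng(5)
ell = 4
B = rng.standard_normal((ell, ell))
A = B @ B.T + ell * np.eye(ell)         # SPD stand-in for the reduced stiffness A(psi)
y0 = rng.standard_normal(ell)
t_end = 0.5

# Variation-of-constants solution (21) with f = 0, M = I: y(t) = expm(-t A) y0,
# computed via the eigendecomposition A = V diag(w) V^T.
w, V = np.linalg.eigh(A)
y_exact = V @ (np.exp(-t_end * w) * (V.T @ y0))

# Cross-check: explicit Euler with a fine step for y' = -A y.
steps = 100_000
dt = t_end / steps
y = y0.copy()
for _ in range(steps):
    y = y - dt * (A @ y)
assert np.allclose(y, y_exact, atol=1e-4)
```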
Model reduction error

Let X = V, let (ψ₁, …, ψℓ) be a POD basis satisfying the eigenvalue problem

    \int_T \langle \psi_i, y(t) \rangle_V \, y(t) \, dt + \int_T \langle \psi_i, \dot y(t) \rangle_V \, \dot y(t) \, dt = \lambda_i \psi_i

and let y ∈ H¹(T, R^ℓ) be the solution to the reduced problem (20). Then there exists a constant C > 0, depending only on the final time and the geometric data, such that the ROM error can be estimated by

    \Big\| y - \sum_{i=1}^{\ell} \mathrm{y}_i \psi_i \Big\|^2_{L^2(T,V) \cap H^1(T,V')} \le C \sum_{i=\ell+1}^{\infty} \lambda_i.    (23)

Let X = H, let (ψ₁, …, ψℓ) be a POD basis satisfying the eigenvalue problem

    \int_T \langle \psi_i, y(t) \rangle_H \, y(t) \, dt + \int_T \langle \psi_i, \dot y(t) \rangle_H \, \dot y(t) \, dt = \lambda_i \psi_i

and let y ∈ H¹(T, R^ℓ) be the solution to the reduced problem (20). Then there exists a constant C > 0, depending only on the final time and the geometric data, such that the ROM error can be estimated by

    \Big\| y - \sum_{i=1}^{\ell} \mathrm{y}_i \psi_i \Big\|^2_{L^2(T,V) \cap H^1(T,V')}
    \le C \sum_{i=\ell+1}^{\infty} \lambda_i \, \| \psi_i - P^\ell_V \psi_i \|_V^2.    (24)
Model reduction error

Let X = V, let (ψ₁, …, ψℓ) be a POD basis satisfying the eigenvalue problem

    \int_T \langle \psi_i, y(t) \rangle_V \, y(t) \, dt = \lambda_i \psi_i

and let y ∈ H¹(T, R^ℓ) be the solution to the reduced problem (20). Then there exists a constant C > 0, depending only on the final time and the geometric data, such that the ROM error can be estimated by

    \Big\| y - \sum_{i=1}^{\ell} \mathrm{y}_i \psi_i \Big\|^2_{L^2(T,V)}
    \le C \sum_{i=\ell+1}^{\infty} \lambda_i \, \|\psi_i\|_V^2.    (25)

Motivation

1. Combination of POD with nonlinear PDE solvers such as Sequential Quadratic Programming [2], the Trust Region Method [14] or the Primal-Dual Active Set Method [5].
2. How does the reduction error react if the state y which builds up R(y) corresponds to a different source term f than the reduced system [6]?
3. In this case, the presented a-priori estimates are no longer valid. The design of efficient a-posteriori error bounds [16], especially for nonlinear equations [7], is work in progress.
4. The a-priori bounds [10] and convergence rates [9] remain available if an appropriate POD basis update strategy (OS-POD) [11], [3] is used.
5. Applications to optimal control [5], parameter identification [12] and inverse problems [13].
6. Combination of POD model reduction and the Greedy algorithm in the reduced basis context [8].
References

[1] Chapelle, D., Gariah, A. & Sainte-Marie, J.: Galerkin approximation with Proper Orthogonal Decomposition: new error estimates and illustrative examples. ESAIM: M2AN, vol. 46, no. 4.
[2] Gräßle, C.: POD based inexact SQP methods for optimal control problems governed by a semilinear heat equation. Master's thesis, Universität Konstanz.
[3] Grimm, E., Gubisch, M. & Volkwein, S.: A-Posteriori Error Analysis and OS-POD for Optimal Control. Comp. Eng., vol. 3.
[4] Gubisch, M. & Volkwein, S.: POD Reduced-Order Modelling for PDE Constrained Optimization. Proceedings Model Reduction and Approximation (Luminy), submitted, Oct.
[5] Gubisch, M. & Volkwein, S.: POD a-posteriori error analysis for optimal control problems with mixed constraints. Comp. Opt. Appl., vol. 58, no. 3.
[6] Hinze, M. & Volkwein, S.: Error estimates for abstract linear-quadratic optimal control problems using POD. Comput. Optim. Appl., vol. 39.
[7] Kammann, E., Tröltzsch, F. & Volkwein, S.: A method of a-posteriori error estimation with application to proper orthogonal decomposition. ESAIM: M2AN, vol. 47, no. 2.
[8] Kärcher, M. & Grepl, M. A.: A posteriori error estimation for reduced order solutions of parametrized parabolic optimal control problems. Math. Mod. Num., vol. 48, no. 6.
[9] Kunisch, K. & Müller, M.: Uniform convergence of the POD method and applications to optimal control. Disc. Cont. Dyn. Syst., vol. 35, no. 9.
[10] Kunisch, K. & Volkwein, S.: Galerkin proper orthogonal decomposition methods for parabolic problems. Numer. Math., vol. 90, no. 1.
[11] Kunisch, K. & Volkwein, S.: Proper orthogonal decomposition for optimality systems. ESAIM: M2AN, vol. 42.
[12] Lass, O. & Volkwein, S.: Parameter identification for nonlinear elliptic-parabolic systems with application in lithium-ion battery modeling. Comp. Opt. Appl., vol. 62.
[13] Ostrowsky, Z., Białecki, R. & Kassab, A.: Advances in Application of POD in Inverse Problems. Proc. Inv. Prob. Eng., vol. 5.
[14] Rogg, S.: Trust Region POD for Optimal Boundary Control of a Semilinear Heat Equation. Master's thesis, Universität Konstanz.
[15] Singler, J. R.: New POD error expressions, error bounds, and asymptotic results for reduced order models. SIAM J. Numer. Anal., vol. 52, no. 2.
[16] Tröltzsch, F. & Volkwein, S.: POD a-posteriori error estimates for linear-quadratic optimal control problems. Comput. Optim. Appl., vol. 44.
Subspace Analysis and Optimization for AAM Based Face Alignment
Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China [email protected] Stan Z. Li Microsoft
POLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS
POLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS N. ROBIDOUX Abstract. We show that, given a histogram with n bins possibly non-contiguous or consisting
Support Vector Machines Explained
March 1, 2009 Support Vector Machines Explained Tristan Fletcher www.cs.ucl.ac.uk/staff/t.fletcher/ Introduction This document has been written in an attempt to make the Support Vector Machines (SVM),
PHY4604 Introduction to Quantum Mechanics Fall 2004 Practice Test 3 November 22, 2004
PHY464 Introduction to Quantum Mechanics Fall 4 Practice Test 3 November, 4 These problems are similar but not identical to the actual test. One or two parts will actually show up.. Short answer. (a) Recall
Lecture 10. Finite difference and finite element methods. Option pricing Sensitivity analysis Numerical examples
Finite difference and finite element methods Lecture 10 Sensitivities and Greeks Key task in financial engineering: fast and accurate calculation of sensitivities of market models with respect to model
Derivative Free Optimization
Department of Mathematics Derivative Free Optimization M.J.D. Powell LiTH-MAT-R--2014/02--SE Department of Mathematics Linköping University S-581 83 Linköping, Sweden. Three lectures 1 on Derivative Free
Lecture 7: Finding Lyapunov Functions 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1
1. Let P be the space of all polynomials (of one real variable and with real coefficients) with the norm
Uppsala Universitet Matematiska Institutionen Andreas Strömbergsson Prov i matematik Funktionalanalys Kurs: F3B, F4Sy, NVP 005-06-15 Skrivtid: 9 14 Tillåtna hjälpmedel: Manuella skrivdon, Kreyszigs bok
Linear Algebra: Determinants, Inverses, Rank
D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of
Variance Reduction. Pricing American Options. Monte Carlo Option Pricing. Delta and Common Random Numbers
Variance Reduction The statistical efficiency of Monte Carlo simulation can be measured by the variance of its output If this variance can be lowered without changing the expected value, fewer replications
MATHEMATICAL METHODS OF STATISTICS
MATHEMATICAL METHODS OF STATISTICS By HARALD CRAMER TROFESSOK IN THE UNIVERSITY OF STOCKHOLM Princeton PRINCETON UNIVERSITY PRESS 1946 TABLE OF CONTENTS. First Part. MATHEMATICAL INTRODUCTION. CHAPTERS
Some Problems of Second-Order Rational Difference Equations with Quadratic Terms
International Journal of Difference Equations ISSN 0973-6069, Volume 9, Number 1, pp. 11 21 (2014) http://campus.mst.edu/ijde Some Problems of Second-Order Rational Difference Equations with Quadratic
2014-2015 The Master s Degree with Thesis Course Descriptions in Industrial Engineering
2014-2015 The Master s Degree with Thesis Course Descriptions in Industrial Engineering Compulsory Courses IENG540 Optimization Models and Algorithms In the course important deterministic optimization
Finite dimensional C -algebras
Finite dimensional C -algebras S. Sundar September 14, 2012 Throughout H, K stand for finite dimensional Hilbert spaces. 1 Spectral theorem for self-adjoint opertors Let A B(H) and let {ξ 1, ξ 2,, ξ n
Fitting Subject-specific Curves to Grouped Longitudinal Data
Fitting Subject-specific Curves to Grouped Longitudinal Data Djeundje, Viani Heriot-Watt University, Department of Actuarial Mathematics & Statistics Edinburgh, EH14 4AS, UK E-mail: [email protected] Currie,
Introduction to Matrix Algebra
Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary
Bag of Pursuits and Neural Gas for Improved Sparse Coding
Bag of Pursuits and Neural Gas for Improved Sparse Coding Kai Labusch, Erhardt Barth, and Thomas Martinetz University of Lübec Institute for Neuro- and Bioinformatics Ratzeburger Allee 6 23562 Lübec, Germany
1 Variational calculation of a 1D bound state
TEORETISK FYSIK, KTH TENTAMEN I KVANTMEKANIK FÖRDJUPNINGSKURS EXAMINATION IN ADVANCED QUANTUM MECHAN- ICS Kvantmekanik fördjupningskurs SI38 för F4 Thursday December, 7, 8. 13. Write on each page: Name,
CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES. From Exploratory Factor Analysis Ledyard R Tucker and Robert C.
CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES From Exploratory Factor Analysis Ledyard R Tucker and Robert C MacCallum 1997 180 CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES In
CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation
Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in
Numerical Methods for Differential Equations
Numerical Methods for Differential Equations Course objectives and preliminaries Gustaf Söderlind and Carmen Arévalo Numerical Analysis, Lund University Textbooks: A First Course in the Numerical Analysis
SIXTY STUDY QUESTIONS TO THE COURSE NUMERISK BEHANDLING AV DIFFERENTIALEKVATIONER I
Lennart Edsberg, Nada, KTH Autumn 2008 SIXTY STUDY QUESTIONS TO THE COURSE NUMERISK BEHANDLING AV DIFFERENTIALEKVATIONER I Parameter values and functions occurring in the questions belowwill be exchanged
Domain Decomposition Methods. Partial Differential Equations
Domain Decomposition Methods for Partial Differential Equations ALFIO QUARTERONI Professor ofnumericalanalysis, Politecnico di Milano, Italy, and Ecole Polytechnique Federale de Lausanne, Switzerland ALBERTO
Duality of linear conic problems
Duality of linear conic problems Alexander Shapiro and Arkadi Nemirovski Abstract It is well known that the optimal values of a linear programming problem and its dual are equal to each other if at least
