Reduced Basis Method for the parametrized Electric Field Integral Equation (EFIE)
1 Reduced Basis Method for the parametrized Electric Field Integral Equation (EFIE). Matrix Computations and Scientific Computing Seminar, University of California, Berkeley. Benjamin Stamm, Department of Mathematics and LBL, University of California, Berkeley
2 Project description. In collaboration with: Prof. J. Hesthaven and Prof. Y. Maday (Brown U. and Paris VI); Prof. R. Nochetto (U. of Maryland); Dr. M'Barek Fares (CERFACS, Toulouse); J. Eftang (MIT), Prof. A. Patera (MIT), Prof. M. Grepl (RWTH Aachen); Prof. M. Ganesh (Colorado School of Mines). From mathematics to applied computations, with many practical applications. Sponsored by SNF grant PBELP, Brown University, UC Berkeley.
3 Outline: introduction to parametrized scattering problems; the Reduced Basis Method; the Empirical Interpolation Method; numerical results; first results on multi-object scattering; conclusions.
4 Introduction to parametrized Electromagnetic scattering
7 Parametrized Electromagnetic Scattering (time-harmonic ansatz). Incident field $E^i(x;\mu) = p\, e^{ikx\cdot\hat{k}(\theta,\phi)}$ and scattered field $E^s(x;\mu)$. Domain $\Omega = \mathbb{R}^3 \setminus \overline{\Omega}_i$, where $\Omega_i \subset \mathbb{R}^3$ is a perfect conductor with boundary $\Gamma = \partial\Omega_i$ and outward normal $n$. Here $\mu = (k,\theta,\phi,p) \in D \subset \mathbb{R}^7$ is a vector of parameters: 1) $k$: wave number; 2) $\hat{k}(\theta,\phi)$: wave direction in spherical coordinates; 3) $p$: polarization (complex, lying in the plane perpendicular to $\hat{k}(\theta,\phi)$).
8 Governing equations (not parametrized for the sake of simplicity). Assume that $\Omega$ is a homogeneous medium with magnetic permeability $\mu$ and electric permittivity $\varepsilon$. Then the electric field $E = E^i + E^s \in H(\mathrm{curl}, \Omega)$ satisfies: $\mathrm{curl}\,\mathrm{curl}\, E - k^2 E = 0$ in $\Omega$ (Maxwell); $E \times n = 0$ on $\Gamma$ (boundary condition); $\mathrm{curl}\, E^s(x) \times \tfrac{x}{|x|} - ik E^s(x) = O\!\big(\tfrac{1}{|x|}\big)$ as $|x| \to \infty$ (Silver-Müller radiation condition). The boundary condition is equivalent to $\gamma_t E = 0$, where $\gamma_t$ denotes the tangential trace operator on the surface $\Gamma$: $\gamma_t E = n \times (E \times n)$.
10 Integral representation. Stratton-Chu representation formula: $E(x) = E^i(x) + ikZ\,(T_k u)(x)$, $x \in \Omega$, where $T_k$ is the single layer potential, $u$ the electric current on the surface, $Z = \sqrt{\mu/\varepsilon}$ the impedance and $k = \omega\sqrt{\mu\varepsilon}$ the wave number. Applying the tangential trace operator and invoking the boundary condition yields the strong form of the Electric Field Integral Equation (EFIE): find $u \in V$ s.t. $ikZ\,\gamma_t(T_k u)(x) = -\gamma_t E^i(x)$, $x \in \Gamma$, for some appropriate complex functional space $V$ on $\Gamma$. Model reduction: 3d $\to$ 2d problem.
14 Variational formulation of the EFIE (also called the Rumsey principle). Multiplying by a test function $v \in V$ and taking the scalar product yields: find $u \in V$ s.t. $\langle ikZ\,\gamma_t(T_k u), v\rangle_\Gamma = -\langle \gamma_t E^i, v\rangle_\Gamma$, $\forall v \in V$. After integration by parts and introducing the parameter dependence we get: for any fixed $\mu \in D$, find $u(\mu) \in V$ s.t. $a(u(\mu), v; \mu) = f(v; \mu)$, $\forall v \in V$, with $a(u, v; \mu) = ikZ \int_\Gamma \int_\Gamma G_k(x,y) \Big( u(x) \cdot v(y) - \tfrac{1}{k^2}\, \mathrm{div}_{\Gamma,x} u(x)\, \mathrm{div}_{\Gamma,y} v(y) \Big)\, dx\, dy$ and $f(v; \mu) = -\int_\Gamma \gamma_t E^i(x; \mu) \cdot v(x)\, dx$. Remarks: 1) the sesquilinear form $a(\cdot,\cdot;\mu)$ is symmetric but not coercive [Colton, Kress 1992], [Nédélec 2001]; 2) $G_k(x,y) = \frac{e^{ik|x-y|}}{4\pi|x-y|}$ is the fundamental solution of the Helmholtz operator $\Delta + k^2$ and depends on the parameter $k$.
15 Parametrized EFIE and its discretization. Galerkin approach: replace the continuous space $V$ by a finite-dimensional subspace $V_h$. For any fixed parameter $\mu \in D$, find $u_h(\mu) \in V_h$ such that $a(u_h(\mu), v_h; \mu) = f(v_h; \mu)$ $\forall v_h \in V_h$. (1) For $V_h$ we use the lowest-order (complex) Raviart-Thomas space $RT_0$, also called the Rao-Wilton-Glisson (RWG) basis in the electromagnetics community. This is a Boundary Element Method (BEM). In practice the code CESC (CERFACS Electromagnetic Solver Code) is used. [Bendali 1984], [Schwab, Hiptmair 2002], [Buffa et al. 2002, 2003], [Christiansen 2004]
17 Example of parametrized solution. Incident plane wave parametrized by $E^i(x; k) = p\, e^{ikx \cdot \hat{k}(\pi/4, 0)}$.
18 Output functional: Radar Cross Section (RCS). Describes the pattern/energy of the electric field at infinity; it is a functional of the current on the body: $A(u, \hat{d}) = \frac{ikZ}{4\pi} \int_\Gamma \big(u(x) \times \hat{d}\big) \times \hat{d}\; e^{ikx \cdot \hat{d}}\, dx$ and $\mathrm{RCS}(u, \hat{d}) = 10 \log_{10} \frac{|A(u, \hat{d})|^2}{|A(u, \hat{d}_0)|^2}$, where $u$ is the current on the surface, $\hat{d}$ a given directional unit vector and $\hat{d}_0$ a reference unit direction.
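As an aside, the RCS evaluation above can be sketched numerically. Everything below is illustrative assumption rather than data from the talk: the quadrature points, weights and current samples are random surrogates, and the sign convention of the phase follows the slide's formula.

```python
import numpy as np

# Illustrative sketch of evaluating the RCS output functional from a surface
# current; quadrature points xq, weights wq and current samples u are all
# surrogate data, and the phase sign convention is an assumption.
k, Z = 2.0, 1.0
rng = np.random.default_rng(3)
xq = rng.standard_normal((50, 3))            # quadrature points on Gamma
wq = np.full(50, 0.1)                        # quadrature weights
u = rng.standard_normal((50, 3)) + 0j        # sampled surface current

def far_field(d):
    """A(u, d) = ikZ/(4 pi) int_Gamma (u x d) x d e^{ik x.d} dx, discretized."""
    d = d / np.linalg.norm(d)
    phase = np.exp(1j * k * (xq @ d))
    tangential = np.cross(np.cross(u, d), d)     # (u x d) x d
    return 1j * k * Z / (4.0 * np.pi) * ((wq * phase) @ tangential)

def rcs(d, d0):
    """RCS(u, d) = 10 log10(|A(u, d)|^2 / |A(u, d0)|^2)."""
    return 10.0 * np.log10(np.linalg.norm(far_field(d)) ** 2
                           / np.linalg.norm(far_field(d0)) ** 2)

val = rcs(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
```

Note that the RCS vanishes by construction when the measurement direction equals the reference direction, which is a useful sanity check.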
19 Output functional: Radar Cross Section (RCS). Incident plane wave parametrized by $E^i(x; k) = p\, e^{ikx \cdot \hat{k}(\pi/4, 0)}$. Directional unit vector given by $\hat{d} = \hat{d}(\theta, \phi)$ with $\theta \in [0, \pi]$, $\phi = 0$.
20 Reduced basis method
21 Reduced basis method: Overview. Assume that we want to compute the scattered field for many different values of the parameters. Applying the BEM many times is too expensive and unnecessary, since the parametrized solutions often lie on a low-dimensional manifold. On the discrete level, assume: Assumption (existence of an ideal reduced basis): the set $\mathcal{M}_h := \{u_h(\mu) : \mu \in D\}$ is of low dimensionality, i.e. $\mathcal{M}_h \approx \mathrm{span}\{\zeta_i : i = 1, \dots, N\}$ up to a given tolerance Tol, for some properly chosen $\{\zeta_i\}_{i=1}^N$ and moderate $N$ ($N \ll \mathcal{N} = \dim(V_h)$). More precisely, we assume that Tol decreases exponentially as a function of $N$. The reduced basis method is a tool to construct an approximation $\{\xi_i\}_{i=1}^N$ of the ideal reduced basis.
23 Example: existence of an ideal reduced basis. POD: Proper Orthogonal Decomposition. Parameters: $(k, \theta) \in [1, 25] \times [0, \pi]$, $\phi$ fixed. For a fine discretization of $[1, 25] \times [0, \pi]$, compute the BEM solution for each parameter value; save all solutions in a matrix and compute its singular values. (Figure: singular values of the snapshot matrix for the given geometry, decaying rapidly: an indication of linear dependence of the solutions.) With 200 basis functions one can reach a precision of 1e-7!
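The POD experiment above can be sketched in a few lines. Here samples of a smooth parametrized function stand in for the BEM snapshots; the grids and tolerance are assumptions for illustration only.

```python
import numpy as np

# POD sketch: columns of S are "snapshots" u_h(mu) for a fine parameter
# sample. Surrogate snapshots (plane-wave-like traces) replace the BEM
# solutions used in the talk.
x = np.linspace(0.0, 1.0, 200)                        # surrogate spatial grid
mus = np.linspace(1.0, 5.0, 100)                      # fine parameter sample
S = np.array([np.exp(1j * mu * x) for mu in mus]).T   # snapshot matrix

# Fast decay of the singular values indicates that the solution manifold
# is of low dimensionality, i.e. that a small reduced basis exists.
sigma = np.linalg.svd(S, compute_uv=False)
N = int(np.sum(sigma / sigma[0] > 1e-7))              # modes needed for tol 1e-7
```

For this analytic family the count N comes out far below the number of snapshots, mirroring the rapid singular-value decay seen for the BEM solutions.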
25 Reduced basis method - Interpolation between snapshots. In practice we use $\{\xi_i\}_{i=1}^N$ as reduced basis, where 1) $\xi_i = u_h(\mu_i)$ are solutions of (1) with $\mu = \mu_i \in D$ (snapshots); 2) the sample $S_N = \{\mu_i\}_{i=1}^N$ is carefully chosen. This requires only $N$ BEM computations. The reduced basis approximation is the solution of: for $\mu \in D$, find $u_N(\mu) \in W_N$ such that $a(u_N(\mu), v_N; \mu) = f(v_N; \mu)$ $\forall v_N \in W_N$, (2) with $W_N = \mathrm{span}\{\xi_i : i = 1, \dots, N\}$. This is a parameter-dependent Ritz-type projection onto the reduced basis. Questions: 1) accuracy: how to choose $S_N$? 2) efficiency: how to solve (2) in a fast way? See [Rozza et al. 2008] for a review.
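The Ritz-type projection (2) turns a large truth system into a small one. A minimal sketch on a surrogate dense system (the random matrix below is an assumption standing in for the BEM system, not the EFIE operator itself):

```python
import numpy as np

# Galerkin projection of a "truth" system A u = f onto the span of N
# snapshots W: solve the small system (W^H A W) u_N = W^H f instead.
rng = np.random.default_rng(0)
Nh, N = 400, 5
snaps = rng.standard_normal((Nh, N)) + 1j * rng.standard_normal((Nh, N))
W, _ = np.linalg.qr(snaps)                       # orthonormalized reduced basis

A = rng.standard_normal((Nh, Nh)) + 10j * np.eye(Nh)   # surrogate A(mu)
f = rng.standard_normal(Nh).astype(complex)

A_N = W.conj().T @ A @ W                         # N x N reduced operator
f_N = W.conj().T @ f
u_N = W @ np.linalg.solve(A_N, f_N)              # reduced solution lifted to V_h
```

By construction the residual of the reduced solution is orthogonal to the reduced space (Galerkin orthogonality), while the full residual norm drives the greedy sampling described next.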
26 Reduced basis method - Overall strategy. Step 1: construct an approximation $W_N \subset V_h$ (reduced basis) to the solution space, $W_N \approx \mathrm{span}\{\mathcal{M}_h\}$ with $\mathcal{M}_h = \{u_h(\mu) : \mu \in D\}$. Step 2: project the exact solution $u(\mu)$ onto the reduced basis using a parameter-dependent Ritz projection $P_N(\mu): V \to W_N$; in other words, find $u_N(\mu) \in W_N$ such that $a(u_N(\mu), v_N; \mu) = f(v_N; \mu)$, $\forall v_N \in W_N$. (Schematic: the manifold of solutions $u(\mu)$ in $V$ and its projections $u_N(\mu)$ in $W_N$.)
29 Accuracy: choice of reduced basis (greedy algorithm). Here $\Xi \subset D$ is a finite point set of $D$. Offline initialization: choose an initial parameter value $\mu_1 \in \Xi$, set $S_1 = \{\mu_1\}$ and $W_0 = \emptyset$. Offline loop: for $N = 1, \dots, N_{max}$: 1) compute the truth solution $u_h(\mu_N)$, solution of (1) with $\mu = \mu_N$; 2) $W_N = W_{N-1} \oplus \mathrm{span}\{u_h(\mu_N)\}$; 3) for all $\mu \in \Xi$: i) compute $u_N(\mu) \in W_N$, solution of (2); ii) compute an a posteriori error estimate $\eta(\mu) \approx \|u_h(\mu) - u_N(\mu)\|$; 4) choose $\mu_{N+1} = \arg\max_{\mu \in \Xi} \eta(\mu)$; 5) $S_{N+1} = S_N \cup \{\mu_{N+1}\}$. Online loop: 1) for any new $\mu \in D$, compute $u_N(\mu) \in W_{N_{max}}$, solution of (2); 2) compute the output functional $\mathrm{RCS}(u_N(\mu), \hat{d})$.
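The offline greedy loop can be sketched as follows. Two assumptions for illustration: a cheap analytic family stands in for the truth solver, and the exact best-approximation error replaces the a posteriori estimate $\eta$.

```python
import numpy as np

# Greedy snapshot selection: at each step, add the snapshot at the parameter
# where the current reduced space is worst, measured over a training set Xi.
x = np.linspace(0.0, 1.0, 300)
Xi = np.linspace(1.0, 20.0, 200)                 # training set Xi subset of D
truth = lambda mu: np.exp(1j * mu * x)           # stands in for u_h(mu)

S = [Xi[0]]                                      # S_1 = {mu_1}
W = np.zeros((len(x), 0), dtype=complex)
history = []
for _ in range(12):
    W, _ = np.linalg.qr(np.column_stack([W, truth(S[-1])]))
    # "error estimator": here the exact best-approximation error in W_N
    eta = np.array([np.linalg.norm(truth(mu) - W @ (W.conj().T @ truth(mu)))
                    for mu in Xi])
    history.append(eta.max())
    S.append(Xi[int(eta.argmax())])              # mu_{N+1} = argmax eta
```

Since the picked parameters have zero projection error, the argmax naturally avoids repeats, and the worst-case error over the training set decreases as the basis grows.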
32 Efficiency: affine assumption. Assumption: $a(w, v; \mu) = \sum_{m=1}^M \Theta^m(\mu)\, a^m(w, v)$ and $f(v; \mu) = \sum_{m=1}^M \Theta_f^m(\mu)\, f^m(v)$, where for $m = 1, \dots, M$: $\Theta^m, \Theta_f^m: D \to \mathbb{C}$ are $\mu$-dependent functions, and $a^m: V_h \times V_h \to \mathbb{C}$, $f^m: V_h \to \mathbb{C}$ are $\mu$-independent forms. Caution: this is not feasible in the framework of the EFIE! Indeed, $a(u_h, v_h; \mu) = ikZ \int_\Gamma \int_\Gamma \frac{e^{ik|x-y|}}{4\pi|x-y|} \Big( u_h(x) \cdot v_h(y) - \tfrac{1}{k^2}\, \mathrm{div}_{\Gamma,x} u_h(x)\, \mathrm{div}_{\Gamma,y} v_h(y) \Big)\, dx\, dy$ and $f(v_h; \mu) = -\int_\Gamma n \times (p \times n)\, e^{ikx \cdot \hat{k}(\theta,\phi)} \cdot v_h(x)\, dx$: the parameter $k$ enters the kernel in a non-affine way. Luckily this problem can be fixed (later in this talk); assume for now that the assumption holds approximately.
34 Efficiency: how to solve (2) in a fast way? Offline: given $W_N = \mathrm{span}\{\xi_i : i = 1, \dots, N\}$, precompute $(A^m)_{ij} = a^m(\xi_j, \xi_i)$, $1 \le i, j \le N$, and $(F^m)_i = f^m(\xi_i)$, $1 \le i \le N$. Remarks: this step depends on $\mathcal{N} = \dim(V_h)$; the size of $A^m$ resp. $F^m$ is $N^2$ resp. $N$. Online: for a given parameter value $\mu \in D$: 1) assemble $A = \sum_{m=1}^M \Theta^m(\mu) A^m$ and $F = \sum_{m=1}^M \Theta_f^m(\mu) F^m$ (cost $MN^2$ resp. $MN$); 2) solve $A\, u_N(\mu) = F$ (cost $N^3$ for an LU factorization). In the same vein we can compute the a posteriori estimate and the RCS/output functional. The computation time also depends on $M$!
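The offline/online split can be sketched on a toy reduced system. The three Θ-functions and random matrices below are hypothetical stand-ins for the precomputed parameter-independent pieces.

```python
import numpy as np

# Online stage under the affine assumption: assemble A(mu) and F(mu) as
# Theta-weighted sums of M precomputed pieces, then solve the small system.
rng = np.random.default_rng(1)
N, M = 8, 3                                      # reduced dimension, affine terms
A_terms = [rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
           for _ in range(M)]
F_terms = [rng.standard_normal(N) + 0j for _ in range(M)]
thetas = [lambda mu: 1.0 + 0j,                   # hypothetical Theta^m(mu)
          lambda mu: 1j * mu,
          lambda mu: mu ** 2 / 2.0]

def online_solve(mu):
    """Cost O(M N^2) assembly + O(N^3) solve: independent of dim(V_h)."""
    A = sum(th(mu) * Am for th, Am in zip(thetas, A_terms))
    F = sum(th(mu) * Fm for th, Fm in zip(thetas, F_terms))
    return np.linalg.solve(A, F)

u_N = online_solve(2.5)
```

This is where the speedup comes from: nothing in `online_solve` touches objects of size $\mathcal{N} = \dim(V_h)$.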
35 Efficiency: Empirical Interpolation Method (EIM) (allows one to realize the affine assumption approximately)
36 Efficiency: EIM. Let $f: \Omega \times D \to \mathbb{C}$ be such that $f(\cdot; \mu) \in C^0(\Omega)$ for all $\mu \in D$. The EIM is a procedure that provides $\{\mu_m\}_{m=1}^M$ such that $I_M(f)(x; \mu) = \sum_{m=1}^M \alpha_m(\mu)\, f(x; \mu_m)$ is a good approximation of $f(x; \mu)$ for all $(x, \mu) \in \Omega \times D$. It also uses a greedy algorithm to pick the parameters $\{\mu_m\}_{m=1}^M$. (Figure: the function $f(x; \mu) = x^\mu$ and its three-term approximation $\sum_{m=1}^3 \alpha_m(\mu)\, x^{\mu_m}$.) [Grepl et al. 2007], [Maday et al. 2007]
37 Efficiency: EIM. Let $f: \Omega \times D \to \mathbb{C}$ be such that $f(\cdot; \mu) \in C^0(\Omega)$ for all $\mu \in D$. The EIM is a procedure that provides $\{\mu_m\}_{m=1}^M$ such that $I_M(f)(x; \mu) = \sum_{m=1}^M \alpha_m(\mu)\, f(x; \mu_m)$ is a good approximation of $f(x; \mu)$ for all $(x, \mu) \in \Omega \times D$. It also uses a greedy algorithm to pick the parameters $\{\mu_m\}_{m=1}^M$. Examples: 1) non-singular part of the kernel function: $G_k^{ns}(r) = G^{ns}(r; k) = \frac{e^{ikr} - 1}{4\pi r}$, $r \in \mathbb{R}^+$, $k \in \mathbb{R}^+$; 2) incident plane wave: $E^i(x; \mu) = p\, e^{ik\hat{k}(\theta,\phi) \cdot x}$, $x \in \Gamma$, $\mu \in D$, with $\mu = (k, \theta, \phi)$.
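A sketch of the EIM greedy on example 1), the non-singular kernel $g(r; k) = (e^{ikr} - 1)/r$; the grids, the number of terms and the normalization are illustrative assumptions.

```python
import numpy as np

# EIM greedy: alternately pick the parameter k_m whose snapshot is worst
# approximated and the "magic point" x_m where the residual is largest;
# interpolation at the magic points then defines the coefficients alpha_m.
r = np.linspace(1e-3, 2.0, 400)                  # grid on (0, R_max]
ks = np.linspace(1.0, 25.0, 300)                 # training parameters
F = np.array([(np.exp(1j * k * r) - 1.0) / r for k in ks])  # snapshots

basis, pts = [], []
resid = F.copy()
for m in range(10):
    i, j = np.unravel_index(np.abs(resid).argmax(), resid.shape)
    basis.append(F[i])                           # snapshot at picked k_m
    pts.append(j)                                # picked magic point index
    B = np.array(basis)
    # interpolate every snapshot at the magic points, subtract from F
    coef = np.linalg.solve(B[:, pts].T, F[:, pts].T)
    resid = F - coef.T @ B

err = np.abs(resid).max()                        # sup error of the M-term EIM
```

Since the interpolant matches the snapshots exactly at the magic points, the residual vanishes there and each greedy step necessarily picks a fresh point.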
42 Efficiency: EIM implementation for the EFIE. 1) Split the kernel function into its singular and non-singular parts: $G_k(r) = \frac{1}{4\pi r} + G_k^{ns}(r)$. 2) Insert it into the sesquilinear form. 3) Replace the non-singular kernel function by its EIM interpolant $G_k^{ns}(r) \approx \sum_{m=1}^M \alpha_m(k)\, G_{k_m}^{ns}(r)$. This yields, up to the factor $ikZ$ (blue: parameter independent, red: parameter dependent): $a(w, v; k) \approx \int_\Gamma \int_\Gamma \frac{w(x) \cdot v(y)}{4\pi|x-y|}\, dx\, dy - \frac{1}{k^2} \int_\Gamma \int_\Gamma \frac{\mathrm{div}_\Gamma w(x)\, \mathrm{div}_\Gamma v(y)}{4\pi|x-y|}\, dx\, dy + \sum_{m=1}^M \alpha_m(k) \int_\Gamma \int_\Gamma G_{k_m}^{ns}(|x-y|)\, w(x) \cdot v(y)\, dx\, dy - \sum_{m=1}^M \frac{\alpha_m(k)}{k^2} \int_\Gamma \int_\Gamma G_{k_m}^{ns}(|x-y|)\, \mathrm{div}_\Gamma w(x)\, \mathrm{div}_\Gamma v(y)\, dx\, dy$. In the same manner for the right-hand side: $f(v; \mu) \approx -\sum_{m=1}^{M_f} \alpha_m^f(\mu) \int_\Gamma \gamma_t E^i(y; \mu_m) \cdot v(y)\, dy$.
43 Numerical results for the EIM. Test function $f(x; k) = \frac{e^{ikx} - 1}{x}$, $x \in (0, R_{max}]$, $k \in [1, k_{max}]$, for $k_{max} \in \{25, 50, 100\}$. (Figures: picked parameters $k_m$ in the parameter domain; interpolation error depending on the length $M$ of the expansion.)
44 Numerical results for the EIM. Plane wave $f(x; \mu) = e^{ik\hat{k}(\theta,\phi) \cdot x}$, $x \in \Gamma$, $\mu \in D$, with $\mu = (k, \theta)$, $\phi$ fixed, $D = [1, k_{max}] \times [0, \pi]$. (Figures: interpolation error depending on the length $M$ of the expansion for $k_{max} \in \{6.25, 12.5, 25\}$; picked parameters in the parameter domain for $k_{max} = 25$; the surface $\Gamma$.)
54 Efficiency: elementwise EIM / hp-interpolant. Problem: for large parameter domains the expansion becomes so large that the online computing time of the RB solution can reach the order of a direct computation. As a solution, the parameter domain can be split adaptively into subelements, on each of which the function is approximated by a different magic-point expansion. (Schematic illustration for a 2d parameter domain $D$: elements are split at their gravity centres, with refinement until a given tolerance is reached on each subdomain.) Only the parameter domain is refined; the generalization to any dimension of the parameter space is possible.
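The splitting loop can be sketched in 1d. A short Chebyshev interpolation in $k$ stands in as a proxy for the magic-point expansion on each element (an assumption; the talk uses the EIM itself), and the grids and tolerance are illustrative.

```python
import numpy as np

# hp-style refinement of the parameter domain: bisect an element whenever a
# fixed-length expansion cannot reach the tolerance on it.
r = np.linspace(0.05, 2.0, 100)
g = lambda k: (np.exp(1j * k * r) - 1.0) / r     # kernel traces, k-parametrized

def element_error(a, b, M=5):
    """Sup error of M-node Chebyshev interpolation in k on [a, b] (EIM proxy)."""
    nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * (np.arange(M) + 0.5) / M)
    G = np.array([g(n) for n in nodes])
    err = 0.0
    for k in np.linspace(a, b, 40):
        w = np.array([np.prod([(k - n) / (m - n) for n in nodes if n != m])
                      for m in nodes])           # Lagrange weights at k
        err = max(err, np.abs(g(k) - w @ G).max())
    return err

def split(a, b, tol, elements):
    if element_error(a, b) <= tol:
        elements.append((a, b))
    else:
        mid = 0.5 * (a + b)
        split(a, mid, tol, elements)
        split(mid, b, tol, elements)

elements = []
split(1.0, 25.0, 1e-6, elements)                 # partition of D = [1, 25]
```

As on the slide, keeping the per-element expansion short (here 5 nodes) forces many elements; the refinement trades expansion length against the number of subdomains.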
55 Numerical results, elementwise EIM. $f(x; \mu) = e^{ik\hat{k}(\theta,\phi) \cdot x}$, $x \in \Gamma$, $\mu = (k, \theta)$, $\phi$ fixed, $D = [1, 25] \times [0, \pi]$. (Figures: the surface $\Gamma$; relative interpolation error versus $M$ for 1, 4, 16 and 64 elements; number of elements versus $M$ for tolerances 1e-8, 1e-10 and 1e-12.)
56 Numerical results, elementwise EIM. (Figures: relative interpolation error versus $M$ for 1, 4, 16 and 64 elements; number of elements versus $M$ for tolerances 1e-8, 1e-10 and 1e-12.) Conclusion: a reduction of $M$ implies an algebraic increase of the number of elements (and degrees of freedom) needed; in this case $\#\mathrm{elements} \approx C\, M^{-3.7}$. But it reduces the online computing time, at the cost of a longer offline procedure and more memory: a shift of workload from the online part to the offline part.
57 Numerical results, elementwise EIM. (Figures: picked parameter values and EIM elements in the $(k, \theta)$ domain for tol = 1e-12: $M = 218$ without refinement; $M = 151$ after 1 refinement; $M = 106$ after 2 refinements; $M = 74$ after 3 refinements.)
58 Numerical results for the reduced basis method
59 Numerical results: test 1. Two parameters, $\mu = (k, \theta)$ with $D = [1, 25] \times [0, \pi]$, $\phi = 0$ fixed. (Figures: the surface $\Gamma$; the picked parameters in the $(k, \theta)$ domain; convergence of the RB approximation error compared with the singular values.) The relative error is measured as $\max_\mu \|u_h(\mu) - u_N(\mu)\|_{L^2(\Gamma)} \,/\, \max_\mu \|u_h(\mu)\|_{L^2(\Gamma)}$.
60 Numerical results: test 1. Two parameters, $\mu = (k, \theta)$ with $D = [1, 25] \times [0, \pi]$, $\phi = 0$ fixed. (Figures: online computation time versus $N$, split into RCS signal, residual error and LU factorization, for different numbers of elements used in the elementwise EIM of the kernel function, the right-hand side and the RCS.)
61 Numerical results: test 1. Two parameters, $\mu = (k, \theta)$ with $D = [1, 25] \times [0, \pi]$, $\phi = 0$ fixed. (Figure: solution for $\theta = \pi/4$, $k = 25$.)
62 More complex scatterer and parallelization. The discretization has a large number of complex double unknowns; the BEM matrix has 160 Mio complex double entries. 160 processors with distributed memory were used for the computations. Solving the linear system: cyclic distribution by ScaLAPACK (parallel LU factorization); matrix-matrix and matrix-vector multiplications are computed blockwise using BLACS/BLAS.
63 Numerical results: test 2. One parameter, $\mu = k$ with $D = [1, 25.5]$, $(\theta, \phi) = (\pi/6, 0)$ fixed. (Figures: the surface $\Gamma$; convergence of the relative error versus $N$; repartition of the 23 first picked parameters along $k$.)
64 Numerical results: test 3. Two parameters, $\mu = (k, \theta)$ with $D = [1, 13] \times [0, \pi]$, $\phi = 0$ fixed. (Figures: the surface $\Gamma$; the picked parameters in the $(k, \theta)$ domain; convergence of the relative error versus $N$.)
65 Current research - Multi object scattering. Strategy: 1) train a reduced basis for each type of geometry, accurate for all incident angles, polarizations and wave numbers; 2) approximate the interaction matrices between each pair of bodies using the EIM (this parametrizes the location of each object); 3) for a fixed wave number, incident angle and polarization, solve the problem in the reduced basis space using a Jacobi-type iteration scheme. Remark: a reduced basis needs to be assembled only once per geometry type, since the reduced basis is invariant under translation. Example: for a lattice of 100 x 100 identical objects we need to assemble one single reduced basis!
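Step 3) of the multi-object strategy can be sketched with a 2 x 2 block system. The surrogate blocks below are made diagonally dominant so that the iteration converges, an assumption mimicking weak inter-object coupling.

```python
import numpy as np

# Jacobi-type iteration for two coupled bodies: each step solves the
# (small, reduced) self-interaction system of body i with the coupling to
# the other body's previous iterate moved to the right-hand side.
rng = np.random.default_rng(2)
n = 6                                            # reduced dimension per body
A = [[rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
      + (4 * n * np.eye(n) if i == j else 0)     # dominance on the diagonal
      for j in range(2)] for i in range(2)]
f = [rng.standard_normal(n) + 0j for _ in range(2)]

u = [np.zeros(n, dtype=complex) for _ in range(2)]
for it in range(100):
    # both updates use the previous iterate (Jacobi, not Gauss-Seidel)
    u = [np.linalg.solve(A[i][i], f[i] - A[i][1 - i] @ u[1 - i])
         for i in range(2)]

residual = max(np.linalg.norm(A[i][0] @ u[0] + A[i][1] @ u[1] - f[i])
               for i in range(2))
```

Only the small diagonal blocks are ever factorized, which is what makes the scheme attractive once each body carries a reduced basis instead of a full BEM system.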
67 Current research - Multi object scattering. Benchmark available in the literature: IEEE Transactions on Antennas and Propagation, May 1971. (Figures, reproduced from the reference: endfire incidence for fixed $k$; from broadside to endfire to broadside for fixed $k$; modal approach (-) versus experiment, plotted against $kd$ for two values of $ka$.)
68 Accuracy - 2 spheres. Two spheres at variable distance. (Figure: relative error of the currents versus the minimal distance of the objects, in fractions of the wavelength, for $N = 160, 180, 200$.)
69 Lattice 6 x 6 spheres. The wave number is fixed to $k = 3$ for the simulation. The parameter is the incidence angle $\phi \in [0, 2\pi]$, with $\theta$ fixed at 90 degrees. The RCS is also measured for $\phi \in [0, 2\pi]$ with $\theta$ fixed at 90 degrees.
72 Lattice 6 x 6 spheres. (Figure: RCS of the RB approximation versus the truth approximation; input: angle of the incident plane wave; measurement: angle of the RCS.) The Jacobi iterations need between 60 and 90 iterations, depending on the angle. The total "online" simulation time for that plot was around 3-5 minutes, which involved 360 computations/different angles.
73 Conclusions. For the first time, the reduced basis method is applied to integral equations. EIM interpolation is an essential tool for parametrized integral equations, due to the parameter dependence of the kernel function. Efficiency: for large parameter domains, EIM elements are used to speed up the online routine. Current work: promising initial results for multi-object scattering using RB. Future work: hp-RBM for large parameter domains (and dimensions); CFIE for wave-number parametrization for scatterers with volume.
74 Thank you for your attention
SOLVING LINEAR SYSTEMS Linear systems Ax = b occur widely in applied mathematics They occur as direct formulations of real world problems; but more often, they occur as a part of the numerical analysis
More informationStudy and improvement of the condition number of the Electric Field Integral Equation
Monografías del Seminario Matemático García de Galdeano. 27: 73 80, (2003). Study and improvement of the condition number of the Electric Field Integral Equation X. Antoine 1, A. Bendali 2 and M. Darbas
More informationModern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh
Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Peter Richtárik Week 3 Randomized Coordinate Descent With Arbitrary Sampling January 27, 2016 1 / 30 The Problem
More informationIntroduction to Algebraic Geometry. Bézout s Theorem and Inflection Points
Introduction to Algebraic Geometry Bézout s Theorem and Inflection Points 1. The resultant. Let K be a field. Then the polynomial ring K[x] is a unique factorisation domain (UFD). Another example of a
More informationCONCEPT-II. Overview of demo examples
CONCEPT-II CONCEPT-II is a frequency domain method of moment (MoM) code, under development at the Institute of Electromagnetic Theory at the Technische Universität Hamburg-Harburg (www.tet.tuhh.de). Overview
More informationOrthogonal Projections
Orthogonal Projections and Reflections (with exercises) by D. Klain Version.. Corrections and comments are welcome! Orthogonal Projections Let X,..., X k be a family of linearly independent (column) vectors
More informationIntroduction. Background
A Multiplicative Calderón Preconditioner for the Electric Field Integral Equation 1 Francesco P. Andriulli*, 2 Kristof Cools, 1 Hakan Bağcı, 2 Femke Olyslager, 3 Annalisa Buffa, 4 Snorre Christiansen,
More informationLogistic Regression. Jia Li. Department of Statistics The Pennsylvania State University. Logistic Regression
Logistic Regression Department of Statistics The Pennsylvania State University Email: jiali@stat.psu.edu Logistic Regression Preserve linear classification boundaries. By the Bayes rule: Ĝ(x) = arg max
More informationx1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.
Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability
More informationAP Physics - Vector Algrebra Tutorial
AP Physics - Vector Algrebra Tutorial Thomas Jefferson High School for Science and Technology AP Physics Team Summer 2013 1 CONTENTS CONTENTS Contents 1 Scalars and Vectors 3 2 Rectangular and Polar Form
More informationMATH1231 Algebra, 2015 Chapter 7: Linear maps
MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter
More informationTHREE DIMENSIONAL GEOMETRY
Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,
More informationFINITE ELEMENT : MATRIX FORMULATION. Georges Cailletaud Ecole des Mines de Paris, Centre des Matériaux UMR CNRS 7633
FINITE ELEMENT : MATRIX FORMULATION Georges Cailletaud Ecole des Mines de Paris, Centre des Matériaux UMR CNRS 76 FINITE ELEMENT : MATRIX FORMULATION Discrete vs continuous Element type Polynomial approximation
More informationDynamics at the Horsetooth, Volume 2A, Focussed Issue: Asymptotics and Perturbations
Dynamics at the Horsetooth, Volume A, Focussed Issue: Asymptotics and Perturbations Asymptotic Expansion of Bessel Functions; Applications to Electromagnetics Department of Electrical Engineering Colorado
More informationThe waveguide adapter consists of a rectangular part smoothly transcending into an elliptical part as seen in Figure 1.
Waveguide Adapter Introduction This is a model of an adapter for microwave propagation in the transition between a rectangular and an elliptical waveguide. Such waveguide adapters are designed to keep
More informationDRAFT. Further mathematics. GCE AS and A level subject content
Further mathematics GCE AS and A level subject content July 2014 s Introduction Purpose Aims and objectives Subject content Structure Background knowledge Overarching themes Use of technology Detailed
More informationSection 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
More informationCo-simulation of Microwave Networks. Sanghoon Shin, Ph.D. RS Microwave
Co-simulation of Microwave Networks Sanghoon Shin, Ph.D. RS Microwave Outline Brief review of EM solvers 2D and 3D EM simulators Technical Tips for EM solvers Co-simulated Examples of RF filters and Diplexer
More informationLinköping University Electronic Press
Linköping University Electronic Press Report Well-posed boundary conditions for the shallow water equations Sarmad Ghader and Jan Nordström Series: LiTH-MAT-R, 0348-960, No. 4 Available at: Linköping University
More information12.5 Equations of Lines and Planes
Instructor: Longfei Li Math 43 Lecture Notes.5 Equations of Lines and Planes What do we need to determine a line? D: a point on the line: P 0 (x 0, y 0 ) direction (slope): k 3D: a point on the line: P
More informationSolution of Linear Systems
Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start
More informationClassification of Fingerprints. Sarat C. Dass Department of Statistics & Probability
Classification of Fingerprints Sarat C. Dass Department of Statistics & Probability Fingerprint Classification Fingerprint classification is a coarse level partitioning of a fingerprint database into smaller
More informationRecall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.
ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?
More informationInner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
More informationThe Heat Equation. Lectures INF2320 p. 1/88
The Heat Equation Lectures INF232 p. 1/88 Lectures INF232 p. 2/88 The Heat Equation We study the heat equation: u t = u xx for x (,1), t >, (1) u(,t) = u(1,t) = for t >, (2) u(x,) = f(x) for x (,1), (3)
More informationUseful Mathematical Symbols
32 Useful Mathematical Symbols Symbol What it is How it is read How it is used Sample expression + * ddition sign OR Multiplication sign ND plus or times and x Multiplication sign times Sum of a few disjunction
More informationCalculation of Eigenfields for the European XFEL Cavities
Calculation of Eigenfields for the European XFEL Cavities Wolfgang Ackermann, Erion Gjonaj, Wolfgang F. O. Müller, Thomas Weiland Institut Theorie Elektromagnetischer Felder, TU Darmstadt Status Meeting
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty
More informationVector Algebra CHAPTER 13. Ü13.1. Basic Concepts
CHAPTER 13 ector Algebra Ü13.1. Basic Concepts A vector in the plane or in space is an arrow: it is determined by its length, denoted and its direction. Two arrows represent the same vector if they have
More informationINTEGRAL METHODS IN LOW-FREQUENCY ELECTROMAGNETICS
INTEGRAL METHODS IN LOW-FREQUENCY ELECTROMAGNETICS I. Dolezel Czech Technical University, Praha, Czech Republic P. Karban University of West Bohemia, Plzeft, Czech Republic P. Solin University of Nevada,
More informationNotes on Symmetric Matrices
CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.
More informationFast Iterative Solvers for Integral Equation Based Techniques in Electromagnetics
Fast Iterative Solvers for Integral Equation Based Techniques in Electromagnetics Mario Echeverri, PhD. Student (2 nd year, presently doing a research period abroad) ID:30360 Tutor: Prof. Francesca Vipiana,
More informationNotes on the representational possibilities of projective quadrics in four dimensions
bacso 2006/6/22 18:13 page 167 #1 4/1 (2006), 167 177 tmcs@inf.unideb.hu http://tmcs.math.klte.hu Notes on the representational possibilities of projective quadrics in four dimensions Sándor Bácsó and
More informationSolutions to Math 51 First Exam January 29, 2015
Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not
More informationCITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION
No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August
More informationSection 13.5 Equations of Lines and Planes
Section 13.5 Equations of Lines and Planes Generalizing Linear Equations One of the main aspects of single variable calculus was approximating graphs of functions by lines - specifically, tangent lines.
More informationLinear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices
MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two
More informationPATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION
PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION Introduction In the previous chapter, we explored a class of regression models having particularly simple analytical
More informationVector Math Computer Graphics Scott D. Anderson
Vector Math Computer Graphics Scott D. Anderson 1 Dot Product The notation v w means the dot product or scalar product or inner product of two vectors, v and w. In abstract mathematics, we can talk about
More informationProblem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday.
Math 312, Fall 2012 Jerry L. Kazdan Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday. In addition to the problems below, you should also know how to solve
More informationDot product and vector projections (Sect. 12.3) There are two main ways to introduce the dot product
Dot product and vector projections (Sect. 12.3) Two definitions for the dot product. Geometric definition of dot product. Orthogonal vectors. Dot product and orthogonal projections. Properties of the dot
More informationNonlinear Iterative Partial Least Squares Method
Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for
More information1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two non-zero vectors u and v,
1.3. DOT PRODUCT 19 1.3 Dot Product 1.3.1 Definitions and Properties The dot product is the first way to multiply two vectors. The definition we will give below may appear arbitrary. But it is not. It
More informationOrthogonal Projections and Orthonormal Bases
CS 3, HANDOUT -A, 3 November 04 (adjusted on 7 November 04) Orthogonal Projections and Orthonormal Bases (continuation of Handout 07 of 6 September 04) Definition (Orthogonality, length, unit vectors).
More informationWAVES AND FIELDS IN INHOMOGENEOUS MEDIA
WAVES AND FIELDS IN INHOMOGENEOUS MEDIA WENG CHO CHEW UNIVERSITY OF ILLINOIS URBANA-CHAMPAIGN IEEE PRESS Series on Electromagnetic Waves Donald G. Dudley, Series Editor IEEE Antennas and Propagation Society,
More informationIntroduction to General and Generalized Linear Models
Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby
More informationIntroduction to Online Learning Theory
Introduction to Online Learning Theory Wojciech Kot lowski Institute of Computing Science, Poznań University of Technology IDSS, 04.06.2013 1 / 53 Outline 1 Example: Online (Stochastic) Gradient Descent
More informationElasticity Theory Basics
G22.3033-002: Topics in Computer Graphics: Lecture #7 Geometric Modeling New York University Elasticity Theory Basics Lecture #7: 20 October 2003 Lecturer: Denis Zorin Scribe: Adrian Secord, Yotam Gingold
More informationSection 1.4. Lines, Planes, and Hyperplanes. The Calculus of Functions of Several Variables
The Calculus of Functions of Several Variables Section 1.4 Lines, Planes, Hyperplanes In this section we will add to our basic geometric understing of R n by studying lines planes. If we do this carefully,
More informationProjective Geometry: A Short Introduction. Lecture Notes Edmond Boyer
Projective Geometry: A Short Introduction Lecture Notes Edmond Boyer Contents 1 Introduction 2 11 Objective 2 12 Historical Background 3 13 Bibliography 4 2 Projective Spaces 5 21 Definitions 5 22 Properties
More informationIterative Solvers for Linear Systems
9th SimLab Course on Parallel Numerical Simulation, 4.10 8.10.2010 Iterative Solvers for Linear Systems Bernhard Gatzhammer Chair of Scientific Computing in Computer Science Technische Universität München
More informationFEM Software Automation, with a case study on the Stokes Equations
FEM Automation, with a case study on the Stokes Equations FEM Andy R Terrel Advisors: L R Scott and R C Kirby Numerical from Department of Computer Science University of Chicago March 1, 2006 Masters Presentation
More informationTwo Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering
Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering Department of Industrial Engineering and Management Sciences Northwestern University September 15th, 2014
More informationEfficient numerical simulation of time-harmonic wave equations
Efficient numerical simulation of time-harmonic wave equations Prof. Tuomo Rossi Dr. Dirk Pauly Ph.Lic. Sami Kähkönen Ph.Lic. Sanna Mönkölä M.Sc. Tuomas Airaksinen M.Sc. Anssi Pennanen M.Sc. Jukka Räbinä
More informationClass Meeting # 1: Introduction to PDEs
MATH 18.152 COURSE NOTES - CLASS MEETING # 1 18.152 Introduction to PDEs, Fall 2011 Professor: Jared Speck Class Meeting # 1: Introduction to PDEs 1. What is a PDE? We will be studying functions u = u(x
More informationLecture 14: Section 3.3
Lecture 14: Section 3.3 Shuanglin Shao October 23, 2013 Definition. Two nonzero vectors u and v in R n are said to be orthogonal (or perpendicular) if u v = 0. We will also agree that the zero vector in
More informationMAT 1341: REVIEW II SANGHOON BAEK
MAT 1341: REVIEW II SANGHOON BAEK 1. Projections and Cross Product 1.1. Projections. Definition 1.1. Given a vector u, the rectangular (or perpendicular or orthogonal) components are two vectors u 1 and
More informationStirling s formula, n-spheres and the Gamma Function
Stirling s formula, n-spheres and the Gamma Function We start by noticing that and hence x n e x dx lim a 1 ( 1 n n a n n! e ax dx lim a 1 ( 1 n n a n a 1 x n e x dx (1 Let us make a remark in passing.
More informationSupport Vector Machines with Clustering for Training with Very Large Datasets
Support Vector Machines with Clustering for Training with Very Large Datasets Theodoros Evgeniou Technology Management INSEAD Bd de Constance, Fontainebleau 77300, France theodoros.evgeniou@insead.fr Massimiliano
More informationDecember 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation
More informationNOTES ON LINEAR TRANSFORMATIONS
NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all
More information6. Define log(z) so that π < I log(z) π. Discuss the identities e log(z) = z and log(e w ) = w.
hapter omplex integration. omplex number quiz. Simplify 3+4i. 2. Simplify 3+4i. 3. Find the cube roots of. 4. Here are some identities for complex conjugate. Which ones need correction? z + w = z + w,
More informationFast Multipole Method for particle interactions: an open source parallel library component
Fast Multipole Method for particle interactions: an open source parallel library component F. A. Cruz 1,M.G.Knepley 2,andL.A.Barba 1 1 Department of Mathematics, University of Bristol, University Walk,
More informationBest practices for efficient HPC performance with large models
Best practices for efficient HPC performance with large models Dr. Hößl Bernhard, CADFEM (Austria) GmbH PRACE Autumn School 2013 - Industry Oriented HPC Simulations, September 21-27, University of Ljubljana,
More informationLeast-Squares Intersection of Lines
Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a
More informationThe Geometry of the Dot and Cross Products
Journal of Online Mathematics and Its Applications Volume 6. June 2006. Article ID 1156 The Geometry of the Dot and Cross Products Tevian Dray Corinne A. Manogue 1 Introduction Most students first learn
More information9 Multiplication of Vectors: The Scalar or Dot Product
Arkansas Tech University MATH 934: Calculus III Dr. Marcel B Finan 9 Multiplication of Vectors: The Scalar or Dot Product Up to this point we have defined what vectors are and discussed basic notation
More informationName Class. Date Section. Test Form A Chapter 11. Chapter 11 Test Bank 155
Chapter Test Bank 55 Test Form A Chapter Name Class Date Section. Find a unit vector in the direction of v if v is the vector from P,, 3 to Q,, 0. (a) 3i 3j 3k (b) i j k 3 i 3 j 3 k 3 i 3 j 3 k. Calculate
More informationGeometric evolution equations with triple junctions. junctions in higher dimensions
Geometric evolution equations with triple junctions in higher dimensions University of Regensburg joint work with and Daniel Depner (University of Regensburg) Yoshihito Kohsaka (Muroran IT) February 2014
More informationEffects of Cell Phone Radiation on the Head. BEE 4530 Computer-Aided Engineering: Applications to Biomedical Processes
Effects of Cell Phone Radiation on the Head BEE 4530 Computer-Aided Engineering: Applications to Biomedical Processes Group 3 Angela Cai Youjin Cho Mytien Nguyen Praveen Polamraju Table of Contents I.
More informationPractical Grid Generation Tools with Applications to Ship Hydrodynamics
Practical Grid Generation Tools with Applications to Ship Hydrodynamics Eça L. Instituto Superior Técnico eca@marine.ist.utl.pt Hoekstra M. Maritime Research Institute Netherlands M.Hoekstra@marin.nl Windt
More informationTwo-Dimensional Conduction: Shape Factors and Dimensionless Conduction Heat Rates
Two-Dimensional Conduction: Shape Factors and Dimensionless Conduction Heat Rates Chapter 4 Sections 4.1 and 4.3 make use of commercial FEA program to look at this. D Conduction- General Considerations
More informationThe Geometry of the Dot and Cross Products
The Geometry of the Dot and Cross Products Tevian Dray Department of Mathematics Oregon State University Corvallis, OR 97331 tevian@math.oregonstate.edu Corinne A. Manogue Department of Physics Oregon
More informationCS3220 Lecture Notes: QR factorization and orthogonal transformations
CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss
More informationCombustion Engine Optimization
Combustion Engine Optimization Stefan Jakobsson, Muhammad Saif Ul Hasnain,, Robert Rundqvist, Fredrik Edelvik, Michael Patriksson Mattias Ljungqvist Volvo Cars, Johan Wallesten and Dimitri Lortet Volvo
More informationSoftware package for Spherical Near-Field Far-Field Transformations with Full Probe Correction
SNIFT Software package for Spherical Near-Field Far-Field Transformations with Full Probe Correction Historical development The theoretical background for spherical near-field antenna measurements and
More informationCross product and determinants (Sect. 12.4) Two main ways to introduce the cross product
Cross product and determinants (Sect. 12.4) Two main ways to introduce the cross product Geometrical definition Properties Expression in components. Definition in components Properties Geometrical expression.
More informationState of Stress at Point
State of Stress at Point Einstein Notation The basic idea of Einstein notation is that a covector and a vector can form a scalar: This is typically written as an explicit sum: According to this convention,
More information