Reduced Basis Method for the parametrized Electric Field Integral Equation (EFIE)


Reduced Basis Method for the parametrized Electric Field Integral Equation (EFIE)
Matrix Computations and Scientific Computing Seminar, University of California, Berkeley, 15.09.2010
Benjamin Stamm, Department of Mathematics and LBL, University of California, Berkeley

Project description

In collaboration with:
Prof. J. Hesthaven and Prof. Y. Maday (Brown U. and Paris VI)
Prof. R. Nochetto (U. of Maryland)
Dr. M'Barek Fares (CERFACS, Toulouse)
J. Eftang (MIT), Prof. A. Patera (MIT), Prof. M. Grepl (RWTH Aachen)
Prof. M. Ganesh (Colorado School of Mines)

From mathematics to applied computations, with many practical applications. Sponsored by SNF grant PBELP2-123078, Brown University and UC Berkeley.

Outline

- Introduction to parametrized scattering problems
- The Reduced Basis Method
- The Empirical Interpolation Method
- Numerical results
- First results on multi-object scattering
- Conclusions

Introduction to parametrized Electromagnetic scattering

Parametrized Electromagnetic Scattering (time-harmonic ansatz)

Incident plane wave E^i(x; µ) = p e^{ik x·k̂(θ,φ)}, impinging on a perfect conductor Ω_i ⊂ R³ with boundary Γ = ∂Ω_i, outward normal n and exterior domain Ω = R³ \ Ω_i; E^s(x; µ) denotes the scattered field.

Here µ = (k, θ, φ, p) ∈ D ⊂ R⁷ is a vector of parameters:
1) k: wave number
2) k̂(θ, φ): wave direction in spherical coordinates
3) p: polarization (complex, lying in the plane perpendicular to k̂(θ, φ))

Governing equations (not parametrized for the sake of simplicity)

Assume that Ω is a homogeneous medium with magnetic permeability µ and electric permittivity ε. Then the electric field E = E^i + E^s ∈ H(curl, Ω) satisfies

curl curl E − k² E = 0 in Ω,   (Maxwell)
E × n = 0 on Γ,   (boundary condition)
| curl E^s(x) × x/|x| − ik E^s(x) | = O(1/|x|) as |x| → ∞.   (Silver-Müller radiation condition)

The boundary condition is equivalent to γ_t E = 0, where γ_t denotes the tangential trace operator on the surface Γ, γ_t E = n × (E × n).

Integral representation

Stratton-Chu representation formula:

E(x) = E^i(x) + ikZ (T_k u)(x),   x ∈ Ω,

where T_k is the single layer potential, u the electric current on the surface, Z = √(µ/ε) the impedance and k = ω√(µε) the wave number.

Applying the tangential trace operator and invoking the boundary condition yields the strong form of the Electric Field Integral Equation (EFIE): find u ∈ V s.t.

ikZ γ_t(T_k u)(x) = −γ_t E^i(x),   x ∈ Γ,

for some appropriate complex function space V on Γ.

Model reduction: from a 3d problem to a 2d problem on the surface Γ.

Variational formulation of the EFIE (also called the Rumsey principle)

Multiplying by a test function v ∈ V and taking the scalar product yields: find u ∈ V s.t.

⟨ikZ γ_t(T_k u), v⟩_Γ = −⟨γ_t E^i, v⟩_Γ,   ∀ v ∈ V.

After integration by parts and introducing the parameter dependence we get: for any fixed µ ∈ D, find u(µ) ∈ V s.t. a(u(µ), v; µ) = f(v; µ) for all v ∈ V, with

a(u, v; µ) = ikZ ∫_Γ ∫_Γ G_k(x, y) [ u(x)·v(y) − (1/k²) div_{Γ,x} u(x) div_{Γ,y} v(y) ] dx dy,
f(v; µ) = −∫_Γ γ_t E^i(x; µ) · v(x) dx.

Remarks:
1) The sesquilinear form a(·, ·; µ) is symmetric but not coercive [Colton, Kress 1992], [Nédélec 2001].
2) G_k(x, y) = e^{ik|x−y|} / (4π|x−y|) is the fundamental solution of the Helmholtz operator Δ + k² and depends on the parameter k.

Parametrized EFIE and its discretization

Galerkin approach: replace the continuous space V by a finite dimensional subspace V_h: for any fixed parameter µ ∈ D, find u_h(µ) ∈ V_h such that

a(u_h(µ), v_h; µ) = f(v_h; µ),   ∀ v_h ∈ V_h.   (1)

For V_h we use the lowest order (complex) Raviart-Thomas space RT_0, also called the Rao-Wilton-Glisson (RWG) basis in the electromagnetics community. This is a Boundary Element Method (BEM). In practice the code CESC is used (CESC: CERFACS Electromagnetic Solver Code).

[Bendali 1984], [Schwab, Hiptmair 2002], [Buffa et al. 2002, 2003], [Christiansen 2004]

Example of parametrized solution

Incident plane wave parametrized by the wave number: E^i(x; k) = p e^{ik x·k̂(π/4, 0)}.

Output functional: Radar Cross Section (RCS)

Describes the pattern/energy of the electric field at infinity; it is a functional of the current on the body:

A(u, d̂) = (ikZ / 4π) ∫_Γ (u(x) × d̂) × d̂ e^{−ik x·d̂} dx,

RCS(u, d̂) = 10 log₁₀ ( |A(u, d̂)|² / |A(u, d̂₀)|² ),

where u is the current on the surface, d̂ a given directional unit vector and d̂₀ a reference unit direction.

Output functional: Radar Cross Section (RCS)

Incident plane wave parametrized by E^i(x; k) = p e^{ik x·k̂(π/4, 0)}. The directional unit vector is given by d̂ = d̂(θ, φ) with θ ∈ [0, π], φ = 0.
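As a plain-number illustration of the dB normalization above (a minimal sketch; `A_val` and `A_ref` stand for the far-field amplitudes A(u, d̂) and A(u, d̂₀)):

```python
import numpy as np

def rcs_db(A_val, A_ref):
    """RCS in decibels relative to a reference direction:
    RCS(u, d) = 10 log10(|A(u, d)|^2 / |A(u, d0)|^2)."""
    return 10.0 * np.log10(np.abs(A_val) ** 2 / np.abs(A_ref) ** 2)

# An amplitude ratio of 10 corresponds to +20 dB.
print(rcs_db(10.0 + 0.0j, 1.0 + 0.0j))  # 20.0
```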

Reduced basis method

Reduced basis method: Overview

Assume that we want to compute the scattered field for many different values of the parameters. Applying the BEM many times is too expensive, and unnecessary, since the parametrized solutions often lie on a low-dimensional manifold. On the discrete level, we assume:

Assumption (existence of an ideal reduced basis): the solution set

M_h := {u_h(µ) : µ ∈ D}

is of low dimensionality, i.e. M_h ≈ span{ζ_i : i = 1, ..., N} up to a given tolerance Tol, for some properly chosen {ζ_i}_{i=1}^N and moderate N ≪ 𝒩 = dim(V_h). More precisely, we assume that Tol decreases exponentially as a function of N.

The reduced basis method is a tool to construct an approximation {ξ_i}_{i=1}^N of the ideal reduced basis.

Example: Existence of an ideal reduced basis

POD: Proper Orthogonal Decomposition. Parameters: (k, θ) ∈ [1, 25] × [0, π], φ fixed. For a fine discretization of [1, 25] × [0, π], compute the BEM solution for each parameter value, store all solutions in a matrix and compute its singular values.

[Plot: singular values of the snapshot matrix, decaying from 0.1 down to 1e-17 over roughly 3000 indices, for a given geometry.]

The rapid decay indicates linear dependence of the solutions: with 200 basis functions a precision of 1e-7 is reached!
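The POD experiment above boils down to one SVD of the snapshot matrix. A minimal sketch, with a synthetic snapshot matrix standing in for the BEM solutions u_h(µ) stored column by column:

```python
import numpy as np

def pod_singular_values(snapshots):
    """Singular values of the snapshot matrix whose columns are the
    (BEM) solutions u_h(mu) for a fine sampling of D.  Fast decay
    indicates a numerically low-dimensional solution manifold."""
    return np.linalg.svd(snapshots, compute_uv=False)

def basis_size(sigma, tol):
    """Smallest N with sigma_N / sigma_0 <= tol (assumes it exists)."""
    return int(np.argmax(sigma / sigma[0] <= tol))

# Synthetic snapshot matrix with prescribed singular values.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((200, 6)))
V, _ = np.linalg.qr(rng.standard_normal((50, 6)))
s_true = np.array([1.0, 1e-2, 1e-4, 1e-6, 1e-8, 1e-10])
snapshots = (U * s_true) @ V.T

sigma = pod_singular_values(snapshots)
print(basis_size(sigma, 1e-7))  # 4: four modes already reach 1e-7
```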

Reduced basis method - Interpolation between snapshots

In practice we use {ξ_i}_{i=1}^N as reduced basis, where
1) ξ_i = u_h(µ_i) are solutions of (1) with µ = µ_i ∈ D (snapshots),
2) S_N = {µ_i}_{i=1}^N is carefully chosen.

This requires only N BEM computations. The reduced basis approximation is the solution of: for µ ∈ D, find u_N(µ) ∈ W_N such that

a(u_N(µ), v_N; µ) = f(v_N; µ),   ∀ v_N ∈ W_N,   (2)

with W_N = span{ξ_i : i = 1, ..., N}: a parameter dependent Ritz-type projection onto the reduced basis.

Questions:
1) Accuracy: how to choose S_N?
2) Efficiency: how to solve (2) in a fast way?

See [Rozza et al. 2008] for a review.

Reduced basis method - Overall strategy

Step 1: Construct an approximation W_N ⊂ V_h (reduced basis) to the solution space, W_N ≈ span{M_h} with M_h = {u_h(µ) : µ ∈ D}.

Step 2: Project the exact solution u(µ) onto the reduced basis using a parameter dependent Ritz projection P_N(µ): V → W_N. In other words: find u_N(µ) ∈ W_N such that

a(u_N(µ), v_N; µ) = f(v_N; µ),   ∀ v_N ∈ W_N.

Accuracy: Choice of reduced basis (greedy algorithm)

Let Ξ ⊂ D be a finite point set of D.

Offline initialization: choose an initial parameter value µ_1 ∈ Ξ, set S_1 = {µ_1} and W_0 = ∅.

Offline loop: for N = 1, ..., N_max:
1) Compute the truth solution u_h(µ_N), solution of (1), with µ = µ_N
2) W_N = W_{N−1} ∪ {u_h(µ_N)}
3) For all µ ∈ Ξ:
   i) compute u_N(µ) ∈ W_N, solution of (2)
   ii) compute the a posteriori error estimate η(µ) ≈ ||u_h(µ) − u_N(µ)||
4) Choose µ_{N+1} = arg max_{µ ∈ Ξ} η(µ)
5) S_{N+1} = S_N ∪ {µ_{N+1}}

Online loop:
1) For any new µ ∈ D, compute u_N(µ) ∈ W_{N_max}, solution of (2)
2) Compute the output functional RCS(u_N(µ), d̂)
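The offline loop above can be condensed into a few lines. A sketch under toy assumptions: `truth_solve` stands in for the expensive BEM solve and `estimator` for the a posteriori estimate η(µ); here the estimator is simply the exact projection error of a smooth model family, not the EFIE residual estimator of the talk.

```python
import numpy as np

def greedy_rb(train_set, truth_solve, estimator, mu_init, n_max, tol):
    """Greedy selection of the snapshot parameters S_N (offline loop)."""
    S = [mu_init]
    W = []                                    # snapshots spanning W_N
    for _ in range(n_max):
        W.append(truth_solve(S[-1]))          # 1) truth solve at mu_N
        eta = [estimator(W, mu) for mu in train_set]   # 3ii) estimates
        i = int(np.argmax(eta))               # 4) worst parameter in Xi
        if eta[i] < tol:
            break
        S.append(train_set[i])                # 5) enrich S_N
    return W, S

# Toy parametrized "solutions": u_h(mu) = exp(-mu * x) on a grid.
x = np.linspace(0.0, 1.0, 100)
truth_solve = lambda mu: np.exp(-mu * x)

def estimator(W, mu):
    # Exact projection error onto span(W) (stand-in for eta(mu)).
    B = np.array(W).T
    coeff, *_ = np.linalg.lstsq(B, truth_solve(mu), rcond=None)
    return float(np.linalg.norm(truth_solve(mu) - B @ coeff))

train = list(np.linspace(0.0, 10.0, 50))
W, S = greedy_rb(train, truth_solve, estimator, mu_init=0.0,
                 n_max=20, tol=1e-8)
print(len(W), "snapshots selected")
```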

Efficiency: Affine assumption

Assumption:

a(w, v; µ) = Σ_{m=1}^M Θ^m(µ) a^m(w, v),
f(v; µ) = Σ_{m=1}^M Θ_f^m(µ) f^m(v),

where, for m = 1, ..., M,
Θ^m, Θ_f^m : D → C are µ-dependent functions,
a^m : V_h × V_h → C are µ-independent forms,
f^m : V_h → C are µ-independent forms.

Caution: this does not hold exactly in the framework of the EFIE!

a(u_h, v_h; µ) = ikZ ∫_Γ ∫_Γ (e^{ik|x−y|} / (4π|x−y|)) [ u_h(x)·v_h(y) − (1/k²) div_{Γ,x} u_h(x) div_{Γ,y} v_h(y) ] dx dy,
f(v_h; µ) = −∫_Γ (n × (p × n)) e^{ik x·k̂(θ,φ)} · v_h(x) dx.

Luckily this problem can be fixed (later in this talk); assume for now that the assumption holds approximately.

Efficiency: How to solve (2) in a fast way?

Offline: given W_N = span{ξ_i : i = 1, ..., N}, precompute

(A_m)_{ij} = a^m(ξ_j, ξ_i),   1 ≤ i, j ≤ N,
(F_m)_i = f^m(ξ_i),   1 ≤ i ≤ N.

Remark: this step depends on 𝒩 = dim(V_h). The size of A_m resp. F_m is N² resp. N.

Online: for a given parameter value µ ∈ D,
1) assemble (cost MN² resp. MN, depending on M and N)

A = Σ_{m=1}^M Θ^m(µ) A_m,   F = Σ_{m=1}^M Θ_f^m(µ) F_m,

2) solve A u_N(µ) = F (cost N³ for an LU factorization).

In the same vein we can compute the a posteriori estimate and the RCS output functional. Note that the computation time also depends on M!
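Under the affine assumption, the online stage is just a small dense assembly and solve. A sketch with a toy affine decomposition (the Θ functions and precomputed terms below are stand-ins, not the EFIE forms):

```python
import numpy as np

def online_solve(theta_a, theta_f, A_terms, F_terms, mu):
    """Online RB solve: assemble A(mu) = sum_m Theta^m(mu) A_m and
    F(mu) = sum_m Theta_f^m(mu) F_m, then solve the small N x N
    system.  Cost: O(M N^2) assembly + O(N^3) solve, fully
    independent of the truth dimension dim(V_h)."""
    A = sum(theta(mu) * Am for theta, Am in zip(theta_a, A_terms))
    F = sum(theta(mu) * Fm for theta, Fm in zip(theta_f, F_terms))
    return np.linalg.solve(A, F)    # coefficients of u_N(mu)

# Toy affine decomposition with M = 2 terms and N = 4.
A_terms = [np.eye(4), np.diag([1.0, 2.0, 3.0, 4.0])]
F_terms = [np.ones(4)]
theta_a = [lambda mu: 1.0, lambda mu: mu]
theta_f = [lambda mu: 1.0]

uN = online_solve(theta_a, theta_f, A_terms, F_terms, mu=2.0)
print(uN)  # [1/3, 1/5, 1/7, 1/9]
```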

Efficiency: Empirical Interpolation Method (EIM) (allows one to realize the affine assumption approximately)

Efficiency: EIM

Let f : Ω × D → C be such that f(·; µ) ∈ C⁰(Ω) for all µ ∈ D. The EIM is a procedure that provides parameters {µ_m}_{m=1}^M such that

I_M(f)(x; µ) = Σ_{m=1}^M α_m(µ) f(x; µ_m)

is a good approximation of f(x; µ) for all (x, µ) ∈ Ω × D. It also uses a greedy algorithm to pick the parameters {µ_m}_{m=1}^M.

[Plot: 1d illustration with f(x; µ) = x^µ and its 3-term expansion Σ_{m=1}^3 α_m(µ) x^{µ_m} on x ∈ [0, 1].]

Examples:
1) Non-singular part of the kernel function: G_k^ns(r) = G^ns(r; k) = (e^{ikr} − 1)/(4πr), r ∈ R₊, k ∈ R₊.
2) Incident plane wave: E^i(x; µ) = p e^{ik k̂(θ,φ)·x}, x ∈ Γ, µ ∈ D, with µ = (k, θ, φ).

[Grepl et al. 2007], [Maday et al. 2007]
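The greedy EIM construction can be sketched compactly on a snapshot matrix. This is a simplified variant (it picks the parameter and the magic point jointly as the maximizer of the current residual); the plane-wave family e^{ikx} below is a stand-in for the kernels listed above.

```python
import numpy as np

def eim(snaps, M, tol=1e-12):
    """Greedy EIM on a snapshot matrix snaps[i, j] = f(x_j; mu_i).
    Returns generators Q (one row per term), the magic-point indices
    and the indices of the picked parameters mu_m."""
    Q, idx, picked = [], [], []
    R = snaps.copy()
    for _ in range(M):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if abs(R[i, j]) < tol:
            break
        Q.append(R[i] / R[i, j])             # normalized generator q_m
        idx.append(j); picked.append(i)
        # Subtract the current interpolant from every snapshot; the
        # residual then vanishes at all magic points by construction.
        Qm = np.array(Q).T                   # (n_x, m)
        B = Qm[idx, :]                       # unit lower triangular
        alpha = np.linalg.solve(B, snaps[:, idx].T)
        R = snaps - (Qm @ alpha).T
    return np.array(Q), idx, picked

def interp_error(snaps, Q, idx):
    """Max error of I_M(f) over all snapshots."""
    Qm = Q.T
    alpha = np.linalg.solve(Qm[idx, :], snaps[:, idx].T)
    return float(np.abs(snaps - (Qm @ alpha).T).max())

# Plane-wave-like family f(x; k) = exp(i k x) on [0, 1], k in [1, 5].
xs = np.linspace(0.0, 1.0, 200)
ks = np.linspace(1.0, 5.0, 40)
snaps = np.exp(1j * np.outer(ks, xs))
Q, idx, picked = eim(snaps, M=12)
print(interp_error(snaps, Q, idx))
```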

Efficiency: EIM implementation for EFIE

1) Split the kernel function into its singular and non-singular parts:

G_k(r) = 1/(4πr) + G_k^ns(r).

2) Insert it into the sesquilinear form (up to the prefactor ikZ, which is itself parameter dependent and absorbed into the coefficient functions):

a(w, v; k) = ∫_Γ ∫_Γ (1/(4π|x−y|)) [ w(x)·v(y) − (1/k²) div_Γ w(x) div_Γ v(y) ] dx dy
           + ∫_Γ ∫_Γ G_k^ns(|x−y|) [ w(x)·v(y) − (1/k²) div_Γ w(x) div_Γ v(y) ] dx dy.

3) Replace the non-singular kernel function by its EIM interpolant G_k^ns(r) ≈ Σ_{m=1}^M α_m(k) G_{k_m}^ns(r):

a(w, v; k) ≈ ∫_Γ ∫_Γ (1/(4π|x−y|)) w(x)·v(y) dx dy
           − (1/k²) ∫_Γ ∫_Γ (1/(4π|x−y|)) div_Γ w(x) div_Γ v(y) dx dy
           + Σ_{m=1}^M α_m(k) ∫_Γ ∫_Γ G_{k_m}^ns(|x−y|) w(x)·v(y) dx dy
           − Σ_{m=1}^M (α_m(k)/k²) ∫_Γ ∫_Γ G_{k_m}^ns(|x−y|) div_Γ w(x) div_Γ v(y) dx dy.

The coefficients α_m(k) and α_m(k)/k² are parameter dependent, while the double integrals are parameter independent: the affine assumption is realized approximately.

In the same manner for the right-hand side: f(v; µ) ≈ Σ_{m=1}^{M_f} α_m^f(µ) ∫_Γ γ_t E^i(y; µ_m) · v(y) dy.

Numerical results for EIM

Test function: f(x; k) = (e^{ikx} − 1)/x, x ∈ (0, R_max], k ∈ [1, k_max].

[Left: picked parameters k_m in the parameter domain for k_max = 25, 50, 100. Right: relative interpolation error versus the length M of the expansion, decaying to about 1e-14, for the same three ranges.]

Numerical results for EIM

Test function: f(x; µ) = e^{ik k̂(θ,φ)·x}, x ∈ Γ, µ ∈ D, with µ = (k, θ), φ fixed, D = [1, k_max] × [0, π], on a given surface Γ.

[Left: relative interpolation error versus the expansion length M for k_max = 6.25, 12.5, 25. Right: picked parameters in the parameter domain for k_max = 25.]

Efficiency: Elementwise EIM / hp-interpolant

For large parameter domains the EIM expansion becomes long, which can become so severe that the online computing time of the RB solution is of the order of a direct computation. As a remedy, the parameter domain can be split adaptively into subelements, on each of which the function is approximated by its own magic point expansion.

Schematic illustration (2d parameter domain D): an element is split at its gravity centre, and refinement continues until a prescribed tolerance is reached on each subdomain.

- Only the parameter domain is refined.
- The generalization to any dimension of the parameter space is possible.
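The refinement loop above amounts to a recursive bisection of the parameter domain. A toy sketch, in which `eim_size` is a hypothetical stand-in for "EIM expansion length needed on this subdomain" (here simply proportional to the k-extent, mimicking the growth of M with the wave number range):

```python
import math

def hp_split(domain, eim_size, m_max, depth=0, max_depth=8):
    """Adaptively split a rectangular 2d parameter domain at its
    gravity centre until each leaf needs at most m_max EIM terms.
    domain = ((k_lo, k_hi), (theta_lo, theta_hi))."""
    if eim_size(domain) <= m_max or depth >= max_depth:
        return [domain]
    (k0, k1), (t0, t1) = domain
    km, tm = 0.5 * (k0 + k1), 0.5 * (t0 + t1)    # gravity centre
    leaves = []
    for kk in ((k0, km), (km, k1)):
        for tt in ((t0, tm), (tm, t1)):
            leaves += hp_split((kk, tt), eim_size, m_max,
                               depth + 1, max_depth)
    return leaves

# Hypothetical cost model: M grows linearly with the k-extent.
eim_size = lambda d: 10.0 * (d[0][1] - d[0][0])

leaves = hp_split(((1.0, 25.0), (0.0, math.pi)), eim_size, m_max=60)
print(len(leaves))  # 16 leaf elements
```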

Numerical results: elementwise EIM

Test function: f(x; µ) = e^{ik k̂(θ,φ)·x}, x ∈ Γ, µ ∈ D, with µ = (k, θ), φ fixed, D = [1, 25] × [0, π], on a given surface Γ.

[Left: relative error versus M for 1, 4, 16 and 64 elements. Right: number of elements versus M for tolerances 1e-8, 1e-10 and 1e-12.]

Conclusion: a reduction of M implies an algebraic increase of the number of elements (and degrees of freedom) needed; in this case #elements ≈ C·M^(−3.7). But it reduces the online computing time, at the cost of a longer offline procedure and more memory: a shift of workload from the online part to the offline part.

Numerical results: elementwise EIM

[Picked parameter values and EIM elements in the (k, θ) domain, tol = 1e-12: M = 218 without refinement, M = 151 after 1 refinement, M = 106 after 2 refinements, M = 74 after 3 refinements.]

Numerical results for the reduced basis method

Numerical results: test 1

2 parameters, µ = (k, θ) with D = [1, 25] × [0, π], φ = 0 fixed, on a given surface Γ.

[Left: picked parameters in the (k, θ) plane. Right: convergence of the RB approximation error and of the singular values versus N.]

relative error = max_µ ||u_h(µ) − u_N(µ)||_{L²(Ω)} / max_µ ||u_h(µ)||_{L²(Ω)}

Numerical results: test 1

2 parameters, µ = (k, θ) with D = [1, 25] × [0, π], φ = 0 fixed.

[Online computing times in seconds versus N for the RCS signal, the residual error and the LU factorization, for three EEIM configurations.]

Number of elements used for EEIM:
kernel function: 1, 4, 16; right hand side: 1, 4, 16; RCS: 4, 16, 64.

Numerical results: test 1

2 parameters, µ = (k, θ) with D = [1, 25] × [0, π], φ = 0 fixed. [RCS plots for θ = π/4 and for k = 25.]

More complex scatterer and parallelization

- 12620 complex double unknowns; the BEM matrix has 160 million complex double entries.
- 160 processors with distributed memory were used for the computations.
- Solving the linear system: cyclic distribution by ScaLAPACK (parallel LU factorization); matrix-matrix and matrix-vector multiplications are computed blockwise using BLACS/BLAS.

Numerical results: test 2

1 parameter, µ = k with D = [1, 25.5], (θ, φ) = (π/6, 0) fixed, on a given surface Γ.

[Convergence: relative error versus N, down to about 1e-8 for N = 25. Repartition of the 23 first picked parameters along k ∈ [1, 25.5].]

Numerical results: test 3

2 parameters, µ = (k, θ) with D = [1, 13] × [0, π], φ = 0 fixed, on a given surface Γ.

[Left: picked parameters in the (k, θ) plane. Right: convergence of the relative error versus N, down to about 1e-8 for N = 80.]

Current research - Multi-object scattering

Strategy:
1) Train a reduced basis for each type of geometry, accurate for all incident angles, polarizations and wave numbers.
2) Approximate the interaction matrices between each pair of bodies using the EIM → parametrization of the location of each object.
3) For a fixed wave number, incident angle and polarization, solve the problem in the reduced basis space using a Jacobi-type iteration scheme.

Remark: a reduced basis needs to be assembled only once for each new geometry, since the reduced basis is invariant under translation. Example: for a lattice of 100×100 identical objects we need to assemble one single reduced basis!
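Step 3 above can be sketched as a block-Jacobi sweep over the bodies. The blocks below are toy diagonally dominant matrices standing in for the reduced single-body systems and the EIM-compressed interaction matrices; well-separated scatterers interact weakly, which is what makes such an iteration converge.

```python
import numpy as np

def block_jacobi(A, f, n_iter=200, tol=1e-12):
    """Jacobi-type iteration over scatterers: each sweep solves every
    body's own (reduced) system against the currents of all other
    bodies from the previous sweep.
    A[i][j]: interaction matrix between bodies i and j; f[i]: RHS."""
    n = len(f)
    u = [np.zeros_like(fi) for fi in f]
    for _ in range(n_iter):
        u_new = [np.linalg.solve(
                     A[i][i],
                     f[i] - sum(A[i][j] @ u[j]
                                for j in range(n) if j != i))
                 for i in range(n)]
        if max(np.linalg.norm(a - b) for a, b in zip(u_new, u)) < tol:
            return u_new
        u = u_new
    return u

# Two weakly coupled toy bodies (strong block-diagonal dominance).
D = 4.0 * np.eye(3)
C = 0.2 * np.ones((3, 3))
A = [[D, C], [C, D]]
f = [np.ones(3), 2.0 * np.ones(3)]

u = block_jacobi(A, f)
# Cross-check against the monolithic solve of the full 6 x 6 system.
full = np.linalg.solve(np.block(A), np.concatenate(f))
print(np.allclose(np.concatenate(u), full))  # True
```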

Current research - Multi-object scattering

Benchmark available in the literature (IEEE Transactions on Antennas and Propagation, May 1971): endfire incidence for ka = 11.048, and from broadside to endfire incidence for ka = 7.41. [Plots: RCS versus kd, modal approach (line) versus experiment, for (a) ka = 7.41 and (b) ka = 11.048.]

Accuracy - 2 spheres

Two spheres at variable distance: relative error of the currents. [Plot: relative error versus the minimal distance between the objects, in fractions of a wavelength, for N = 160, 180, 200.]

Lattice of 6 × 6 spheres

The wave number is fixed to k = 3 for the simulation. The parameter is the incident angle φ ∈ [0, 2π], with θ fixed at 90 degrees. The RCS is also measured for φ ∈ [0, 2π] with θ fixed at 90 degrees.

Lattice of 6 × 6 spheres

[Plot: RCS, RB approximation versus truth approximation, as a function of the angle; input: angle of the incident plane wave, measurement: angle of the RCS.]

The Jacobi iterations need between 60 and 90 iterations, depending on the angle. The total "online" simulation time for the plot was around 3-5 minutes, which involved 360 computations/different angles.

Conclusions

- For the first time, the reduced basis method is applied to integral equations.
- EIM interpolation is an essential tool for parametrized integral equations, due to the parameter dependence of the kernel function.

Efficiency: for large parameter domains, EIM elements are used to speed up the online routine.

Current: promising initial results for multi-object scattering using RB.

Future: hp-RBM for large parameter domains (and dimensions); CFIE for wave number parametrization for scatterers with volume.

Thank you for your attention