ICES REPORT: Constructively Well-Posed Approximation Methods with Unity Inf-Sup and Continuity Constants


ICES REPORT 11-10, April 2011. Constructively Well-Posed Approximation Methods with Unity Inf-Sup and Continuity Constants, by Tan Bui-Thanh, Leszek Demkowicz, and Omar Ghattas. The Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas.

Reference: Tan Bui-Thanh, Leszek Demkowicz, and Omar Ghattas, "Constructively Well-Posed Approximation Methods with Unity Inf-Sup and Continuity Constants for Partial Differential Equations", ICES REPORT 11-10, The Institute for Computational Engineering and Sciences, The University of Texas at Austin, April 2011.
MATHEMATICS OF COMPUTATION, Volume 00, Number 0

CONSTRUCTIVELY WELL-POSED APPROXIMATION METHODS WITH UNITY INF-SUP AND CONTINUITY CONSTANTS FOR PARTIAL DIFFERENTIAL EQUATIONS

TAN BUI-THANH, LESZEK DEMKOWICZ, AND OMAR GHATTAS

Abstract. Starting from the generalized Lax–Milgram theorem and from the fact that the approximation error is minimized when the continuity and inf-sup constants are unity, we develop a theory that is guaranteed to deliver well-posed approximation methods with unity continuity and inf-sup constants. We demonstrate our single-framework theory on scalar linear hyperbolic equations to constructively derive two different hp finite element methods. The first one coincides with a least-squares discontinuous Galerkin method, while the other appears to be new. Both methods are trivially well-posed, as we shall show, and their optimal hp-convergence rates will be proved. The numerical results show that our new discontinuous finite element method is more accurate than the nodal discontinuous Galerkin methods.

1. Introduction

The work in this paper is inspired by the recent research on the discontinuous Petrov–Galerkin (DPG) method of Demkowicz and Gopalakrishnan [1, 2, 3] for partial differential equations. The method starts by partitioning the domain of interest into nonoverlapping elements. Variational formulations are posed for each element separately and then summed up to form a global variational statement. Elemental solutions are connected by introducing hybrid variables (also known as fluxes or traces) that live on the skeleton of the mesh. This is therefore a mesh-dependent variational approach in which both the bilinear and linear forms depend on the mesh under consideration. In general, the trial and test spaces are not related to each other. In the standard Bubnov–Galerkin (also known as Galerkin) approach, the trial and test spaces are identical, while they are different in a Petrov–Galerkin scheme. Traditionally, one chooses either a Galerkin or a Petrov–Galerkin
approach, and then proves consistency and stability in both the infinite and finite dimensional settings. The DPG method introduces a new paradigm in which one selects both trial and test spaces at the same time to satisfy well-posedness. In particular, one can select trial and test function spaces for which the continuity and inf-sup constants are unity. Given a finite dimensional trial subspace, the finite dimensional test space is constructed in such a way that the well-posedness of the finite dimensional setting is automatically inherited from the infinite dimensional counterpart. The DPG method in [2] starts with a given norm in the trial space and then seeks a norm in the test space in order to achieve unity continuity and inf-sup constants.

2010 Mathematics Subject Classification. Primary 65N30; Secondary 65N12, 65N15, 65N22.
Another DPG method in [3] fulfills the same goal but reverses the process, i.e., it looks for a norm in the trial space corresponding to a given norm in the test space. Clearly, this is one of the advantages of the DPG methodology, since it allows one to choose a norm of interest to work with while making the error optimal, i.e., smallest, in that norm. Furthermore, the DPG methods have been shown to be robust and to provide optimal hp-convergence. We shall not discuss the advantages of the DPG methods further here; the reader is referred to the original DPG papers [1, 2, 3] for more details. Here, we explore a different option. In particular, we shall not prescribe norms in either the trial or the test space. Instead, we let the problem speak out its natural energy norms. Our goal is to develop a single framework that adapts to the problem at hand, while automatically generating accurate finite element methods with trivial and guaranteed stability. Like the DPG methods, either the test or the trial basis functions can be chosen in our methods; the other has to be solved for. Depending on how one applies our theory to the problem under consideration, basis functions can be trivially obtained or solved for through a partial differential equation, as we shall show. The rest of the paper is organized as follows. In Section 2, we first develop a single framework for general variational problems that can be written in terms of bilinear and linear forms. Then, we develop a few analytical results that help us prove the optimality of the resulting finite dimensional approximations and their well-posedness. In Section 3 and Section 4, we apply our single framework, but using two different viewpoints, to linear hyperbolic equations of advection-reaction type. We shall show that the framework constructively leads to an existing stabilized hp discontinuous Galerkin (DG) method and a new hp finite element method for advection-reaction equations. We discuss the
characteristics of each method and their hp-convergence in detail. The chief purpose of the paper is to introduce our theory and to demonstrate its usefulness in deriving accurate and stable finite element methods for partial differential equations. We further strengthen our findings with a numerical example in Section 5, and Section 6 concludes the paper.

2. Abstract theory development

In this section, we develop a theory for constructive approximations of linear partial differential equations. Our goal is to construct finite dimensional approximations that are guaranteed to be trivially well-posed with unity continuity and inf-sup constants. Here, trivial well-posedness means that the well-posedness of the finite dimensional problems is trivially inherited from their infinite dimensional counterparts. The starting point of our theory, as will be shown, is not new, since our work is inspired by the recent discontinuous Petrov–Galerkin (DPG) methods of Demkowicz and Gopalakrishnan [1, 2, 3]. However, we shall point out the differences between our method and the existing DPG methods. Let U and V be Hilbert spaces over the real line (the generalization of our theory to the complex field is straightforward). Consider the following variational problem:

(1) Seek $u \in U$ such that $a(u, v) = \ell(v)$, $\forall v \in V$,
where $\ell(\cdot)$ is a linear form on V, and $a(\cdot, \cdot)$ is a bilinear form satisfying the continuity condition with continuity constant M,

(2) $a(u, v) \le M \|u\|_U \|v\|_V$,

the inf-sup condition with inf-sup constant $\gamma$,

(3a) $\exists \gamma > 0: \inf_{u \in U} \sup_{v \in V} \frac{a(u, v)}{\|u\|_U \|v\|_V} \ge \gamma$,

and the injectivity of the adjoint operator (to be defined),

(3b) $(a(u, v) = 0, \ \forall u \in U) \Rightarrow (v = 0)$.

If (2) and (3) hold, then by the generalized Lax–Milgram theorem [4, 5] (also known as the Banach–Nečas–Babuška theorem [6]), (1) has a unique solution, and the solution is stable in the following sense:

$\|u\|_U \le \frac{1}{\gamma} \|\ell\|_{V'}$,

where $V'$ is the topological dual of V. Note that, for convenience in writing, we have abused the notation $\sup_{v \in V}$ instead of $\sup_{v \in V, v \ne 0}$ (and similarly for inf). Now let $U_h^n \subset U$ and $V_h^n \subset V$ be finite dimensional trial and test spaces, and consider the following finite dimensional approximation problem:

(4) Seek $u_h \in U_h^n$ such that $a(u_h, v_h) = \ell(v_h)$, $\forall v_h \in V_h^n$.

If $\dim U_h^n = \dim V_h^n = n$ and the following discrete inf-sup condition holds,

(5) $\exists \gamma_h > 0: \inf_{u_h \in U_h^n} \sup_{v_h \in V_h^n} \frac{a(u_h, v_h)}{\|u_h\|_U \|v_h\|_V} \ge \gamma_h$,

then the finite dimensional problem (4) is well-posed by an application of the generalized Lax–Milgram theorem for finite dimensional problems (also known as Babuška's theorem [4, 7]). In general, however, the finite dimensional problem (4) does not inherit the well-posedness of its infinite dimensional counterpart (1), except in some special circumstances. One therefore has to prove the nontrivial discrete inf-sup condition [6]. In this paper, we constructively develop a class of finite dimensional approximations in which the discrete inf-sup condition (5) trivially follows from the continuous one (3a). Before doing so, let us first discuss the approximation error between the finite and infinite dimensional solutions. To begin, we recall the following projection result, in which the norms of the projection and its complement are equal.

Lemma 2.1. Let $U_h \subset U$ be a subspace
of a Hilbert space U. Suppose $P: U \to U_h$ is a projection, i.e., $P^2 = P$, and P is neither null nor the identity. Then $\|P\| = \|I - P\|$, where the norm $\|P\|$ is the operator norm induced by a norm on U, which is in turn induced by an inner product on U.

Proof. Many proofs of this result can be found in [8].
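The norm equality in the projection lemma above can be sanity-checked numerically. The sketch below is an illustration we add here (not part of the original text): it builds a rank-one oblique projection on $R^3$ and compares the spectral norms of $P$ and $I - P$, which are the operator norms induced by the Euclidean inner product.

```python
import numpy as np

# Oblique rank-one projection P = u v^T / (v . u).  It satisfies P @ P == P
# and is neither the zero map nor the identity on R^3, as the lemma requires.
u = np.array([1.0, 2.0, 0.5])
v = np.array([0.3, -1.0, 2.0])
P = np.outer(u, v) / (v @ u)
I = np.eye(3)

assert np.allclose(P @ P, P)          # idempotent: P is a projection
norm_P = np.linalg.norm(P, 2)         # spectral norm = induced operator norm
norm_IP = np.linalg.norm(I - P, 2)
print(norm_P, norm_IP)                # the two operator norms coincide
```

Since the projection is oblique, both norms exceed one, yet they agree to machine precision, as the lemma predicts.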
If we define P as $u_h = P u$ through $a(u_h, v_h) = a(u, v_h)$, $\forall v_h \in V_h^n$, then, by the discrete inf-sup condition (5), it is trivial to see that P is a projection operator. In addition, we can bound the norm of the projection operator P as follows.

Lemma 2.2. There holds $\|P\| \le \frac{M}{\gamma_h}$.

Proof. See [9] for a simple proof.

Now comes the projection error result.

Theorem 2.3 (Babuška [7]). Suppose that both the continuous problem (1) and the discrete problem (4) are well-posed. Then

$\|u - u_h\|_U \le \frac{M}{\gamma_h} \inf_{w_h \in U_h^n} \|u - w_h\|_U$.

Proof. A standard proof that uses Lemmas 2.1 and 2.2 can be found in [10, 9].

The following best approximation error result immediately follows from Theorem 2.3.

Corollary 1. If $M = \gamma_h$, then $\|u - u_h\|_U = \inf_{w_h \in U_h^n} \|u - w_h\|_U$.

In particular, $M = \gamma_h = 1$ satisfies Corollary 1. That is, if the continuity constant and the discrete inf-sup constant are unity, then the error incurred by the discrete approximation (4) is the best possible, i.e., it is smallest. Up to this point, although not in the form presented here, the theory has already been discussed in the DPG papers [1, 2, 3]. As can be seen, there are two spaces to work with, namely the trial and test spaces. The first DPG method [2] starts with a given norm in the trial space U and then seeks a norm in the test space V so that $M = \gamma = 1$. In the second DPG method [3], on the other hand, one defines a norm in U from a given norm in V such that $M = \gamma = 1$. Clearly, this is one of the advantages of the DPG methodology, since it allows one to choose a norm of interest to work with while making the error optimal, i.e., smallest, in that norm. In this paper, we explore a different option; namely, we shall not prescribe norms in either of the spaces U and V. Instead, we let the problem speak out its natural energy norms, so that the question of which norms to work with does not arise in our new approach. Our single-framework method will be applied to linear hyperbolic equations of advection-reaction type,
and we shall show that it constructively leads to an existing stabilized numerical method and a new one. Nevertheless, care must be taken, since our idea may not be applicable in cases where one prefers to work with particular norms. The rest of this section exploits the goal of having $M = \gamma = 1$ to some extent. In particular, we characterize a few properties of problems having $M = \gamma = 1$, and then present sufficient conditions for the infinite dimensional setting to have $M = \gamma = 1$. Next, we study constructions of finite dimensional subspaces $U_h^n$ and $V_h^n$ such that the well-posedness of the resulting finite dimensional approximation problems is trivially inherited from the infinite dimensional settings. In fact, as shall be shown, our method automatically delivers unity discrete continuity and inf-sup constants,
i.e., $M_h = \gamma_h = 1$. It should be pointed out that the rest of our theory is general, so that it applies not only to our method in this paper but also to the other DPG methods. We first note that the inf-sup condition (3a) is typically defined by taking first the supremum over the test space V and then the infimum over the trial space U. However, the following well-known result shows that, as far as well-posedness, and hence the continuity and inf-sup constants, is concerned, the distinction between the test and trial spaces is irrelevant. That is, there is no reason for us to favor one space over the other.

Lemma 2.4. The problem (1) is well-posed if and only if

$\exists \gamma > 0: \inf_{u \in U} \sup_{v \in V} \frac{a(u, v)}{\|u\|_U \|v\|_V} = \inf_{v \in V} \sup_{u \in U} \frac{a(u, v)}{\|u\|_U \|v\|_V} \ge \gamma$.

Proof. See [9] for a proof.

We first make the following simple observation.

Lemma 2.5. The following are equivalent:
(i) $M = \gamma = 1$;
(ii) $\forall u \in U$ we have $\|u\|_U = \sup_{v \in V} \frac{a(u, v)}{\|v\|_V}$;
(iii) $\forall v \in V$ we have $\|v\|_V = \sup_{u \in U} \frac{a(u, v)}{\|u\|_U}$.

Proof. By Lemma 2.4, it is sufficient to prove the equivalence of (i) and (ii). (i) ⇒ (ii): From the continuity condition we have $\sup_{v \in V} \frac{a(u, v)}{\|v\|_V} \le \|u\|_U$, which, together with the inf-sup condition, implies (ii). (ii) ⇒ (i): (ii) is equivalent to

$\sup_{v \in V} \frac{a(u, v)}{\|v\|_V} \le \|u\|_U$ and $\sup_{v \in V} \frac{a(u, v)}{\|v\|_V} \ge \|u\|_U$,

which implies (i).

The following useful result will be used as a guideline to construct the natural norms in the U and V spaces such that $M = \gamma = 1$.

Theorem 2.6. Suppose the continuity condition holds with unity continuity constant, i.e., $a(u, v) \le \|u\|_U \|v\|_V$. Then $M = \gamma = 1$ if either of the following conditions holds:
i) For each $u \in U \setminus \{0\}$, there exists $v_u \in V \setminus \{0\}$ such that $a(u, v_u) = \|u\|_U \|v_u\|_V$.
ii) For each $v \in V \setminus \{0\}$, there exists $u_v \in U \setminus \{0\}$ such that $a(u_v, v) = \|u_v\|_U \|v\|_V$.
Proof. We shall show that i) is a sufficient condition for $M = \gamma = 1$ to hold; an analogous proof can be given for ii). It is straightforward to see that

$\|u\|_U = \frac{a(u, v_u)}{\|v_u\|_V} \le \sup_{v \in V} \frac{a(u, v)}{\|v\|_V} \le \|u\|_U$,

and Lemma 2.5 concludes the proof.

Remark 2.7. In general, the continuity and inf-sup conditions are not related to each other, and it is typically more difficult to establish the latter. However, Theorem 2.6 shows that if the continuity constant is unity and the equality is attainable, then the continuity condition actually implies the inf-sup condition, and the inf-sup constant is unity as well. This important observation is the key result in this paper and will be exploited throughout for the advection-reaction problem. It should be pointed out that if both conditions in Theorem 2.6 hold, then the 3-tuple $(U, V, a(\cdot, \cdot))$ is known as a dual pair; that is, the bilinear form $a(\cdot, \cdot)$ puts U and V in duality. In the remainder of the paper, we call the norms in the U and V spaces optimal norms if both the continuity and inf-sup constants are unity in these norms. Moreover, we also call the pair u and $v_u$ (and likewise $u_v$ and v) the optimal trial and test functions, respectively. Here, optimality is in the sense of Corollary 1. We are now in a position to construct the approximation subspaces $U_h^n$ and $V_h^n$ such that the discrete continuity and inf-sup constants are unity.

Lemma 2.8. Let the assumptions of Theorem 2.6 hold, respectively, for items i) and ii) below.
i) Let $U_h^n \subset U$ be a subspace, and construct
$V_h^n = \operatorname{span}\{v_{u_h} : u_h \in U_h^n,\ a(u_h, v_{u_h}) = \|u_h\|_U \|v_{u_h}\|_V\}$.
ii) Let $V_h^n \subset V$ be a subspace, and construct
$U_h^n = \operatorname{span}\{u_{v_h} : v_h \in V_h^n,\ a(u_{v_h}, v_h) = \|u_{v_h}\|_U \|v_h\|_V\}$.
If the pair of test space $V_h^n$ and trial space $U_h^n$ is constructed by either i) or ii), then $M_h = \gamma_h = 1$, and the discrete problem (4) is well-posed if $\dim U_h^n = \dim V_h^n$.

Proof. Again, it is sufficient to show item i). $M_h = 1$ is a direct consequence of $M = 1$. By
construction, we have, for each $u_h \in U_h^n$,

$\|u_h\|_U = \frac{a(u_h, v_{u_h})}{\|v_{u_h}\|_V} = \sup_{v \in V} \frac{a(u_h, v)}{\|v\|_V} = \sup_{v_h \in V_h^n} \frac{a(u_h, v_h)}{\|v_h\|_V}$,

which implies $\gamma_h = 1$. The well-posedness of the discrete problem is now clear by Babuška's theorem.

It can be seen that Theorem 2.6 and Lemma 2.8 do not explicitly specify either the optimal test function $v_u$ or the optimal trial function $u_v$. A general-purpose approach for choosing the optimal pair of functions is through the Riesz representation theorem, as we now show. Moreover, if a basis of the trial space $U_h^n$ is specified, we can determine the corresponding basis in the test space (and vice versa) so that the finite dimensional problem is well-posed with $M_h = \gamma_h = 1$.
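In a concrete finite dimensional setting, the discrete constants can be computed directly, which makes the role of $M_h$ and $\gamma_h$ tangible. The sketch below is our added illustration (random matrices rather than any particular discretization): with the bilinear form stored as a matrix A with entries $a(\varphi_j, \psi_i)$ and the trial/test inner products as Gram matrices $G_u = L_u L_u^T$ and $G_v = L_v L_v^T$, the discrete continuity and inf-sup constants are the largest and smallest singular values of $B = L_v^{-1} A L_u^{-T}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))           # A[i, j] = a(phi_j, psi_i), a random stand-in
Gu = np.eye(n)                            # trial Gram matrix (orthonormal basis assumed)
Gv = np.eye(n)                            # test Gram matrix

Lu = np.linalg.cholesky(Gu)
Lv = np.linalg.cholesky(Gv)
B = np.linalg.solve(Lv, A @ np.linalg.inv(Lu).T)
s = np.linalg.svd(B, compute_uv=False)    # singular values, descending
M_h, gamma_h = s[0], s[-1]                # discrete continuity and inf-sup constants
print(M_h, gamma_h)
```

The point is that the quotient $a(u_h, v_h)/(\|u_h\|_U \|v_h\|_V)$ equals $(y^T A x)/(\|L_u^T x\| \|L_v^T y\|)$ in coefficient vectors x, y, and its extreme values over all nonzero x, y are exactly the extreme singular values of B.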
Theorem 2.9. Define the map $T: U \ni u \mapsto Tu \in V'$ as $\langle Tu, v \rangle_{V' \times V} = a(u, v)$. Denote by $v_{Tu}$ the Riesz representation of Tu in V. Suppose $a(\cdot, \cdot)$ is continuous with unity constant and assumption i) of Theorem 2.6 holds. Take $U_h^n \subset U$ and define $V_h^n = \operatorname{span}\{v_{Tu_h} : u_h \in U_h^n\}$. Then the following hold:
(i) $M_h = \gamma_h = 1$.
(ii) Let $U_h^n = \operatorname{span}\{\varphi_i\}_{i=1}^n$, where $\varphi_i \in U$, $i = 1, \ldots, n$. Then $\{v_{T\varphi_i}\}_{i=1}^n$ is a basis of $V_h^n$.

Proof. (i) By Lemma 2.8, it is sufficient to show that $a(u_h, v_{Tu_h}) = \|u_h\|_U \|v_{Tu_h}\|_V$. But, by the proof of Theorem 2.6, the definition of the norm in V, and the Riesz representation theorem, one readily has

$\|u_h\|_U = \sup_{v \in V} \frac{a(u_h, v)}{\|v\|_V} = \sup_{v \in V} \frac{\langle Tu_h, v \rangle_{V' \times V}}{\|v\|_V} = \|Tu_h\|_{V'} = \|v_{Tu_h}\|_V$,

which yields

$a(u_h, v_{Tu_h}) = \langle Tu_h, v_{Tu_h} \rangle_{V' \times V} = \|v_{Tu_h}\|_V^2 = \|u_h\|_U \|v_{Tu_h}\|_V$.

(ii) By virtue of the Riesz representation theorem, $v_{Tu}$ is linear in Tu. Together with the linearity of T, we conclude that $V_h^n = \operatorname{span}\{v_{T\varphi_i}\}_{i=1}^n$. It remains to prove that the set $\{v_{T\varphi_i}\}_{i=1}^n$ is linearly independent. Assume, on the contrary, that there exists $i \in \{1, \ldots, n\}$ such that $\alpha_i \ne 0$ and

$\sum_{i=1}^n \alpha_i v_{T\varphi_i} = 0 \ \Rightarrow\ v_{T(\sum_{i=1}^n \alpha_i \varphi_i)} = 0$,

which, by the injectivity of T, implies

$\left\| \sum_{i=1}^n \alpha_i \varphi_i \right\|_U = \left\| T\left( \sum_{i=1}^n \alpha_i \varphi_i \right) \right\|_{V'} = 0$,

which, by the independence of $\{\varphi_i\}_{i=1}^n$, in turn implies the contradiction $\alpha_i = 0$, $i = 1, \ldots, n$.

If we call the result in Theorem 2.9 the primal approach, then using Lemma 2.4 we readily have the following dual analog.

Theorem 2.10. Define the adjoint map $T': V \ni v \mapsto T'v \in U'$ as $\langle T'v, u \rangle_{U' \times U} = a(u, v)$. Assume that $T'$ is injective, i.e., (3b) holds. Denote by $u_{T'v}$ the Riesz representation of $T'v$ in U. Suppose $a(\cdot, \cdot)$ is continuous with unity constant and assumption ii) of Theorem 2.6 holds. Take $V_h^n \subset V$ and define

$U_h^n = \operatorname{span}\{u_{T'v_h} : v_h \in V_h^n\}$.

Then the following hold:
(i) $M_h = \gamma_h = 1$.
(ii) Let $V_h^n = \operatorname{span}\{\phi_i\}_{i=1}^n$, where $\phi_i \in V$, $i = 1, \ldots, n$. Then $\{u_{T'\phi_i}\}_{i=1}^n$ is a basis of $U_h^n$.
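Theorem 2.9 has a transparent matrix analogue, which we sketch here as an added illustration (not part of the original text): if the test inner product is the one induced by the Riesz representers of $T u_h$, the test Gram matrix becomes $G_v = A G_u^{-1} A^T$, and then every singular value of the normalized stiffness matrix $L_v^{-1} A L_u^{-T}$ equals one, i.e., $M_h = \gamma_h = 1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 3 * np.eye(n)   # invertible "stiffness" matrix (random stand-in)
Gu = np.eye(n)                                    # given trial Gram matrix
Gv = A @ np.linalg.solve(Gu, A.T)                 # test Gram induced by the representers of T u

Lu = np.linalg.cholesky(Gu)
Lv = np.linalg.cholesky(Gv)
B = np.linalg.solve(Lv, A @ np.linalg.inv(Lu).T)  # normalized stiffness matrix
s = np.linalg.svd(B, compute_uv=False)
print(s)                                          # every singular value equals 1
```

The identity behind the comment is immediate: $B B^T = L_v^{-1} A G_u^{-1} A^T L_v^{-T} = L_v^{-1} G_v L_v^{-T} = I$, so both the continuity and inf-sup constants of the discrete pair are exactly one, independently of the trial basis.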
In the sequel, we shall not distinguish $v_u$ from $v_{Tu}$, nor $u_v$ from $u_{T'v}$, since we shall work exclusively with the Riesz representations. We will use only the primal approach to investigate infinite dimensional settings and then deduce the properties of the resulting finite dimensional settings. That is, we start with a basis function in the trial space U and then derive the corresponding optimal basis function in the test space V. Of course, we can also take the dual approach, i.e., start with a basis in the test space V and then seek the corresponding optimal basis in the trial space U. This will likewise lead to well-posed approximation methods with $M_h = \gamma_h = 1$ by Theorem 2.10. However, the approximability is then questionable, since the resulting finite dimensional trial space is no longer designed to accurately approximate the exact solution.

3. The first well-posed approximation of linear scalar hyperbolic equations

In this section, we discuss in detail how to apply the primal method developed in Section 2 to a weak formulation of advection-reaction problems. The consistency and well-posedness of the infinite dimensional setting are analyzed at length. We shall use the Cauchy–Schwarz inequality to detect the natural norms so that the bilinear form under consideration is continuous with $M = 1$. The conditions for equality to be achievable in the Cauchy–Schwarz inequality then allow us to find the optimal pairs of functions in U and V. In fact, the Riesz representation turns out to be a candidate for equality to happen. Using this fact along with Theorem 2.9, we obtain a finite dimensional approximation method with guaranteed well-posedness and $M_h = \gamma_h = 1$.

3.1. Infinite dimensional setting. Our problem of interest is the first order scalar linear hyperbolic equation of the form

(6a) $\beta \cdot \nabla u + \mu u = f$, in $\Omega$,
(6b) $u = g$, on $\Gamma$,

where $\Gamma = \{x \in \partial\Omega : n(x) \cdot \beta < 0\}$ is the inflow boundary, and $n(x)$ denotes the outward normal vector at x on the boundary $\partial\Omega$.
Assume $\beta \in [W^{1,\infty}(\Omega)]^d$, with $d \in \{1, 2, 3\}$ denoting the dimension of the problem, $\mu \in L^\infty(\Omega)$, $f \in L^2(\Omega)$, and $g \in L^2_{\beta n}(\Gamma)$ with

$L^2_{\beta n}(\Gamma) = \left\{ w : \int_\Gamma |\beta \cdot n|\, w^2\, d\Gamma < \infty \right\}$,

and $u \in H^1_\beta(\Omega)$ with

$H^1_\beta(\Omega) = \left\{ u \in L^2(\Omega) : \beta \cdot \nabla u \in L^2(\Omega) \right\}$.

The following well-posedness result for the transport equation (6) is proved in [11].

Lemma 3.1. Let $W = \{u \in H^1_\beta(\Omega) : u|_\Gamma = 0\}$ and define $W \ni u \mapsto Tu = \beta \cdot \nabla u + \mu u \in L^2(\Omega)$. Suppose that $\beta$ is a filling field, i.e., there exists a characteristic line of the vector field $\beta$ starting from $\Gamma$ and arriving at (almost every) $x \in \Omega$ in finite time. Then T is a bijective map from W to $L^2(\Omega)$.
We partition the domain $\Omega$ into $N^{el}$ nonoverlapping elements $K_j$, $j = 1, \ldots, N^{el}$, such that $\Omega_h = \cup_{j=1}^{N^{el}} K_j$ and $\Omega = \overline{\Omega_h}$. Here, h is defined as $h = \max_{j \in \{1, \ldots, N^{el}\}} \operatorname{diam}(K_j)$. In addition, we denote by $E_h$, with cardinal number $N^{ed}$, the set of all faces in the mesh, namely, the mesh skeleton. In this paper, the term "faces" is used to indicate either edges of 2-D elements or faces of 3-D elements. Finally, we require $\beta \cdot n \in L^\infty(e)$ for $e = 1, \ldots, N^{ed}$, where n is the normal vector on face e. Multiplying (6a) by a test function v, integrating by parts, and introducing the single-valued flux q at the element interfaces, we have

(7) $\sum_{j=1}^{N^{el}} \int_{K_j} u\,[-\nabla \cdot (\beta v) + \mu v]\, dx + \int_{\partial K_j} 1_{\partial K_j \setminus \Gamma}\, \beta \cdot n\, q v\, dx = \sum_{j=1}^{N^{el}} \int_{K_j} f v\, dx - \int_{\partial K_j \cap \Gamma} \beta \cdot n\, g v\, dx$,

with $1_{\partial K_j \setminus \Gamma}$ denoting the indicator function (also known as the characteristic function) of $\partial K_j \setminus \Gamma$. Clearly, for elements with characteristic faces, i.e., $\beta \cdot n = 0$ on $\partial K_j$, the boundary integrals corresponding to these faces simply drop out, and q is allowed to be undefined on these boundaries. Next, integrating by parts one more time gives

(8) $\sum_{j=1}^{N^{el}} \int_{K_j} (\beta \cdot \nabla u + \mu u) v\, dx + \int_{\partial K_j} \beta \cdot n \left( 1_{\partial K_j \setminus \Gamma}\, q - u \right) v\, dx = \sum_{j=1}^{N^{el}} \int_{K_j} f v\, dx - \int_{\partial K_j \cap \Gamma} \beta \cdot n\, g v\, dx$.

If we choose $v|_{K_j} \in L^2(K_j)$, then the trace $v|_{\partial K_j}$ is not defined. Therefore, we introduce a new hybrid variable r that lives on the skeleton of the mesh such that $r|_{\partial K_j} \in L^2_{\beta n}(\partial K_j)$. Unlike q, which is single-valued on a face of the skeleton, r is allowed to take two values, depending on the side of that face. With the introduction of r, (8) can be rewritten as

(9) $a(\mathbf{u}, \mathbf{v}) = \sum_{j=1}^{N^{el}} \int_{K_j} (\beta \cdot \nabla u + \mu u) v\, dx + \int_{\partial K_j} \beta \cdot n \left( 1_{\partial K_j \setminus \Gamma}\, q - u \right) r\, dx = \ell(\mathbf{v}) = \sum_{j=1}^{N^{el}} \int_{K_j} f v\, dx - \int_{\partial K_j \cap \Gamma} \beta \cdot n\, g r\, dx$,

with $\mathbf{u} = (u, q)$, $\mathbf{v} = (v, r)$. We define the trial and test spaces as

$U = \{\mathbf{u} : \mathbf{u}|_{K_j} \in H^1_\beta(K_j) \times L^2_{\beta n}(\partial K_j)\} = H^1_\beta(\Omega_h) \times L^2_{\beta n}(E_h)$,
$V = \{\mathbf{v} : \mathbf{v}|_{K_j} \in L^2(K_j) \times L^2_{\beta n}(\partial K_j)\} = L^2(\Omega_h) \times L^2_{\beta n}(E_h)$.

We first study the consistency of the weak formulation (9).
Proposition 1 (Consistency). If u is a solution of (6), then $(u, q = u|_{E_h})$ is a solution of (9). Conversely, if $(u, q) \in (L^2(\Omega) \cap H^1_\beta(\Omega_h)) \times L^2_{\beta n}(E_h)$ is a solution of (9), then u is a solution of (6).

Proof. If u is a solution of (6), then it is straightforward to see that $(u, q = u|_{E_h})$ is a solution of (9). Conversely, let $(u, q) \in U$, with $u \in L^2(\Omega) \cap H^1_\beta(\Omega_h)$, be a solution of (9) for all $(v, r) \in V$. First, taking v = 0 and $r \in L^2_{\beta n}(e)$, where $e \subset \partial K_j$ is an internal or outflow face for some $j \in \{1, \ldots, N^{el}\}$, yields $\beta \cdot n\, q = \beta \cdot n\, u$. This implies, for an internal face, $\beta \cdot n\, u^- = \beta \cdot n\, u^+$. Similarly, for an inflow face we obtain $\beta \cdot n\, u = \beta \cdot n\, g$. Next, taking $v \in D(K_j)$ and r = 0 (recall that $D(K_j)$ is the space of test functions on $K_j$), (9) implies

(10) $\beta \cdot \nabla u + \mu u = f$, in $K_j$, $j = 1, \ldots, N^{el}$,

in the distributional sense. Since $\beta \cdot \nabla u + \mu u, f \in L^2(K_j)$, (10) also holds in $L^2(K_j)$. Finally, taking $v \in D(\Omega)$, integrating (10) by parts, and using $\beta \cdot n\, u^- = \beta \cdot n\, u^+$ and $u \in L^2(\Omega)$, we obtain the identity

$-\int_\Omega u\, \beta \cdot \nabla v\, dx = \int_\Omega (f - \mu u + u \nabla \cdot \beta)\, v\, dx$,

which implies, in the distributional sense,

$\nabla \cdot (\beta u) = f - \mu u + u \nabla \cdot \beta \in L^2(\Omega)$,

which in turn implies $\beta \cdot \nabla u \in L^2(\Omega)$. Hence, (10) is in fact valid globally in $\Omega$.

Guided by Theorem 2.6, we seek norms in the spaces U and V such that the continuity constant is unity and the equality in the continuity condition is achievable. An approach to attaining this goal is to apply the Cauchy–Schwarz inequality, i.e.,

(11) $a(\mathbf{u}, \mathbf{v}) \le \sum_{j=1}^{N^{el}} \|\beta \cdot \nabla u + \mu u\|_{L^2(K_j)} \|v\|_{L^2(K_j)} + \|1_{\partial K_j \setminus \Gamma}\, q - u\|_{L^2_{\beta n}(\partial K_j)} \|r\|_{L^2_{\beta n}(\partial K_j)}$
$\le \underbrace{\left( \sum_{j=1}^{N^{el}} \|\beta \cdot \nabla u + \mu u\|^2_{L^2(K_j)} + \|1_{\partial K_j \setminus \Gamma}\, q - u\|^2_{L^2_{\beta n}(\partial K_j)} \right)^{1/2}}_{\|\mathbf{u}\|_U} \underbrace{\left( \sum_{j=1}^{N^{el}} \|v\|^2_{L^2(K_j)} + \|r\|^2_{L^2_{\beta n}(\partial K_j)} \right)^{1/2}}_{\|\mathbf{v}\|_V}$.

It is clear that the functional $\|\cdot\|_V$ defined above is a norm. Before showing that the functional $\|\cdot\|_U$ is indeed a norm, let us check whether the equality is attainable.
Since the Cauchy–Schwarz inequalities are employed, one easily sees that if $\mathbf{v}_u = (v_u, r_u)$ is defined by

(12a) $v_u = \beta \cdot \nabla u + \mu u$, in $K_j$,
(12b) $r_u = \operatorname{sgn}(\beta \cdot n) \left( 1_{\partial K_j \setminus \Gamma}\, q - u \right)$, on $\partial K_j$,

then the equality is attainable, i.e., $a(\mathbf{u}, \mathbf{v}_u) = \|\mathbf{u}\|_U \|\mathbf{v}_u\|_V$. Notice that $\mathbf{v}_u$ is in fact the Riesz representation of $T\mathbf{u}$ (see Theorem 2.9 for the definition of T). At this point, we immediately have the well-posedness of the weak formulation (9).

Theorem 3.2 (Well-Posedness). Assume that $\beta$ is a filling field and $0 < c_1 \le \|\beta \cdot n\|_{L^\infty(\partial K_j)} \le c_2 < \infty$ for $j = 1, \ldots, N^{el}$. Then the weak formulation (9) is well-posed with $M = \gamma = 1$ in the norms defined in (11).

Proof. $M = \gamma = 1$ is a direct consequence of Theorem 2.6. What remains to be proved is the adjoint injectivity condition (3b). Let $\mathbf{v} \in V$ and assume that

(13) $a(\mathbf{u}, \mathbf{v}) = 0, \quad \forall \mathbf{u} \in U$.

Taking u = 0, (13) implies

$\sum_{j=1}^{N^{el}} \int_{\partial K_j} \beta \cdot n\, 1_{\partial K_j \setminus \Gamma}\, q r\, dx = 0, \quad \forall q \in L^2_{\beta n}(E_h)$,

which in turn implies r = 0 on $E_h \setminus \Gamma$, since the map $L^2_{\beta n}(\partial K_j) \ni q \mapsto \beta \cdot n\, q \in L^2_{\beta n}(\partial K_j)$ is surjective due to $0 < c_1 \le \|\beta \cdot n\|_{L^\infty(\partial K_j)} \le c_2 < \infty$. Now, if $u \in W$, then (13) implies

$\int_\Omega (\beta \cdot \nabla u + \mu u)\, v\, dx = 0, \quad \forall u \in W$,

which, together with the surjectivity of $Tu = \beta \cdot \nabla u + \mu u$ from Lemma 3.1, yields v = 0 in $\Omega$. Finally, with v = 0 in $\Omega$ and r = 0 on $E_h \setminus \Gamma$, (13) becomes

$\sum_{j=1}^{N^{el}} \int_{\partial K_j \cap \Gamma} \beta \cdot n\, u r\, dx = 0, \quad \forall u \in H^1_\beta(\Omega_h)$.

Since the trace map $H^1_\beta(K_j) \ni u \mapsto u|_{\partial K_j} \in L^2_{\beta n}(\partial K_j)$ is surjective, the preceding equation shows that r = 0 on $\Gamma$. Hence $\mathbf{v} = 0$.

Having proved the consistency and well-posedness of the weak formulation (9), we proceed with finding optimal pairs of trial and test basis functions. For basis functions of the form $\boldsymbol{\phi} = (0, \phi) \in U$, where $\phi$ is a function in $L^2_{\beta n}(E_h)$, the corresponding basis functions in V are given by, for $j = 1, \ldots, N^{el}$,

(14a) $v_\phi = 0$, in $K_j$,
(14b) $r_\phi = 1_{\partial K_j \setminus \Gamma}\, \phi\, \operatorname{sgn}(\beta \cdot n)$, on $\partial K_j$.

Similarly, for basis functions of the form $\boldsymbol{\varphi} = (\varphi, 0)$, where $\varphi \in H^1_\beta(\Omega_h)$, the corresponding basis functions in V are given by, for $j = 1, \ldots, N^{el}$,

(15a) $v_\varphi = \beta \cdot \nabla \varphi + \mu \varphi$, in $K_j$,
(15b) $r_\varphi = -\varphi\, \operatorname{sgn}(\beta \cdot n)$, on $\partial K_j$.
Next, it is natural to substitute the test functions (14) or (15) into (9) to establish equations to solve for the unknowns $\mathbf{u} = (u, q)$. Let us proceed with the generic test basis in (14) first. If $\phi$ is a nonzero function on the interface $\partial K_i \cap \partial K_j$ of elements $K_i$ and $K_j$ and zero elsewhere, then testing the weak formulation (9) with $\mathbf{v}_\phi$ defined in (14) turns it into

$\int_{\partial K_i \cap \partial K_j} \beta \cdot n\, (q - \{\{u\}\})\, \phi\, dx = 0, \quad \forall \phi \in L^2_{\beta n}(\partial K_i \cap \partial K_j)$,

where $\{\{u\}\} = \frac{u^- + u^+}{2}$ denotes the average; the positive and negative signs indicate the element interior and exterior, respectively. Since $(q - \{\{u\}\}) \in L^2_{\beta n}(\partial K_i \cap \partial K_j)$, the preceding equation implies $q = \{\{u\}\}$ on $\partial K_i \cap \partial K_j$. Similarly, taking $\phi$ to be a nonzero function on $(\partial\Omega \setminus \Gamma) \cap \partial K_j$ and zero elsewhere, and then testing (9) with $\mathbf{v}_\phi$ defined in (14), we obtain $q = u$ on $\partial\Omega \cap \partial K_j$. On the other hand, due to the indicator function $1_{\partial K_j \setminus \Gamma}$, there is no test function $r_\phi$ on the inflow boundary faces. In summary, testing with the test basis (14), the unknown q on the skeleton is found explicitly as

(16) $q = \begin{cases} \{\{u\}\} & \text{on } \partial K_i \cap \partial K_j, \\ u & \text{on } (\partial\Omega \setminus \Gamma) \cap \partial K_j. \end{cases}$

As a result, the trial space can be rewritten as

$U = \{u : u|_{K_j} \in H^1_\beta(K_j)\} = H^1_\beta(\Omega_h)$,

and the norm in U now becomes

(17) $\|u\|_U = \left( \sum_{j=1}^{N^{el}} \|\beta \cdot \nabla u + \mu u\|^2_{L^2(K_j)} + \frac{1}{2} \|[u]\|^2_{L^2_{\beta n}(\partial K_j \setminus \Gamma)} + \|u\|^2_{L^2_{\beta n}(\partial K_j \cap \Gamma)} \right)^{1/2}$.

We are now in a position to prove that (17) indeed defines a norm in U.

Proposition 2. Assume that $\beta$ is a filling field. Then the functional $\|\cdot\|_U : U \to [0, \infty)$ defined in (17) is a norm.

Proof. The fact that the functional $\|\cdot\|_U$ satisfies positive homogeneity and the triangle inequality is obvious. Therefore, it remains to prove that $\|u\|_U = 0$ implies u = 0. We first consider elements whose faces are subsets of the inflow boundary. In this case, $\|u\|_U = 0$ implies

(18a) $\beta \cdot \nabla u + \mu u = 0$, in $K_j$,
(18b) $u = 0$, on $\partial K_j \cap \Gamma$,
(18c) $[u] = 0$, on $\partial K_j \setminus \Gamma$.

By Lemma 3.1, u = 0 in $K_j$ is the unique solution of (18a) and (18b). Since $\beta$ is a filling field, $\partial K_j \setminus \Gamma$ must contain inflow boundaries of other adjacent elements.
However, (18c) implies that u = 0 on these inflow boundaries. By induction, we conclude that u is zero on $\Omega_h$, and this ends the proof.
Next, substituting the optimal test basis functions (15) into (9), we have

(19) $\sum_{j=1}^{N^{el}} \int_{K_j} (\beta \cdot \nabla u + \mu u)(\beta \cdot \nabla \varphi + \mu \varphi)\, dx - \int_{\partial K_j} |\beta \cdot n| \left( 1_{\partial K_j \setminus \Gamma}\, q - u \right) \varphi\, dx = \sum_{j=1}^{N^{el}} \int_{K_j} f (\beta \cdot \nabla \varphi + \mu \varphi)\, dx + \int_{\partial K_j \cap \Gamma} |\beta \cdot n|\, g \varphi\, dx$.

We can simplify (19) further by using the explicit value of q in (16), renaming $\varphi$ to v, and rewriting the bilinear and linear forms in (9) as

(20) $a(u, v) = \sum_{j=1}^{N^{el}} \int_{K_j} (\beta \cdot \nabla u + \mu u)(\beta \cdot \nabla v + \mu v)\, dx + \frac{1}{2} \int_{\partial K_j \setminus \partial\Omega} |\beta \cdot n|\, [u] v\, dx + \int_{\partial K_j \cap \Gamma} |\beta \cdot n|\, u v\, dx$,
$\ell(v) = \sum_{j=1}^{N^{el}} \int_{K_j} f (\beta \cdot \nabla v + \mu v)\, dx + \int_{\partial K_j \cap \Gamma} |\beta \cdot n|\, g v\, dx$,

where the jump operator $[u] = u^- - u^+$ is employed. It is important to point out that the weak formulation (20) is completely equivalent to (9), except that the auxiliary hybrid variables q and r are now eliminated, and that the trial and test spaces are now the same, i.e., $U = V = H^1_\beta(\Omega_h)$.

3.2. Finite dimensional setting and convergence analysis. Now, we select the same finite dimensional subspace for both trial and test functions, i.e., $V_h^n = U_h^n = \operatorname{span}\{\varphi_i \in U, i = 1, \ldots, n\}$. The consistency of the weak formulation (9) on the finite dimensional subspaces $U_h^n$ and $V_h^n$ is automatically inherited from Proposition 1. On the other hand, using Lemma 2.8, we immediately have the well-posedness of the finite dimensional approximate problem.

Theorem 3.3. Let the bilinear form $a(\cdot, \cdot)$ and the linear form $\ell(\cdot)$ be defined as in (20). The following problem:

Seek $u_h \in U_h^n$ such that $a(u_h, v_h) = \ell(v_h)$, $\forall v_h \in V_h^n$,

is well-posed with $M_h = \gamma_h = 1$. That is, our effort in Section 2 now pays off in the trivial well-posedness of the finite dimensional problem, which is typically not the case when using other methods [6]. In order to use polynomial approximation results [12, 13, 14, 15], we specify the finite dimensional subspaces $U_h^n$ and $V_h^n$ as

$U_h^n = \{u \in H^1_\beta(\Omega_h) : u|_{K_j} \in P^{p_j}(K_j),\ j = 1, \ldots, N^{el}\} = V_h^n$,

where $P^p$ denotes the polynomial space of order at most p. For $d \in \{2, 3\}$, $P^p$ could be the usual polynomial spaces for triangular and tetrahedral meshes, and the tensor product
polynomial spaces for quadrilateral and hexahedral meshes.
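As a small concrete illustration (ours, not from the paper), the least-squares formulation (20) can be exercised in 1-D on a single element: for $\beta = 1$, $\mu = 1$, $f = 0$, $g = 1$ on $\Omega = (0, 1)$, the exact solution is $u(x) = e^{-x}$, and the forms reduce to $a(u, v) = \int_0^1 (u' + u)(v' + v)\, dx + u(0) v(0)$ and $\ell(v) = \int_0^1 f (v' + v)\, dx + g\, v(0)$. The sketch below assembles this with a monomial basis of degree p and Gauss quadrature.

```python
import numpy as np

p = 6                                        # polynomial degree of the trial = test space
t, w = np.polynomial.legendre.leggauss(20)   # Gauss nodes/weights on [-1, 1]
x, w = (t + 1) / 2, w / 2                    # mapped to [0, 1]

# Monomial basis phi_i(x) = x^i and its derivative at the quadrature nodes.
phi = np.stack([x**i for i in range(p + 1)])
dphi = np.stack([i * x**max(i - 1, 0) for i in range(p + 1)])
op = dphi + phi                              # (phi' + phi) evaluated at the nodes

# a(phi_j, phi_i) = int (phi_j' + phi_j)(phi_i' + phi_i) dx + phi_j(0) phi_i(0)
A = (op * w) @ op.T
A[0, 0] += 1.0                               # only phi_0 is nonzero at x = 0
b = np.zeros(p + 1)
b[0] = 1.0                                   # f = 0, g = 1: ell(phi_i) = g * phi_i(0)

c = np.linalg.solve(A, b)
xs = np.linspace(0, 1, 101)
uh = sum(c[i] * xs**i for i in range(p + 1))
err = np.max(np.abs(uh - np.exp(-xs)))
print(err)                                   # small; shrinks rapidly as p grows
```

Because the bilinear form is the inner product of the residuals, the linear system is symmetric positive definite, and the rapid decay of the error with p is consistent with the exponential convergence asserted in Remark 3.5 for elementwise analytic solutions.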
15 14 TAN BUITHANH, LESZEK DEMKOWICZ, AND OMAR GHATTAS At this point, a closer look tells us that our simplified weak formulation (20) is one of the existing least squares DG (LSDG) methods [16] However, it is important to emphasize that, for the DG methods in general, and the other LSDG methods in particular, one usually starts by introducing a numerical flux (typically borrowed from finite volume methods) One then defines bilinear and linear forms (adhoc typically) over finite dimensional polynomial subspaces, and proves the wellposedness of the resulting finite dimensional approximate problem with some prescribed norm Here, starting from the requirement of having unity continuity and inf sup constants, and choosing the weak formulation (9) to apply our theory developed in Section 2, we constructively (and accidentally) derive a LSDG method from a wellposed infinite dimensional setting The distinct feature of our method is that the flux is introduced as a new unknown, and then found by fulfilling stability It should be pointed out that our theory is general in the following sense If it is applied to different weak formulations, one will constructively obtain different numerical methods, again, with trivial wellposedness for the finite dimensional approximation problem, as we shall show in the next sections Since our resulting approximation method coincides to a LSDG approach, the following hp convergence result is immediate from [16] Theorem 34 Suppose that u Kj H sj (K j ), s j > 1 2, j = 1,, N el and the mesh is affine Then, there exists a positive constant C, independent of h j = diam (K j ), p j, and u such that el N u u h 2 U C h 2σj 2 j p 2sj 2 u 2 H s j (K, j) j with p j 1 and 0 σ j min (p j + 1, s j ) Furthermore, if the mesh is quadrilateral or hexahedral and the exact solution u is elementwise analytic, then the following estimation holds el N u u h 2 U C ( ) 2σj 2 hj p j e 2bjpj meas (K j ), 2 where C depends on u, d, and the shaperegularity, b j 
depends on d, and meas(K_j) denotes the length, area, or volume of K_j for d = 1, d = 2, and d = 3, respectively.

Proof. A proof can be found in [16] and the references therein.

Remark 3.5. By Corollary 1, the error estimate is optimal in both h and p. In particular, if the exact solution is elementwise analytic, our approximation method delivers exponential convergence in p.

4. The second well-posed approximation of linear scalar hyperbolic equations

In the first application of our abstract theory to linear scalar hyperbolic equations in Section 3, we chose the weak formulation (9) obtained after integrating (6) by parts twice. In this section, we apply the theory to the weak formulation (7) obtained after integrating (6) by parts once. As will be shown, this yields a different well-posed infinite dimensional setting, and hence a different finite dimensional approximation with guaranteed well-posedness and M_h = γ_h = 1.
4.1. Infinite dimensional setting. Our starting point is the weak formulation (7), obtained by integrating (6) by parts once. It can be written in the more compact form

(21) \sum_{j=1}^{N_{el}} \int_{K_j} u \left[-\nabla\cdot(\beta v) + \mu v\right] dx + \sum_{j=1}^{N_{el}} \int_{\partial K_j} \beta\cdot n\, q\, [v]\, dx = \sum_{j=1}^{N_{el}} \int_{K_j} f v\, dx - \int_{\Gamma} \beta\cdot n\, g v\, dx,

where on the outflow boundary we conventionally define v⁺ = v and on the inflow boundary v⁺ = −v. We define the trial and test spaces as

U = { u = (u, q) : u|_{K_j} ∈ L²(K_j), q ∈ L²_{β·n}(∂K_j) } = L²(Ω_h) × L²_{β·n}(E_h),
V = { v : v|_{K_j} ∈ H¹_β(K_j) } = H¹_β(Ω_h).

Again, guided by Theorem 2.6, we look for natural norms on U and V such that the continuity constant is unity and equality in the continuity condition is achievable. As in Section 3, we apply the Cauchy–Schwarz inequality to obtain

(22) a(u, v) \le \Big[\sum_{j=1}^{N_{el}} \big( \| -\nabla\cdot(\beta v) + \mu v \|^2_{L^2(K_j)} + \| [v] \|^2_{L^2_{\beta\cdot n}(\partial K_j)} \big)\Big]^{1/2} \Big[\sum_{j=1}^{N_{el}} \big( \| u \|^2_{L^2(K_j)} + \| q \|^2_{L^2_{\beta\cdot n}(\partial K_j)} \big)\Big]^{1/2} =: \|v\|_V \, \|u\|_U.

Equality in (22) is achieved if we use the Riesz representations, i.e.,

(23a) u = -\nabla\cdot(\beta v_u) + \mu v_u in K_j,
(23b) q = \operatorname{sgn}(\beta\cdot n)\, [v_u] on ∂K_j.

Once we know that equality is attainable under condition (23), the consistency and well-posedness of (21) with respect to the norms defined in (22) follow similarly to Proposition 1 and Theorem 3.2. We next use (23) to find optimal pairs of trial and corresponding test basis functions. For trial basis functions of the form φ = (0, φ) ∈ U, where φ is a function in L²_{β·n}(E_h), the corresponding test basis functions in V are given by, for j = 1, …, N_el,

(24a) -\nabla\cdot(\beta v_\varphi) + \mu v_\varphi = 0 in K_j,
(24b) [v_\varphi] = \operatorname{sgn}(\beta\cdot n)\, \varphi on ∂K_j.
Similarly, for trial basis functions of the form ϕ = (ϕ, 0) ∈ U, where ϕ ∈ L²(Ω_h), the corresponding test basis functions in V are given by, for j = 1, …, N_el,

(25a) -\nabla\cdot(\beta v_\phi) + \mu v_\phi = \phi in K_j,
(25b) [v_\phi] = 0 on ∂K_j.

Once the test functions are found from (24) or (25), we substitute them into (21) to establish equations for the unknowns u = (u, q). Let us proceed with the generic test basis in (24) first. If φ is nonzero on e ∈ E_h and zero elsewhere on the skeleton, then testing (21) with v_φ yields

(26) \int_e \beta\cdot n\, q\, \varphi\, dx = \int_{\Omega_h} f v_\varphi\, dx - \int_\Gamma \beta\cdot n\, g v_\varphi\, dx.

As can be seen from (26), for each φ ∈ L²_{β·n}(e), e ∈ E_h, the trace q ∈ L²_{β·n}(e) can be solved for locally and independently of u. If instead ϕ|_{K_j} is nonzero on K_j and zero elsewhere, then testing (21) with v_ϕ gives

(27) \int_{K_j} u \phi\, dx = \int_{\Omega_h} f v_\phi\, dx - \int_\Gamma \beta\cdot n\, g v_\phi\, dx,

which shows that the unknown u can also be computed locally, element by element, and independently of q.

4.2. Finite dimensional setting and convergence analysis. Now consider a finite dimensional subspace

U_h^n = span{ u_i, i = 1, …, n } ⊂ U, where u_i = φ_i = (0, φ_i) for i = 1, …, m, and u_i = ϕ_i = (ϕ_i, 0) for i = m+1, …, n,

with φ_i and ϕ_i the local functions previously used to obtain (26) and (27). The corresponding optimal finite dimensional test space is

V_h^n = span{ v_{u_i}, i = 1, …, n },

with test basis functions v_{u_i} computed from (24) or (25). The consistency of the weak formulations (26) and (27) on the finite dimensional subspaces U_h^n and V_h^n is inherited automatically from the infinite dimensional setting. On the other hand, using Lemma 2.8 we immediately obtain the well-posedness of the finite dimensional approximate problem, in analogy with Theorem 3.3.

Theorem 4.1. Let the bilinear form a(·, ·) and the linear form ℓ(·) be defined as in (26) and (27). The problem

  Seek u_h^n = (u_h^n, q_h^n) ∈ U_h^n such that a(u_h^n, v_h^n) = ℓ(v_h^n), ∀ v_h^n ∈ V_h^n,

is well-posed with M_h = γ_h = 1.

It is important to point out that
the computation of each test basis function requires the solution of an adjoint hyperbolic equation (24) or (25). For each adjoint equation, we march the solution element by element from the outflow boundary to the inflow boundary of the domain Ω_h. Once the finite dimensional space V_h^n is constructed, the weak formulation (21) decouples into the weak formulations
(26) and (27). This allows us to compute the trace q locally, face by face, and the approximate solution u locally, element by element, independently of one another.

The solution procedure can be divided into offline and online stages as follows. Since the computation of the test basis functions does not depend on the boundary condition g or the forcing f, it is performed once, in the offline stage. The test basis functions can then be reused for any g and f while preserving the best approximation property. Therefore, in the online stage, our finite dimensional approximation method delivers a fast and best (in the U-norm) solution u for any admissible g and/or f. It is fast because we only need to solve for u locally, element by element, by inverting the elemental mass matrices, as shown in (27). It is best in the U-norm by Corollary 1. We should point out that the forcing f is typically projected onto the subspace span{ϕ_i, i = m+1, …, n}, and the boundary condition g onto span{φ_i|_Γ, i = 1, …, m}. Thus, the right side of (27) can be evaluated once in the offline stage for the basis functions of these subspaces. Clearly, if f = 0, the online stage is extremely fast and its cost negligible. This is particularly useful for real-time source or boundary-condition inverse problems solved by optimization methods, in which the forward equation (6) must be solved many times with different g and/or f.

The above offline–online procedure can also be viewed as a new model reduction (also known as reduced-order modeling) approach. Typical projection-based model reduction approaches [17, 18] begin with a finite, but large, n-dimensional approximation problem, e.g., (4). When n is very large (i.e., for large-scale problems), the discrete problem (4) is intractable for real-time simulation, inverse problems, optimal control, uncertainty quantification, etc. [17, 19]. The idea behind model reduction is to seek
a subspace U_h^{n_r} = span{ ξ_i, i = 1, …, n_r } ⊂ U_h^n, with n_r ≪ n, where the basis functions ξ_i, i = 1, …, n_r, typically have global support. The discrete problem (4) is then solved for the pair {U_h^{n_r}, V_h^{n_r}} instead of {U_h^n, V_h^n}, which is much cheaper since there are only n_r ≪ n unknowns. Clearly, one must address the well-posedness of the reduced problem and its accuracy. Compared to existing model reduction techniques, our reduction technique is more expensive in the offline stage. In the online stage, the cost of constructing an approximate solution is more or less similar, namely O(n). However, the distinct feature of our method is that it is a direct model reduction technique; that is, it does not require the construction of the reduced spaces U_h^{n_r} and V_h^{n_r}. Moreover, the well-posedness of our direct model reduction method is trivial, with M_h = γ_h = 1 by Lemma 2.8, and the reduced solution u_h^n is the best in the space U_h^n by Corollary 1. In addition, we always have a guaranteed optimal hp-convergence result for the reduced solution u_h^n, as we shall show momentarily.

Another important feature of our approximation is worth pointing out: the test basis functions are computed element by element from (24) or (25), while the trial basis is completely discontinuous, implying a discontinuous solution u in general. Thus, our approximation method can be considered a new discontinuous finite element method, sitting between continuous finite element methods [20] (in which both trial and test spaces are continuous) and the DG methods [21] or the DPG methods [1, 2, 3] (in which both trial and test spaces are discontinuous). It also has the flavor of the hybridized discontinuous Galerkin method [22], owing to the presence of the hybrid variable q. For ease of writing, let us also call our method a DPG method.
We next choose finite dimensional polynomial spaces and discuss the convergence of the resulting discontinuous finite element method. To begin, we specify

U_h = { u = (u, q) ∈ U : u|_{K_j} ∈ P^{p_j}(K_j), j = 1, …, N_el; q|_e ∈ P^{p_e}(e), e = 1, …, N_ed }.

We first recall the following useful result from polynomial approximation theory [12, 13, 14, 15]:

(28) \|u - u_h^n\|^2_{L^2(K_j)} \le C\, \frac{h_j^{2\sigma_j}}{p_j^{2 s_j}}\, \|u\|^2_{H^{s_j}(K_j)}, \quad s_j \ge 0,

where σ_j = min{p_j + 1, s_j}. Then we have the following convergence result, whose proof is almost trivial.

Theorem 4.2. Suppose that u|_{K_j} ∈ H^{s_j}(K_j), s_j > 1/2, j = 1, …, N_el, and q|_e ∈ H^{s_e}(e), e = 1, …, N_ed. Let the mesh be affine. Then there exists a positive constant C, independent of h_j = diam(K_j), h_e = diam(e), p_j, q, and u, such that

\|u - u_h^n\|^2_U \le C \sum_{j=1}^{N_{el}} \left[ \frac{h_j^{2\sigma_j}}{p_j^{2 s_j}}\, \|u\|^2_{H^{s_j}(K_j)} + \sum_{e \in \partial K_j,\, e \in E_h} \frac{h_e^{2\sigma_e}}{p_e^{2 s_e}}\, \|q\|^2_{H^{s_e}(e)} \right],

with p_j, p_e ≥ 1, 0 ≤ σ_j ≤ min(p_j + 1, s_j), and 0 ≤ σ_e ≤ min(p_e + 1, s_e).

Proof. The proof is immediate from the definition of the norm in (22), the approximation estimate (28), and the best approximation property in Corollary 1.

5. Numerical results

In this section we present numerical results to support our findings. Since our first method coincides with the least squares DG method described by Houston et al. [16], we refer the reader to that paper for extensive numerical results; we therefore show numerical results only for the DPG method of Section 4. Since we are interested only in u_h^n, and u_h^n is independent of q_h^n, we omit the computation of q_h^n.

We consider a one dimensional example in which the optimal test functions can be computed analytically. In particular, we choose Ω = (0, 1), β = 1, µ = 0, g = arctan(−100), and the forcing function f(x) = 100/[1 + 100²(x − 1)²]. Under these assumptions, the exact solution is u(x) = arctan[100(x − 1)]. For one dimensional problems, our mesh is simply given by 0 = x_0 < x_1 < ⋯ < x_{N_el} = 1, with K_j = (x_{j−1}, x_j), j = 1, …, N_el. Let
us denote by F_{K_j}(r), r ∈ I = [−1, 1], a diffeomorphic map from the master element I to K_j. From (25), the optimal test function is found by integrating from the outflow boundary to the inflow boundary, i.e.,

v_\phi(x) = \int_x^{x_j} \phi(s)\, ds, \quad j = 1, …, N_{el}, \qquad [v_\phi] = 0 \text{ at } x_{j-1} \text{ and } x_j.
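The outflow-to-inflow integration above is easy to exercise numerically. The sketch below is an illustration not found in the paper: the function name `march_test_function`, the eight-element mesh, and the choice of trial function ϕ are all hypothetical. It marches the 1D adjoint −v′ = ϕ (for β = 1, µ = 0) element by element from x = 1 toward x = 0 and shows that, for ϕ supported on a single element, the resulting test function vanishes downstream of that element and is constant upstream of it.

```python
import numpy as np

def march_test_function(nodes, phi, nq=16):
    """March the 1D adjoint -v' = phi (beta = 1, mu = 0) element by element
    from the outflow boundary x = 1 toward the inflow boundary x = 0.
    Returns the test function v at the mesh nodes."""
    r, w = np.polynomial.legendre.leggauss(nq)    # Gauss rule on [-1, 1]
    v = np.zeros(len(nodes))                      # v = 0 at the outflow node
    for i in range(len(nodes) - 2, -1, -1):       # right-to-left march
        a, b = nodes[i], nodes[i + 1]
        J = 0.5 * (b - a)                         # Jacobian of the affine map
        x = 0.5 * (a + b) + J * r
        v[i] = v[i + 1] + J * np.dot(w, phi(x))   # v(a) = v(b) + int_a^b phi
    return v

# Hypothetical setup: 8 uniform elements; trial function supported on (0.5, 0.625).
nodes = np.linspace(0.0, 1.0, 9)
phi = lambda x: np.where((x >= nodes[4]) & (x <= nodes[5]), np.sin(np.pi * x), 0.0)
v = march_test_function(nodes, phi)
print(v)   # zero downstream of the support, constant upstream
```

The printed nodal values vanish at and downstream of the supporting element and are constant upstream of it, mirroring the structure of the exact test functions derived in this section.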
If supp ϕ ⊂ K_j and ϕ ∘ F_{K_j}(r) = P_m(r) is the mth-order Legendre polynomial, then we have

(29) v_{P_m}(r) = J_{K_j}\, \frac{1 - r^2}{m(m + 1)}\, \frac{dP_m}{dr}, \quad m \neq 0,

where J_{K_j} is the Jacobian of the map F_{K_j}. From (25) and (29) we conclude that v_{P_m} has local support, since v_{P_m}(−1) = v_{P_m}(1) = 0; in particular, supp v_{P_m} ⊂ K_j for m ≠ 0. For m = 0, we obtain

v_{P_0} = 0 in K_i for i > j; v_{P_0} = J_{K_j}(1 − r) in K_j; v_{P_0} = 2J_{K_j} in K_i for i < j.

We have shown that most of the optimal test functions have compact local support, which makes the evaluation of the right side of (27) negligible in cost. Moreover, the mass matrix on the left side of (27) is diagonal, owing to the orthogonality of the Legendre polynomials. This makes our method extremely efficient in the online stage, since no matrices need to be inverted to solve for u_h^n, no matter how fine the mesh is.

For the numerical results, we choose a uniform order p for all elements and compare our DPG method with a nodal DG method [23]. In Figure 1, we plot the DPG and DG solutions for various polynomial orders on a uniform mesh with four elements. In the region where the solution gradient is small, the two methods are comparable. In the steep-gradient region, however, the DPG solutions are less oscillatory than the DG ones. As can also be seen, the DPG solutions exhibit smaller undershoots and overshoots. This is because the DPG solutions minimize the error in the energy norm (22), one component of which is the L² norm of the error; large-amplitude oscillations with significant L² norm are therefore not permitted in the DPG solutions. The DG method has no such property.

Figure 2 plots the L² error under h-refinement on a log-log scale. We approximate the convergence rate by least squares fitting the convergence curves with straight lines. As can be observed, not only are the DPG solutions more accurate by an order of magnitude, but they also exhibit better convergence rates.
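As a numerical cross-check, not part of the paper, the sketch below first verifies formula (29) against a direct antiderivative of P_m, and then performs the local, matrix-free element solves of (27) for the model problem of this section (β = 1, µ = 0, exact solution u(x) = arctan[100(x − 1)], hence f = u′ = 100/[1 + 100²(x − 1)²]). The helper names (`dpg_solve`, `P`) and the uniform-mesh choice are hypothetical; the m = 0 and m ≥ 1 right-hand sides come from substituting v_{P_0} and (29) into (27). Because (27) is consistent, each local solve should reproduce the elementwise Legendre (L² projection) coefficients of the exact solution.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Model problem: beta = 1, mu = 0 on Omega = (0, 1), u' = f, u(0) = g,
# with exact solution u(x) = arctan(100 (x - 1)).
g = np.arctan(-100.0)
u_exact = lambda x: np.arctan(100.0 * (x - 1.0))
f = lambda x: 100.0 / (1.0 + 100.0**2 * (x - 1.0)**2)   # f = u'

r, w = L.leggauss(40)                     # Gauss quadrature on [-1, 1]

def P(m, x):
    """Legendre polynomial P_m evaluated at x."""
    return L.legval(x, np.eye(m + 1)[m])

# Verify (29): J (1 - r^2) P_m'(r) / (m (m + 1)) equals J * int_r^1 P_m(s) ds.
J = 0.05                                  # a hypothetical element Jacobian
for m in range(1, 7):
    dPm = L.legval(r, L.legder(np.eye(m + 1)[m]))
    v29 = J * (1.0 - r**2) * dPm / (m * (m + 1))
    anti = L.legint(np.eye(m + 1)[m])     # an antiderivative of P_m
    v_direct = J * (L.legval(1.0, anti) - L.legval(r, anti))
    assert np.allclose(v29, v_direct)     # (29) holds; note v29 = 0 at r = +-1

def dpg_solve(n_el, p):
    """Local element-by-element solve of (27) on a uniform mesh.
    Returns the mesh nodes and, per element, the Legendre coefficients of u_h."""
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    coeffs, F_cum = [], 0.0               # F_cum = int_0^{x_{j-1}} f dx
    for j in range(n_el):
        a, b = nodes[j], nodes[j + 1]
        Jj = 0.5 * (b - a)                # element Jacobian
        x = 0.5 * (a + b) + Jj * r
        fx = f(x)
        c = np.zeros(p + 1)
        # m = 0: test function is J(1 - r) on K_j, 2J upstream, 0 downstream,
        # so (27) gives c_0 = g + int_0^{x_{j-1}} f + (1/2) int_{K_j} f (1 - r).
        c[0] = g + F_cum + 0.5 * Jj * np.dot(w, fx * (1.0 - r))
        # m >= 1: locally supported test function (29); diagonal mass matrix.
        for m in range(1, p + 1):
            dPm = L.legval(r, L.legder(np.eye(m + 1)[m]))
            v = Jj * (1.0 - r**2) * dPm / (m * (m + 1))
            c[m] = 0.5 * (2 * m + 1) * np.dot(w, fx * v)
        coeffs.append(c)
        F_cum += Jj * np.dot(w, fx)
    return nodes, coeffs
```

No matrix, global or local, is inverted: each coefficient is a single quadrature sum, reflecting the diagonal elemental mass matrices noted above.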
Figure 3 shows the L² error under p-refinement on a linear-log scale. Both DPG and DG exhibit exponential convergence, which is consistent with Theorem 3.4. Again, for the same polynomial order and number of unknowns, the DPG method is more accurate than the DG one. We emphasize again that our DPG method requires no equation solves in the online stage if orthogonal polynomials or mass lumping (e.g., collocation at Legendre–Gauss–Lobatto nodes) is used.

Work is in progress to apply our DPG method efficiently to two and three dimensional problems [24]. In particular, we would like most of the optimal test basis functions to have compact local support, as in the one dimensional example above; this would make our method more attractive owing to a less expensive offline task. We should mention that streamline meshes [25] may be an effective option for reducing the cost and complexity of the offline stage, since one then only needs to solve for the optimal test basis functions along predefined streamtubes. Streamline meshes also imply that the support of any optimal test basis function is confined to a streamtube, allowing
us to evaluate the right side of (27) only along streamtubes, hence making the online stage even faster.

Figure 1. The DPG and DG solutions for various orders p = 1, 2, 3, 4 with h = 0.25; the mesh has four elements. (Panels: (a) p = 1, (b) p = 2, (c) p = 3, (d) p = 4.)

6. Conclusions

We have developed a single-framework theory that adapts to the problem at hand while automatically generating accurate finite element methods with trivial, guaranteed stability. The theory is devised for general variational problems that can be written in terms of bilinear and linear forms; we therefore expect it to apply to a wide range of partial differential equations. Nevertheless, owing to space limitations, we present only applications of the theory to linear hyperbolic equations, and we will provide further applications to other types of partial differential equations in the near future. We have shown that our theory constructively leads to two different hp finite element methods: an existing least squares discontinuous Galerkin method and a new discontinuous finite element method. We have analyzed the consistency, stability, and hp-convergence of these methods in detail. These analytical results are supported by numerical experiments, which show that we indeed obtain well-posed hp finite element methods with optimal convergence rates in the natural energy norms. Moreover, the numerical results show that our new discontinuous finite element method is more accurate than the nodal discontinuous Galerkin methods.
Figure 2. h-convergence: 2(a) log-log plot of the error of our DPG method in the L² norm, i.e., ‖u − u_h^n‖_{L²(Ω_h)}; 2(b) log-log plot of the error of the nodal DG method in the L² norm. The mesh is refined in h for polynomial orders N = 1 to N = 5, and the convergence is shown for four mesh sizes h = {0.43, 0.22, 0.11, 0.054}.

References

1. L. Demkowicz, J. Gopalakrishnan, A class of discontinuous Petrov–Galerkin methods. Part I: The transport equation, Computer Methods in Applied Mechanics and Engineering 199 (23–24) (2010).
2. L. Demkowicz, J. Gopalakrishnan, A class of discontinuous Petrov–Galerkin methods. Part II: Optimal test functions, Numerical Methods for Partial Differential Equations 27 (1) (2011).
Figure 3. p-convergence: 3(a) log-log plot of the error of our DPG method in the L² norm, i.e., ‖u − u_h^n‖_{L²(Ω_h)}; 3(b) log-log plot of the error of the nodal DG method in the L² norm. The mesh is refined in p, and the convergence is shown for four mesh sizes h ∈ {0.016, 0.0078, 0.0039, 0.002}.

3. L. Demkowicz, J. Gopalakrishnan, A class of discontinuous Petrov–Galerkin methods. Part IV: The optimal test norm and time-harmonic wave propagation in 1D, Journal of Computational Physics 230 (7) (2011).
4. I. Babuška, A. Aziz, Survey lectures on the mathematical foundations of the finite element method, in: A. Aziz (Ed.), The Mathematical Foundations of the Finite Element Method with Applications to Partial Differential Equations, Academic Press, New York, 1972.
5. J. T. Oden, L. F. Demkowicz, Applied Functional Analysis, CRC Press.
6. A. Ern, J.-L. Guermond, Theory and Practice of Finite Elements, Vol. 159 of Applied Mathematical Sciences, Springer-Verlag, 2004.
160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results
More informationName: Section Registered In:
Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are
More informationHøgskolen i Narvik Sivilingeniørutdanningen STE6237 ELEMENTMETODER. Oppgaver
Høgskolen i Narvik Sivilingeniørutdanningen STE637 ELEMENTMETODER Oppgaver Klasse: 4.ID, 4.IT Ekstern Professor: Gregory A. Chechkin email: chechkin@mech.math.msu.su Narvik 6 PART I Task. Consider twopoint
More informationChapter 5. Banach Spaces
9 Chapter 5 Banach Spaces Many linear equations may be formulated in terms of a suitable linear operator acting on a Banach space. In this chapter, we study Banach spaces and linear operators acting on
More informationFixed Point Theorems
Fixed Point Theorems Definition: Let X be a set and let T : X X be a function that maps X into itself. (Such a function is often called an operator, a transformation, or a transform on X, and the notation
More informationOrthogonal Diagonalization of Symmetric Matrices
MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding
More informationMA651 Topology. Lecture 6. Separation Axioms.
MA651 Topology. Lecture 6. Separation Axioms. This text is based on the following books: Fundamental concepts of topology by Peter O Neil Elements of Mathematics: General Topology by Nicolas Bourbaki Counterexamples
More informationALMOST COMMON PRIORS 1. INTRODUCTION
ALMOST COMMON PRIORS ZIV HELLMAN ABSTRACT. What happens when priors are not common? We introduce a measure for how far a type space is from having a common prior, which we term prior distance. If a type
More informationPOLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS
POLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS N. ROBIDOUX Abstract. We show that, given a histogram with n bins possibly noncontiguous or consisting
More informationUNIVERSITETET I OSLO
NIVERSITETET I OSLO Det matematisknaturvitenskapelige fakultet Examination in: Trial exam Partial differential equations and Sobolev spaces I. Day of examination: November 18. 2009. Examination hours:
More informationElasticity Theory Basics
G22.3033002: Topics in Computer Graphics: Lecture #7 Geometric Modeling New York University Elasticity Theory Basics Lecture #7: 20 October 2003 Lecturer: Denis Zorin Scribe: Adrian Secord, Yotam Gingold
More informationSolutions to Math 51 First Exam January 29, 2015
Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not
More informationCOMBINATORIAL PROPERTIES OF THE HIGMANSIMS GRAPH. 1. Introduction
COMBINATORIAL PROPERTIES OF THE HIGMANSIMS GRAPH ZACHARY ABEL 1. Introduction In this survey we discuss properties of the HigmanSims graph, which has 100 vertices, 1100 edges, and is 22 regular. In fact
More informationRecall that two vectors in are perpendicular or orthogonal provided that their dot
Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal
More informationTensor product of vector spaces
Tensor product of vector spaces Construction Let V,W be vector spaces over K = R or C. Let F denote the vector space freely generated by the set V W and let N F denote the subspace spanned by the elements
More informationLinear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
More informationGROUPS ACTING ON A SET
GROUPS ACTING ON A SET MATH 435 SPRING 2012 NOTES FROM FEBRUARY 27TH, 2012 1. Left group actions Definition 1.1. Suppose that G is a group and S is a set. A left (group) action of G on S is a rule for
More informationLecture 3: Finding integer solutions to systems of linear equations
Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture
More information1 Norms and Vector Spaces
008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)
More informationAu = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.
Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry
More informationI. GROUPS: BASIC DEFINITIONS AND EXAMPLES
I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called
More informationNotes on metric spaces
Notes on metric spaces 1 Introduction The purpose of these notes is to quickly review some of the basic concepts from Real Analysis, Metric Spaces and some related results that will be used in this course.
More informationAn Overview of the Finite Element Analysis
CHAPTER 1 An Overview of the Finite Element Analysis 1.1 Introduction Finite element analysis (FEA) involves solution of engineering problems using computers. Engineering structures that have complex geometry
More informationHow High a Degree is High Enough for High Order Finite Elements?
This space is reserved for the Procedia header, do not use it How High a Degree is High Enough for High Order Finite Elements? William F. National Institute of Standards and Technology, Gaithersburg, Maryland,
More informationLinear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus ndimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
More information3. Reaction Diffusion Equations Consider the following ODE model for population growth
3. Reaction Diffusion Equations Consider the following ODE model for population growth u t a u t u t, u 0 u 0 where u t denotes the population size at time t, and a u plays the role of the population dependent
More informationMathematical Induction
Mathematical Induction (Handout March 8, 01) The Principle of Mathematical Induction provides a means to prove infinitely many statements all at once The principle is logical rather than strictly mathematical,
More informationSolutions to Homework 10
Solutions to Homework 1 Section 7., exercise # 1 (b,d): (b) Compute the value of R f dv, where f(x, y) = y/x and R = [1, 3] [, 4]. Solution: Since f is continuous over R, f is integrable over R. Let x
More informationThe Basics of FEA Procedure
CHAPTER 2 The Basics of FEA Procedure 2.1 Introduction This chapter discusses the spring element, especially for the purpose of introducing various concepts involved in use of the FEA technique. A spring
More informationIdeal Class Group and Units
Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals
More informationThree observations regarding Schatten p classes
Three observations regarding Schatten p classes Gideon Schechtman Abstract The paper contains three results, the common feature of which is that they deal with the Schatten p class. The first is a presentation
More informationFurther Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1
Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1 J. Zhang Institute of Applied Mathematics, Chongqing University of Posts and Telecommunications, Chongqing
More information1 Local Brouwer degree
1 Local Brouwer degree Let D R n be an open set and f : S R n be continuous, D S and c R n. Suppose that the set f 1 (c) D is compact. (1) Then the local Brouwer degree of f at c in the set D is defined.
More informationDRAFT. Further mathematics. GCE AS and A level subject content
Further mathematics GCE AS and A level subject content July 2014 s Introduction Purpose Aims and objectives Subject content Structure Background knowledge Overarching themes Use of technology Detailed
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology Handout 6 18.433: Combinatorial Optimization February 20th, 2009 Michel X. Goemans 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the
More informationOPTIMAL CONTROL OF A COMMERCIAL LOAN REPAYMENT PLAN. E.V. Grigorieva. E.N. Khailov
DISCRETE AND CONTINUOUS Website: http://aimsciences.org DYNAMICAL SYSTEMS Supplement Volume 2005 pp. 345 354 OPTIMAL CONTROL OF A COMMERCIAL LOAN REPAYMENT PLAN E.V. Grigorieva Department of Mathematics
More informationFinite Element Formulation for Plates  Handout 3 
Finite Element Formulation for Plates  Handout 3  Dr Fehmi Cirak (fc286@) Completed Version Definitions A plate is a three dimensional solid body with one of the plate dimensions much smaller than the
More informationNo: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics
No: 10 04 Bilkent University Monotonic Extension Farhad Husseinov Discussion Papers Department of Economics The Discussion Papers of the Department of Economics are intended to make the initial results
More informationMATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.
MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar
More information