# MATH 590: Meshfree Methods


## Chapter 7: Conditionally Positive Definite Functions

Greg Fasshauer, Department of Applied Mathematics, Illinois Institute of Technology. Fall 2010.

**Outline**

1. Conditionally Positive Definite Functions Defined
2. CPD Functions and Generalized Fourier Transforms

## Conditionally Positive Definite Functions Defined

In this chapter we generalize positive definite functions to conditionally positive definite and strictly conditionally positive definite functions of order $m$. These functions provide a natural generalization of RBF interpolation with polynomial reproduction. Examples of strictly conditionally positive definite (radial) functions are given in the next chapter.

**Definition.** A complex-valued continuous function $\Phi$ is called *conditionally positive definite of order $m$* on $\mathbb{R}^s$ if

$$\sum_{j=1}^{N} \sum_{k=1}^{N} c_j \overline{c_k}\, \Phi(x_j - x_k) \ge 0 \tag{1}$$

for any $N$ pairwise distinct points $x_1, \ldots, x_N \in \mathbb{R}^s$ and $c = [c_1, \ldots, c_N]^T \in \mathbb{C}^N$ satisfying

$$\sum_{j=1}^{N} c_j\, p(x_j) = 0$$

for any complex-valued polynomial $p$ of degree at most $m - 1$. The function $\Phi$ is called *strictly conditionally positive definite of order $m$* on $\mathbb{R}^s$ if the quadratic form (1) is zero only for $c = 0$.
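A small numerical sanity check of this definition, assuming the standard example $\Phi(x) = -\|x\|$, which is strictly conditionally positive definite of order 1 (examples of this kind are treated in the next chapter). For $m = 1$ the moment condition reduces to $\sum_j c_j = 0$:

```python
import numpy as np

# Sanity check of the order-1 condition for Phi(x) = -||x||, an assumed
# standard example of a strictly CPD function of order 1.  For m = 1 the
# side condition says c must annihilate constant polynomials: sum_j c_j = 0.
rng = np.random.default_rng(0)
s, N = 2, 30
X = rng.standard_normal((N, s))   # N pairwise distinct points in R^s (a.s.)
c = rng.standard_normal(N)
c -= c.mean()                     # enforce sum_j c_j = 0

# Quadratic form  sum_{j,k} c_j c_k Phi(x_j - x_k)  with Phi(x) = -||x||.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Q = c @ (-D) @ c
print(Q >= 0)   # nonnegative; in fact strictly positive since c != 0
```

For real coefficients the conjugate in (1) is immaterial, which is why the real quadratic form suffices here.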


An immediate observation is the following:

**Lemma.** A function that is (strictly) conditionally positive definite of order $m$ on $\mathbb{R}^s$ is also (strictly) conditionally positive definite of any higher order. In particular, a (strictly) positive definite function is always (strictly) conditionally positive definite of any order.

*Proof.* The first statement follows immediately from the definition. The second statement is true since (by convention) the case $m = 0$ yields the class of (strictly) positive definite functions, i.e., (strictly) conditionally positive definite functions of order zero are (strictly) positive definite.

As for positive definite functions we also have a real-valued characterization (see [Wendland (2005a)] for more details):

**Theorem.** A real-valued continuous even function $\Phi$ is conditionally positive definite of order $m$ on $\mathbb{R}^s$ if and only if

$$\sum_{j=1}^{N} \sum_{k=1}^{N} c_j c_k\, \Phi(x_j - x_k) \ge 0 \tag{2}$$

for any $N$ pairwise distinct points $x_1, \ldots, x_N \in \mathbb{R}^s$ and $c = [c_1, \ldots, c_N]^T \in \mathbb{R}^N$ satisfying

$$\sum_{j=1}^{N} c_j\, p(x_j) = 0$$

for any real-valued polynomial $p$ of degree at most $m - 1$. The function $\Phi$ is strictly conditionally positive definite of order $m$ on $\mathbb{R}^s$ if the quadratic form (2) is zero only for $c = 0$.


**Remark.** If the function $\Phi$ is strictly conditionally positive definite of order $m$, then the matrix $A$ with entries $A_{jk} = \Phi(x_j - x_k)$ can be interpreted as being positive definite on the space of vectors $c$ such that

$$\sum_{j=1}^{N} c_j\, p(x_j) = 0, \qquad p \in \Pi_{m-1}^s.$$

In this sense $A$ is positive definite on the space of vectors $c$ perpendicular to $s$-variate polynomials of degree at most $m - 1$.
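This remark can be checked directly: restrict $A$ to an orthonormal basis of the subspace perpendicular to the polynomials and verify positive definiteness. A hedged sketch, again assuming $\Phi(x) = -\|x\|$ with $m = 1$, where the polynomial space $\Pi_0^s$ is spanned by the constant $1$:

```python
import numpy as np

# For Phi(x) = -||x|| (assumed strictly CPD of order 1), the matrix
# A_jk = Phi(x_j - x_k) is positive definite on { c : P^T c = 0 },
# where the single column of P samples the constant polynomial 1.
rng = np.random.default_rng(1)
N = 25
X = rng.standard_normal((N, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
A = -D
P = np.ones((N, 1))

# Columns of Z form an orthonormal basis of the null space of P^T.
U, _, _ = np.linalg.svd(P, full_matrices=True)
Z = U[:, 1:]

# A restricted to that subspace is symmetric positive definite.
lam_min = np.linalg.eigvalsh(Z.T @ A @ Z).min()
print(lam_min > 0)
```

Note that $A$ itself is indefinite here; only its restriction to the "perpendicular" subspace is positive definite.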


We can now generalize the theorem from the previous chapter for constant precision interpolation to the case of general polynomial reproduction:

**Theorem.** If the real-valued even function $\Phi$ is strictly conditionally positive definite of order $m$ on $\mathbb{R}^s$ and the points $x_1, \ldots, x_N$ form an $(m-1)$-unisolvent set, then the system of linear equations

$$\begin{bmatrix} A & P \\ P^T & O \end{bmatrix} \begin{bmatrix} c \\ d \end{bmatrix} = \begin{bmatrix} y \\ 0 \end{bmatrix} \tag{3}$$

is uniquely solvable.


*Proof.* The proof is almost identical to the proof of the earlier theorem for constant reproduction. Assume $[c, d]^T$ is a solution of the homogeneous linear system, i.e., with $y = 0$. We show that $[c, d]^T = 0$ is the only possible solution.

Multiplication of the top block of (3) by $c^T$ yields

$$c^T A c + c^T P d = 0.$$

From the bottom block of the system we know $P^T c = 0$. This implies $c^T P = 0^T$, and therefore

$$c^T A c = 0. \tag{4}$$


Since the function $\Phi$ is strictly conditionally positive definite of order $m$ by assumption, we know that the above quadratic form of $A$ (with coefficients such that $P^T c = 0$) is zero only for $c = 0$. Therefore (4) tells us that $c = 0$.

The unisolvency of the data sites, i.e., the linear independence of the columns of $P$ (cf. one of our earlier remarks), together with $c = 0$, guarantees $d = 0$ from the top block $Ac + Pd = 0$ of the homogeneous version of (3). $\square$

## CPD Functions and Generalized Fourier Transforms


As before, integral characterizations help us identify functions that are strictly conditionally positive definite of order $m$ on $\mathbb{R}^s$. An integral characterization of conditionally positive definite functions of order $m$, i.e., a generalization of Bochner's theorem, can be found in the paper [Sun (1993b)]. However, since the subject matter is rather complicated, and since it does not really help us solve the scattered data interpolation problem, we do not mention any details here.


### The Schwartz Space and the Generalized Fourier Transform

In order to formulate the Fourier transform characterization of strictly conditionally positive definite functions of order $m$ on $\mathbb{R}^s$ we require some advanced tools from analysis (see Appendix B). First we define the Schwartz space of rapidly decreasing test functions

$$\mathcal{S} = \left\{ \gamma \in C^\infty(\mathbb{R}^s) : \lim_{\|x\| \to \infty} x^\alpha (D^\beta \gamma)(x) = 0, \ \alpha, \beta \in \mathbb{N}_0^s \right\},$$

where we use the multi-index notation

$$D^\beta = \frac{\partial^{|\beta|}}{\partial x_1^{\beta_1} \cdots \partial x_s^{\beta_s}}, \qquad |\beta| = \sum_{i=1}^{s} \beta_i.$$


Some properties of the Schwartz space:

- $\mathcal{S}$ consists of all those functions $\gamma \in C^\infty(\mathbb{R}^s)$ which, together with all their derivatives, decay faster than any power of $1/\|x\|$.
- $\mathcal{S}$ contains the space $C_0^\infty(\mathbb{R}^s)$, the space of all infinitely differentiable functions on $\mathbb{R}^s$ with compact support.
- $C_0^\infty(\mathbb{R}^s)$ is a true subspace of $\mathcal{S}$ since, e.g., the function $\gamma(x) = e^{-\|x\|^2}$ belongs to $\mathcal{S}$ but not to $C_0^\infty(\mathbb{R}^s)$.
- $\gamma \in \mathcal{S}$ has a classical Fourier transform $\hat{\gamma}$ which is also in $\mathcal{S}$.
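The decay property can be made concrete in one variable for the Gaussian $\gamma(x) = e^{-x^2}$. A small sketch, with the third derivative computed by hand ($D^3\gamma(x) = (12x - 8x^3)e^{-x^2}$): even after multiplying by a high power of $x$, the result still decays rapidly.

```python
import numpy as np

# Schwartz-class decay for gamma(x) = exp(-x^2) in one variable:
# the hand-computed third derivative is (12x - 8x^3) exp(-x^2),
# and x^5 * D^3 gamma still tends to 0 as x grows.
def d3_gamma(x):
    return (12 * x - 8 * x**3) * np.exp(-x**2)

vals = np.array([abs(x**5 * d3_gamma(x)) for x in (2.0, 4.0, 8.0)])
print(vals)   # rapidly decreasing toward 0
```

The same computation with any monomial weight $x^\alpha$ and any derivative order $\beta$ behaves the same way, which is exactly membership in $\mathcal{S}$.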


Of particular importance are the following subspaces $\mathcal{S}_m$ of $\mathcal{S}$:

$$\mathcal{S}_m = \{ \gamma \in \mathcal{S} : \gamma(x) = O(\|x\|^m) \text{ for } \|x\| \to 0 \}, \qquad m \in \mathbb{N}_0.$$

Furthermore, the set $V$ of slowly increasing functions is given by

$$V = \{ f \in C(\mathbb{R}^s) : |f(x)| \le |p(x)| \text{ for some polynomial } p \in \Pi^s \}.$$


The generalized Fourier transform is now given by the following definition.

**Definition.** Let $f \in V$ be complex-valued. A continuous function $\hat{f} : \mathbb{R}^s \setminus \{0\} \to \mathbb{C}$ is called the *generalized Fourier transform* of $f$ if there exists an integer $m \in \mathbb{N}_0$ such that

$$\int_{\mathbb{R}^s} f(x) \hat{\gamma}(x)\, dx = \int_{\mathbb{R}^s} \hat{f}(x) \gamma(x)\, dx$$

is satisfied for all $\gamma \in \mathcal{S}_{2m}$. The smallest such integer $m$ is called the *order* of $\hat{f}$.

**Remark.** Various definitions of the generalized Fourier transform exist in the literature (see, e.g., [Gel'fand and Vilenkin (1964)]).
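For $f \in L_1$ the defining identity is just the classical multiplication formula for Fourier transforms. A quick numerical check with Gaussians in one variable, assuming the convention $\hat{f}(\omega) = \int f(x)\, e^{-i\omega x}\, dx$, under which $e^{-x^2}$ transforms to $\sqrt{\pi}\, e^{-\omega^2/4}$ and $e^{-x^2/2}$ to $\sqrt{2\pi}\, e^{-\omega^2/2}$:

```python
import numpy as np

# Defining identity of the (generalized) Fourier transform, checked in the
# classical L^1 case:  int f * gamma_hat  ==  int f_hat * gamma.
# Both closed-form transforms below assume f_hat(w) = int f(x) e^{-iwx} dx.
x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]

f         = np.exp(-x**2)                        # f(x)  = exp(-x^2)
f_hat     = np.sqrt(np.pi) * np.exp(-x**2 / 4)   # its classical transform
gamma     = np.exp(-x**2 / 2)                    # gamma(x) = exp(-x^2/2)
gamma_hat = np.sqrt(2 * np.pi) * np.exp(-x**2 / 2)

lhs = np.sum(f * gamma_hat) * dx
rhs = np.sum(f_hat * gamma) * dx
print(np.isclose(lhs, rhs))   # both equal 2*pi/sqrt(3) analytically
```

The Gaussian here is not in $\mathcal{S}_{2m}$ for $m \ge 1$ (it does not vanish at the origin); restricting the test functions to $\mathcal{S}_{2m}$ is exactly what lets $\hat{f}$ have a singularity of order $m$ at $0$ while the integrals stay finite.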


- Since one can show that the generalized Fourier transform of an $s$-variate polynomial of degree at most $2m$ is zero, the inverse generalized Fourier transform is only unique up to addition of such a polynomial.
- The order of the generalized Fourier transform is nothing but the order of its singularity at the origin.
- For functions in $L_1(\mathbb{R}^s)$ the generalized Fourier transform coincides with the classical Fourier transform.
- For functions in $L_2(\mathbb{R}^s)$ it coincides with the distributional Fourier transform.


### Fourier Transform Characterization

This general approach originated in the manuscript [Madych and Nelson (1983)]. Many more details can be found in the original literature as well as in [Wendland (2005a)]. The following result is due to [Iske (1994)].

**Theorem.** Suppose the complex-valued function $\Phi \in V$ possesses a generalized Fourier transform $\hat{\Phi}$ of order $m$ which is continuous on $\mathbb{R}^s \setminus \{0\}$. Then $\Phi$ is strictly conditionally positive definite of order $m$ if and only if $\hat{\Phi}$ is non-negative and non-vanishing.

**Remark.** This theorem states that strictly conditionally positive definite functions on $\mathbb{R}^s$ are characterized by the order of the singularity of their generalized Fourier transform at the origin, provided that this generalized Fourier transform is non-negative and non-zero.


Since integral characterizations similar to our earlier theorems of Schoenberg for positive definite radial functions are rather complicated in the conditionally positive definite case, we do not pursue the concept of a conditionally positive definite *radial* function here. Such theorems can be found in [Guo et al. (1993a)].

Examples of radial functions via the Fourier transform approach are given in the next chapter. In Chapter 9 we will explore the connection between completely and multiply monotone functions and conditionally positive definite radial functions.

## References

- Buhmann, M. D. (2003). *Radial Basis Functions: Theory and Implementations*. Cambridge University Press.
- Fasshauer, G. E. (2007). *Meshfree Approximation Methods with MATLAB*. World Scientific Publishers.
- Gel'fand, I. M. and Vilenkin, N. Ya. (1964). *Generalized Functions*, Vol. 4. Academic Press (New York).
- Guo, K., Hu, S. and Sun, X. (1993a). Conditionally positive definite functions and Laplace-Stieltjes integrals. *J. Approx. Theory* 74.
- Iske, A. (1994). *Charakterisierung bedingt positiv definiter Funktionen für multivariate Interpolationsmethoden mit radialen Basisfunktionen*. Ph.D. Dissertation, Universität Göttingen.
- Iske, A. (2004). *Multiresolution Methods in Scattered Data Modelling*. Lecture Notes in Computational Science and Engineering 37, Springer Verlag (Berlin).
- Madych, W. R. and Nelson, S. A. (1983). Multivariate interpolation: a variational theory. Manuscript.
- Sun, X. (1993b). Conditionally positive definite functions and their application to multivariate interpolation. *J. Approx. Theory* 74.
- Wendland, H. (2005a). *Scattered Data Approximation*. Cambridge University Press (Cambridge).


### MATHEMATICAL METHODS OF STATISTICS

MATHEMATICAL METHODS OF STATISTICS By HARALD CRAMER TROFESSOK IN THE UNIVERSITY OF STOCKHOLM Princeton PRINCETON UNIVERSITY PRESS 1946 TABLE OF CONTENTS. First Part. MATHEMATICAL INTRODUCTION. CHAPTERS

### Some Basic Properties of Vectors in n

These notes closely follow the presentation of the material given in David C. Lay s textbook Linear Algebra and its Applications (3rd edition). These notes are intended primarily for in-class presentation

### Review: Vector space

Math 2F Linear Algebra Lecture 13 1 Basis and dimensions Slide 1 Review: Subspace of a vector space. (Sec. 4.1) Linear combinations, l.d., l.i. vectors. (Sec. 4.3) Dimension and Base of a vector space.

### 5. Factoring by the QF method

5. Factoring by the QF method 5.0 Preliminaries 5.1 The QF view of factorability 5.2 Illustration of the QF view of factorability 5.3 The QF approach to factorization 5.4 Alternative factorization by the

### CHAPTER III - MARKOV CHAINS

CHAPTER III - MARKOV CHAINS JOSEPH G. CONLON 1. General Theory of Markov Chains We have already discussed the standard random walk on the integers Z. A Markov Chain can be viewed as a generalization of

### PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include 2 + 5.

PUTNAM TRAINING POLYNOMIALS (Last updated: November 17, 2015) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include

### 8.1 Examples, definitions, and basic properties

8 De Rham cohomology Last updated: May 21, 211. 8.1 Examples, definitions, and basic properties A k-form ω Ω k (M) is closed if dω =. It is exact if there is a (k 1)-form σ Ω k 1 (M) such that dσ = ω.

### West Windsor-Plainsboro Regional School District Algebra I Part 2 Grades 9-12

West Windsor-Plainsboro Regional School District Algebra I Part 2 Grades 9-12 Unit 1: Polynomials and Factoring Course & Grade Level: Algebra I Part 2, 9 12 This unit involves knowledge and skills relative

### 1 Orthogonal projections and the approximation

Math 1512 Fall 2010 Notes on least squares approximation Given n data points (x 1, y 1 ),..., (x n, y n ), we would like to find the line L, with an equation of the form y = mx + b, which is the best fit

### LECTURE NOTES: FINITE ELEMENT METHOD

LECTURE NOTES: FINITE ELEMENT METHOD AXEL MÅLQVIST. Motivation The finite element method has two main strengths... Geometry. Very complex geometries can be used. This is probably the main reason why finite

### ELEC-E8104 Stochastics models and estimation, Lecture 3b: Linear Estimation in Static Systems

Stochastics models and estimation, Lecture 3b: Linear Estimation in Static Systems Minimum Mean Square Error (MMSE) MMSE estimation of Gaussian random vectors Linear MMSE estimator for arbitrarily distributed

### Determinants. Dr. Doreen De Leon Math 152, Fall 2015

Determinants Dr. Doreen De Leon Math 52, Fall 205 Determinant of a Matrix Elementary Matrices We will first discuss matrices that can be used to produce an elementary row operation on a given matrix A.

### MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix.

MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix. Inverse matrix Definition. Let A be an n n matrix. The inverse of A is an n n matrix, denoted

### 1.5 Elementary Matrices and a Method for Finding the Inverse

.5 Elementary Matrices and a Method for Finding the Inverse Definition A n n matrix is called an elementary matrix if it can be obtained from I n by performing a single elementary row operation Reminder:

### Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

### 2.5 Gaussian Elimination

page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3

### ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1

19 MDOF Dynamic Systems ASEN 3112 Lecture 1 Slide 1 A Two-DOF Mass-Spring-Dashpot Dynamic System Consider the lumped-parameter, mass-spring-dashpot dynamic system shown in the Figure. It has two point

### Math 2331 Linear Algebra

2.2 The Inverse of a Matrix Math 2331 Linear Algebra 2.2 The Inverse of a Matrix Jiwen He Department of Mathematics, University of Houston jiwenhe@math.uh.edu math.uh.edu/ jiwenhe/math2331 Jiwen He, University

### Markov Chains, part I

Markov Chains, part I December 8, 2010 1 Introduction A Markov Chain is a sequence of random variables X 0, X 1,, where each X i S, such that P(X i+1 = s i+1 X i = s i, X i 1 = s i 1,, X 0 = s 0 ) = P(X

### Error Bounds for Solving Pseudodifferential Equations on Spheres by Collocation with Zonal Kernels

Error Bounds for Solving Pseudodifferential Equations on Spheres by Collocation with Zonal Kernels Tanya M. Morton 1) and Marian Neamtu ) Abstract. The problem of solving pseudodifferential equations on

### MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform

MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish

### A mixed FEM for the quad-curl eigenvalue problem

Noname manuscript No. (will be inserted by the editor) A mixed FEM for the quad-curl eigenvalue problem Jiguang Sun Received: date / Accepted: date Abstract The quad-curl problem arises in the study of

### Math 2280 Section 002 [SPRING 2013] 1

Math 2280 Section 002 [SPRING 2013] 1 Today well learn about a method for solving systems of differential equations, the method of elimination, that is very similar to the elimination methods we learned

### 1111: Linear Algebra I

1111: Linear Algebra I Dr. Vladimir Dotsenko (Vlad) Lecture 3 Dr. Vladimir Dotsenko (Vlad) 1111: Linear Algebra I Lecture 3 1 / 12 Vector product and volumes Theorem. For three 3D vectors u, v, and w,

### Explicit inverses of some tridiagonal matrices

Linear Algebra and its Applications 325 (2001) 7 21 wwwelseviercom/locate/laa Explicit inverses of some tridiagonal matrices CM da Fonseca, J Petronilho Depto de Matematica, Faculdade de Ciencias e Technologia,

### Math 54. Selected Solutions for Week Is u in the plane in R 3 spanned by the columns

Math 5. Selected Solutions for Week 2 Section. (Page 2). Let u = and A = 5 2 6. Is u in the plane in R spanned by the columns of A? (See the figure omitted].) Why or why not? First of all, the plane in

### 1 Inner Products and Norms on Real Vector Spaces

Math 373: Principles Techniques of Applied Mathematics Spring 29 The 2 Inner Product 1 Inner Products Norms on Real Vector Spaces Recall that an inner product on a real vector space V is a function from

### Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

### DISTRIBUTIONS AND FOURIER TRANSFORM

DISTRIBUTIONS AND FOURIER TRANSFORM MIKKO SALO Introduction. The theory of distributions, or generalized functions, provides a unified framework for performing standard calculus operations on nonsmooth

### 8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

### 1. For each of the following matrices, determine whether it is in row echelon form, reduced row echelon form, or neither.

Math Exam - Practice Problem Solutions. For each of the following matrices, determine whether it is in row echelon form, reduced row echelon form, or neither. (a) 5 (c) Since each row has a leading that

### MATH 289 PROBLEM SET 1: INDUCTION. 1. The induction Principle The following property of the natural numbers is intuitively clear:

MATH 89 PROBLEM SET : INDUCTION The induction Principle The following property of the natural numbers is intuitively clear: Axiom Every nonempty subset of the set of nonnegative integers Z 0 = {0,,, 3,

### LINEAR SYSTEMS. Consider the following example of a linear system:

LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x +2x 2 +3x 3 = 5 x + x 3 = 3 3x + x 2 +3x 3 = 3 x =, x 2 =0, x 3 = 2 In general we want to solve n equations in

### Limit processes are the basis of calculus. For example, the derivative. f f (x + h) f (x)

SEC. 4.1 TAYLOR SERIES AND CALCULATION OF FUNCTIONS 187 Taylor Series 4.1 Taylor Series and Calculation of Functions Limit processes are the basis of calculus. For example, the derivative f f (x + h) f

### THE FUNDAMENTAL THEOREM OF ALGEBRA VIA LINEAR ALGEBRA

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA LINEAR ALGEBRA KEITH CONRAD Our goal is to use abstract linear algebra to prove the following result, which is called the fundamental theorem of algebra. Theorem

### FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES CHRISTOPHER HEIL 1. Cosets and the Quotient Space Any vector space is an abelian group under the operation of vector addition. So, if you are have studied

### The Dirichlet Unit Theorem

Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if

### SOME RESULTS ON THE DRAZIN INVERSE OF A MODIFIED MATRIX WITH NEW CONDITIONS

International Journal of Analysis and Applications ISSN 2291-8639 Volume 5, Number 2 (2014, 191-197 http://www.etamaths.com SOME RESULTS ON THE DRAZIN INVERSE OF A MODIFIED MATRIX WITH NEW CONDITIONS ABDUL

### [1] Diagonal factorization

8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:

### Real Roots of Quadratic Interval Polynomials 1

Int. Journal of Math. Analysis, Vol. 1, 2007, no. 21, 1041-1050 Real Roots of Quadratic Interval Polynomials 1 Ibraheem Alolyan Mathematics Department College of Science, King Saud University P.O. Box:

### Diagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions

Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential

### Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

### Presentation 3: Eigenvalues and Eigenvectors of a Matrix

Colleen Kirksey, Beth Van Schoyck, Dennis Bowers MATH 280: Problem Solving November 18, 2011 Presentation 3: Eigenvalues and Eigenvectors of a Matrix Order of Presentation: 1. Definitions of Eigenvalues