# CHM 532 Notes on Fourier Series, Fourier Transforms and the Dirac Delta Function


These notes provide additional details about some of the new concepts from mathematics given in lecture. Some of this mathematics is analogous to properties of ordinary vectors in three-dimensional space, and we review a few properties of vectors first.

## 1 Vectors

A vector is a mathematical object that has both magnitude and direction. Variables that have only a magnitude are called scalars. An example of a vector is the coordinate vector $\vec r$ that is defined as the vector that connects the origin of some coordinate system to a point in space. The magnitude of the coordinate vector is the distance between the origin of coordinates and the point in space, and the direction of the vector points to the specific point. A vector of zero length, denoted $\vec 0$, points in no distinct direction.

We now introduce the definition of linear independence of two vectors:

**Definition:** Two distinct non-zero vectors $\vec u$ and $\vec v$ are said to be *linearly independent* if, given two scalar coefficients $a$ and $b$, $a\vec u + b\vec v = \vec 0$ only if $a = b = 0$. Two vectors that are not linearly independent are said to be *linearly dependent*.

Although the definition of linear dependence may appear a bit abstract, two linearly dependent vectors are parallel, and independent vectors are not parallel. An important set of linearly independent vectors are the Cartesian unit vectors $\hat\imath$, $\hat\jmath$ and $\hat k$. The Cartesian unit vector $\hat\imath$ is the coordinate vector of unit magnitude that begins at the origin of coordinates and lies on the $x$-axis. Similarly, $\hat\jmath$ is the unit vector on the $y$-axis and $\hat k$ is the unit vector along the $z$-axis. The three Cartesian unit vectors form a complete set of vectors in the sense of the following definition:

**Definition:** A set of vectors $\{\vec u_i\}$, $i = 1, 2, 3$, is said to be *complete* if given any vector $\vec v$ we can write
$$\vec v = \sum_{i=1}^{3} c_i \vec u_i$$
where the set $\{c_i\}$ are a set of scalar coefficients.

Because the Cartesian unit vectors are complete, we can write any vector $\vec v$ in the form
$$\vec v = v_x \hat\imath + v_y \hat\jmath + v_z \hat k. \tag{1}$$
The coefficients $v_x$, $v_y$ and $v_z$ are called the components of $\vec v$ along the $x$, $y$ and $z$-axes respectively.

There are two methods of multiplying two vectors. Of the two methods, one (called the cross product) can be defined only for three-dimensional vectors. The other, defined now, is generalizable in a manner to be made clear in subsequent sections:

**Definition:** Let $\vec u$ and $\vec v$ be two vectors that can be written in terms of their Cartesian components, $\vec u = u_x \hat\imath + u_y \hat\jmath + u_z \hat k$ and $\vec v = v_x \hat\imath + v_y \hat\jmath + v_z \hat k$. Then the *dot product* (or inner product or scalar product) of $\vec u$ and $\vec v$ is defined to be
$$\vec u \cdot \vec v = u_x v_x + u_y v_y + u_z v_z.$$

Notice that the dot product of two vectors is a scalar. It is possible to show that for two vectors $\vec u$ and $\vec v$,
$$\vec u \cdot \vec v = u v \cos\theta \tag{2}$$
where $u$ and $v$ are the lengths of vectors $\vec u$ and $\vec v$ respectively, and $\theta$ is the angle between $\vec u$ and $\vec v$. The expression for the dot product in terms of angles helps us understand the next definition:

**Definition:** Two vectors $\vec u$ and $\vec v$ are said to be *orthogonal* if $\vec u \cdot \vec v = 0$. The two vectors are said to be *orthonormal* if
$$\vec u \cdot \vec v = 0 \ \ (\vec u \neq \vec v), \qquad \vec u \cdot \vec u = \vec v \cdot \vec v = 1.$$
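The dot product and the angle formula of Eq. (2) are easy to check numerically. The following sketch (my own illustration, not part of the notes) does so in plain Python:

```python
import math

def dot(u, v):
    """Dot product of two 3-vectors: u.v = ux*vx + uy*vy + uz*vz."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    """Length of a vector, |u| = sqrt(u.u)."""
    return math.sqrt(dot(u, u))

u = (1.0, 2.0, 2.0)
v = (3.0, 0.0, 4.0)

# u.v = 1*3 + 2*0 + 2*4 = 11
uv = dot(u, v)

# Recover the angle between u and v from Eq. (2): cos(theta) = u.v / (|u||v|)
cos_theta = uv / (norm(u) * norm(v))

# Orthogonality: two of the Cartesian unit vectors give a zero dot product
assert dot((1, 0, 0), (0, 1, 0)) == 0
```

Here $|\vec u| = 3$ and $|\vec v| = 5$, so the computed $\cos\theta$ is $11/15$.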

From the expansion of the dot product in terms of the angle between the two vectors, it is clear that two vectors are orthogonal if they are perpendicular. Using the dot product, there is a simple method for determining the components of any vector. For example, consider
$$\vec v = v_x \hat\imath + v_y \hat\jmath + v_z \hat k. \tag{3}$$
We now take the dot product of the left and right hand sides of Eq. (3) with the unit vector $\hat\imath$:
$$\hat\imath \cdot \vec v = v_x\, \hat\imath \cdot \hat\imath + v_y\, \hat\imath \cdot \hat\jmath + v_z\, \hat\imath \cdot \hat k. \tag{4}$$
Because the Cartesian unit vectors are orthonormal,
$$\hat\imath \cdot \hat\jmath = \hat\imath \cdot \hat k = 0, \qquad \hat\imath \cdot \hat\imath = 1, \tag{5}$$
and
$$v_x = \hat\imath \cdot \vec v. \tag{6}$$
In a similar way,
$$v_y = \hat\jmath \cdot \vec v \tag{7}$$
and
$$v_z = \hat k \cdot \vec v. \tag{8}$$

## 2 Complete Sets of Functions

The properties of vectors outlined in the previous section are not limited to three-dimensional space. We can imagine vectors in $N$ dimensions described by $N$ orthonormal unit vectors. We can also imagine the case where $N$ becomes infinite. An application of such an infinite-dimensional vector space is found with sets of functions having special properties. We begin by considering functions on some domain $D$. We make the following two definitions:

**Definition:** A set of functions $\{u_n(x)\}$ is said to be *linearly independent* if
$$\sum_n c_n u_n(x) = 0$$
for all $x$ only if $c_n = 0$ for all $n$. If the set of functions is not linearly independent, the set is said to be *linearly dependent*.

Any member of a linearly dependent set of functions can be expressed as a linear sum of the remaining members of the set. Certain linearly independent sets are complete in the sense of this next definition:

**Definition:** A set of functions $\{u_n(x)\}$ defined on a domain $D$ is said to be *complete* if given any function $f(x)$ defined on $D$ we can write
$$f(x) = \sum_n c_n u_n(x)$$
where $c_n$ is a scalar coefficient.

We emphasize that all functions considered here can be complex, and as a consequence the coefficients $c_n$ can be complex numbers. It is important to see the analogy between the definition of a complete set of functions and a complete set of vectors. In each case, a general member of the space can be expressed as a simple linear combination of the members of the complete set. The members of the complete set are often called a *basis* for the space.

For functions, we have the analog of the dot product as well. We define the notation
$$\langle u | v \rangle = \int_D u^*(x)\, v(x)\, dx \tag{9}$$
where we use the standard notation $u^*(x)$ for the complex conjugate of the function $u(x)$. We see that $\langle u | v \rangle$ is a kind of product between two functions, the result of which is a number. In analogy with the dot product of two vectors, we refer to $\langle u | v \rangle$ as a scalar product. Additional analogies between the properties of vectors and sets of functions are found in the following definitions:

**Definition:** Two functions $u(x)$ and $v(x)$ defined on a domain $D$ are said to be *orthogonal* if
$$\langle u | v \rangle = \int_D u^*(x)\, v(x)\, dx = 0.$$

**Definition:** The *Kronecker delta* is defined by
$$\delta_{ij} = \begin{cases} 0 & i \neq j \\ 1 & i = j. \end{cases}$$

**Definition:** A set of functions $\{u_n(x)\}$ defined on a domain $D$ is said to be *orthonormal* if
$$\langle u_n | u_m \rangle = \int_D u_n^*(x)\, u_m(x)\, dx = \delta_{nm}.$$

An important class of functions defined on a domain $D$ are those that are both complete and orthonormal. Suppose $\{u_n(x)\}$ is such a complete orthonormal set and $f(x)$ is any function defined on the domain $D$. We can expand $f(x)$ in terms of our complete set, and a simple procedure can provide the expansion coefficients. We begin with the expression
$$f(x) = \sum_n c_n u_n(x). \tag{10}$$
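The scalar product of Eq. (9) can be evaluated by simple quadrature. As a numerical sketch (my own illustration, not from the notes), the following checks that the familiar set $\{\sqrt{2}\sin(n\pi x)\}$ on $0 \le x \le 1$ is orthonormal:

```python
import math

def scalar_product(u, v, a=0.0, b=1.0, npts=20000):
    """<u|v> of Eq. (9) on the domain [a, b], by midpoint quadrature.
    The functions here are real-valued, so conjugation is trivial."""
    h = (b - a) / npts
    xs = [a + (i + 0.5) * h for i in range(npts)]
    return h * sum(u(x) * v(x) for x in xs)

def u(n):
    """Member of the orthonormal set sqrt(2) sin(n pi x) on [0, 1]."""
    return lambda x: math.sqrt(2) * math.sin(n * math.pi * x)

same = scalar_product(u(3), u(3))       # expect delta_33 = 1
different = scalar_product(u(3), u(5))  # expect delta_35 = 0
```

The same `scalar_product` routine applied to $u_n^*$ and $f$ gives the expansion coefficient $c_n$ of Eq. (10).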

In the case of a complete orthonormal set of vectors, the coefficients associated with each vector are determined by taking the dot product of each vector with the sum [see Eq. (4) and Eqs. (6)-(8)]. We perform the same procedure to obtain the coefficients $c_n$ in Eq. (10). We multiply each side of Eq. (10) by the complex conjugate of a particular member of the complete set and then integrate over the domain $D$. The procedure, as given in the next equations, uses the scalar product of the complete orthonormal set instead of the dot product. We have
$$\int_D u_m^*(x)\, f(x)\, dx = \int_D u_m^*(x) \sum_n c_n u_n(x)\, dx \tag{11}$$
$$= \sum_n c_n \int_D u_m^*(x)\, u_n(x)\, dx \tag{12}$$
$$= \sum_n c_n \delta_{mn} = c_m. \tag{13}$$
In summary, we can expand any function $f(x)$ in a complete orthonormal set as given in Eq. (10) with the expansion coefficients given by
$$c_n = \int_D u_n^*(x)\, f(x)\, dx. \tag{14}$$
It is easiest to understand this procedure with the important example of the Fourier series.

## 3 The Fourier Series

Fourier proved that the set of functions $\{\sin nx, \cos nx\}$ for $n = 0, 1, 2, \ldots$ is complete and orthogonal for any piecewise continuous function $f(x)$ defined on the domain $-\pi \le x \le \pi$. For any piecewise continuous function $f(x)$, we can write
$$f(x) = \frac{B_0}{2} + \sum_{n=1}^{\infty} \left[ A_n \sin nx + B_n \cos nx \right] \tag{15}$$
where we have separated the $n = 0$ term to make the formulas for $B_n$ uniform for any $n$. Equation (15) is usually called a Fourier series. To obtain the expansion coefficients, the following integrals are of use:
$$\int_{-\pi}^{\pi} \sin nx \sin mx\, dx = \pi\, \delta_{mn} \tag{16}$$
$$\int_{-\pi}^{\pi} \cos nx \cos mx\, dx = \pi\, \delta_{mn} \tag{17}$$
and
$$\int_{-\pi}^{\pi} \sin nx \cos mx\, dx = 0. \tag{18}$$
The set of functions $\{\sin nx, \cos nx\}$ is orthogonal but not orthonormal on the domain $-\pi \le x \le \pi$. To find the coefficients of the Fourier series, we multiply each side of Eq. (15)

by a member of the complete set, multiply by $1/\pi$, and integrate on the interval $-\pi \le x \le \pi$. For example, to find the coefficient $A_m$,
$$\frac{1}{\pi} \int_{-\pi}^{\pi} \sin mx\, f(x)\, dx = \frac{1}{\pi} \int_{-\pi}^{\pi} \sin mx \left[ \frac{B_0}{2} + \sum_{n=1}^{\infty} \{ A_n \sin nx + B_n \cos nx \} \right] dx \tag{19}$$
$$= \frac{1}{\pi} \int_{-\pi}^{\pi} \sin mx\, \frac{B_0}{2}\, dx + \frac{1}{\pi} \sum_{n=1}^{\infty} A_n \int_{-\pi}^{\pi} \sin mx \sin nx\, dx + \frac{1}{\pi} \sum_{n=1}^{\infty} B_n \int_{-\pi}^{\pi} \sin mx \cos nx\, dx \tag{20}$$
$$= \sum_{n=1}^{\infty} A_n \delta_{mn} = A_m. \tag{21}$$
To find the coefficient $B_m$ for $m \neq 0$,
$$\frac{1}{\pi} \int_{-\pi}^{\pi} \cos mx\, f(x)\, dx = \frac{1}{\pi} \int_{-\pi}^{\pi} \cos mx \left[ \frac{B_0}{2} + \sum_{n=1}^{\infty} \{ A_n \sin nx + B_n \cos nx \} \right] dx \tag{22}$$
$$= \frac{1}{\pi} \int_{-\pi}^{\pi} \cos mx\, \frac{B_0}{2}\, dx + \frac{1}{\pi} \sum_{n=1}^{\infty} A_n \int_{-\pi}^{\pi} \cos mx \sin nx\, dx + \frac{1}{\pi} \sum_{n=1}^{\infty} B_n \int_{-\pi}^{\pi} \cos mx \cos nx\, dx \tag{23}$$
$$= \sum_{n=1}^{\infty} B_n \delta_{mn} = B_m. \tag{24}$$
Finally, to find $B_0$, we multiply both sides of Eq. (15) by $1/\pi$ and integrate over the same integration domain:
$$\frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\, dx = \frac{1}{\pi} \int_{-\pi}^{\pi} \left[ \frac{B_0}{2} + \sum_{n=1}^{\infty} \{ A_n \sin nx + B_n \cos nx \} \right] dx \tag{25}$$
$$= B_0. \tag{26}$$
The general expressions for the Fourier expansion coefficients are then given by
$$A_n = \frac{1}{\pi} \int_{-\pi}^{\pi} \sin nx\, f(x)\, dx \tag{27}$$
and
$$B_n = \frac{1}{\pi} \int_{-\pi}^{\pi} \cos nx\, f(x)\, dx \tag{28}$$
for all $n$. In applications, it is generally best to determine the coefficients $A_n$ and $B_n$ first for $n \neq 0$ and then find the expression for $B_0$ separately. As an example, consider the function
$$f(x) = \begin{cases} 1 & -\pi/2 \le x \le \pi/2 \\ 0 & \text{otherwise.} \end{cases} \tag{29}$$
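As a numerical sketch (my own illustration; the function names are not from the notes), the coefficient formulas (27)-(28) can be applied to this example by quadrature, and the resulting series can be summed and compared with the original function:

```python
import math

pi = math.pi

def square_wave(x):
    """The example function of Eq. (29): 1 on [-pi/2, pi/2], 0 elsewhere."""
    return 1.0 if -pi / 2 <= x <= pi / 2 else 0.0

def fourier_coeffs(f, n, npts=20000):
    """A_n and B_n of Eqs. (27)-(28), approximated by midpoint quadrature."""
    h = 2 * pi / npts
    xs = [-pi + (i + 0.5) * h for i in range(npts)]
    a_n = (h / pi) * sum(math.sin(n * x) * f(x) for x in xs)
    b_n = (h / pi) * sum(math.cos(n * x) * f(x) for x in xs)
    return a_n, b_n

# Expected: A_n = 0 for all n, B_0 = 1, and B_n = (2/(n*pi)) sin(n*pi/2)
a1, b1 = fourier_coeffs(square_wave, 1)    # B_1 should be near 2/pi
_, b0 = fourier_coeffs(square_wave, 0)     # B_0 should be near 1

def partial_sum(x, nmax):
    """Partial Fourier series B_0/2 + sum_n B_n cos(nx), exact coefficients."""
    total = 0.5
    for n in range(1, nmax + 1):
        total += (2 / (n * pi)) * math.sin(n * pi / 2) * math.cos(n * x)
    return total

inside = partial_sum(0.0, 101)             # inside the plateau, approaches 1
outside = partial_sum(3 * pi / 4, 101)     # outside the plateau, approaches 0
```

The partial sums converge slowly near the jumps at $x = \pm\pi/2$, but away from the jumps they settle quickly onto the square wave.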

The function $f(x)$ is often called a square wave. Using the expressions for the Fourier coefficients, it is easy to show that
$$A_n = 0, \qquad B_0 = 1, \qquad B_n = \frac{2}{n\pi} \sin\frac{n\pi}{2} \tag{30}$$
so that the Fourier series for $f(x)$ is given by
$$f(x) = \frac{1}{2} + \frac{2}{\pi} \left[ \cos x - \frac{\cos(3x)}{3} + \frac{\cos(5x)}{5} - \cdots \right]. \tag{31}$$
The square wave and the Fourier series representation of the square wave using a maximum Fourier index of 5 are shown in the figure.

[Figure: $f(x)$ versus $x$; the square wave and its Fourier series truncated at $n = 5$.]

The Fourier series for the square wave gives the discrete sum of sinusoidal waves into which the square wave is decomposed. Each sinusoidal wave contribution [e.g. the $\cos x$, $\cos 3x$, $\ldots$] is included with a weight, the largest weights having the greatest contribution to the overall wave form. We can think of the Fourier series as a sum of sinusoidal waves with decreasing discrete wavelengths. The Fourier coefficients provide information about the importance of the different possible wavelengths to the overall wave form.

The wavelengths possible in a Fourier series are discrete. There are times when it is important to decompose a function into sinusoidal waves having continuous rather than discrete wavelengths. The extension that makes such a decomposition possible is called the Fourier transform. Before introducing the Fourier transform, we need a continuous analog of the Kronecker delta. The continuous analog is the subject of the next section.

## 4 The Dirac Delta Function

The Dirac delta function, invented by P.A.M. Dirac for his important formulations of quantum mechanics, is a continuous analog of the Kronecker delta. To appreciate the connection

between the Kronecker delta and the delta function, it is important to recognize that subscripts are a kind of function. Recall that a function is a mapping between a set of numbers called the domain and another set of numbers called the range. When we use a subscript, we are defining a function whose domain is the set of integers. For example, suppose $a_j = j^2$ for $j = 0, 1, 2, \ldots$ Then for each integer $j$, $a_j$ is a number (the square of $j$), and it is clear that the subscript operation just defines a function. In this sense, the Kronecker delta $\delta_{ij}$ is a function of two integer variables $i$ and $j$. If $i = j$, the Kronecker delta returns 1, and the Kronecker delta returns 0 if $i \neq j$.

We next consider the effect of the Kronecker delta on a discrete sum. Consider
$$\sum_{i=0}^{\infty} a_i \delta_{ij} = a_j. \tag{32}$$
The result of the summation is $a_j$, because for any value of $i \neq j$, the Kronecker delta is 0. The Kronecker delta picks a single member from such an infinite sum. To provide the continuous analogue of the Kronecker delta, we make the following definition:

**Definition:** The *Dirac delta function* (or just the delta function) $\delta(x - y)$ is defined by
$$\delta(x - y) = \begin{cases} 0 & x \neq y \\ \infty & x = y \end{cases}$$
such that for any function $f(x)$
$$\int_{-\infty}^{\infty} f(x)\, \delta(x - y)\, dx = f(y).$$

The Kronecker delta extracts a single element from an infinite sum, and the delta function extracts the value of a function at a single point from an integral. The Kronecker delta is zero everywhere except where $i = j$, when the Kronecker delta is 1. The delta function is zero everywhere except at the point $x = y$, where it is infinite in a way that allows the delta function to extract the value of a function at a single point. It is important to emphasize that as defined, the delta function is not a function in the usual sense. The delta function does not satisfy the properties reserved for functions. When Dirac introduced the delta function in the 1930s, he introduced new mathematics that had no rigorous foundation. It was Dirac's insight that such a mathematical object had to exist.
The rigorous mathematics connected with the delta function was developed in the 1950s with the theory of distributions. From our point of view, we can follow Dirac and avoid questions of mathematical rigor. We need to learn the rules for using the delta function, always keeping in mind that some standard mathematical practices may not work. We now try some examples of applying the delta function in integration. Consider the integral
$$\int_{-\infty}^{\infty} x^3\, \delta(x - 2)\, dx.$$

The integrand is zero everywhere except at the point $x = 2$. Then the integral must be 8. Next consider
$$\int_{-\infty}^{\infty} x^3\, \delta(x + 2)\, dx.$$
In this case, the integrand is zero everywhere except at the point $x = -2$. Then the integral is $-8$. Next consider
$$\int_{0}^{\infty} x^3\, \delta(x + 2)\, dx.$$
In this case, the integrand is zero everywhere except at $x = -2$, a point that is outside the range of integration. Consequently, the integral is 0.

An important property of the delta function can be understood from the integral
$$\int_{-\infty}^{\infty} \delta(x)\, dx.$$
We can rewrite this integral
$$\int_{-\infty}^{\infty} \delta(x)\, dx = \int_{-\infty}^{\infty} \delta(x - 0)\, f(x)\, dx \tag{33}$$
where we take $f(x) = 1$ for all $x$. Because the integrand is zero except at the point $x = 0$, the result is $f(0) = 1$. The implication is that although the value of the integrand is infinite at $x = 0$, the integral is finite and equal to 1; i.e. the delta function is an infinite spike at a single point with unit area. This description of the delta function allows the representation of the delta function by various pre-limiting forms. For example, a Gaussian is a pre-limit form of a delta function as its width (standard deviation) becomes small:
$$\delta(x) = \lim_{\sigma \to 0} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/2\sigma^2}. \tag{34}$$
An important homework problem in CHM 532 is the proof that if $\{\varphi_n(x)\}$ is a complete and orthonormal set of functions, it follows that
$$\sum_{n=0}^{\infty} \varphi_n^*(a)\, \varphi_n(x) = \delta(x - a). \tag{35}$$
Because the delta function is not really a function in the usual sense, it is of interest to see how Eq. (35) evolves numerically. As discussed in class, the set of functions $\{(2/L)^{1/2} \sin(n\pi x/L)\}$ for $n = 1, 2, \ldots$ is complete and orthonormal on the domain $0 \le x \le L$ for all functions that vanish at $x = 0$ and $x = L$. Taking $L = 1$ and $a = 1/4$, we evaluate the sum
$$S(x) = 2 \sum_{n=1}^{N} \sin(n\pi x) \sin\frac{n\pi}{4} \tag{36}$$
for different values of $N$. From Eq. (35) we know that
$$\lim_{N \to \infty} S(x) = \delta(x - 0.25). \tag{37}$$
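Both limiting behaviors can be explored numerically. The sketch below (my own illustration, not part of the notes) checks the sifting property and unit area of the Gaussian pre-limit form of Eq. (34), and evaluates the peak of the partial sum $S(x)$ of Eq. (36):

```python
import math

pi = math.pi

def gauss_delta(x, sigma):
    """Gaussian pre-limit form of the delta function, Eq. (34)."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * pi))

def sift(f, y, sigma=0.01, half_width=1.0, npts=100000):
    """Integral of f(x) * delta_sigma(x - y) by midpoint quadrature; the
    narrow Gaussian is negligible outside [y - half_width, y + half_width]."""
    h = 2 * half_width / npts
    xs = [y - half_width + (i + 0.5) * h for i in range(npts)]
    return h * sum(f(x) * gauss_delta(x - y, sigma) for x in xs)

# Sifting: the integral of x^3 * delta(x - 2) should approach 2^3 = 8
approx = sift(lambda x: x ** 3, 2.0)
# Unit area: the integral of delta(x) alone should approach 1, as in Eq. (33)
area = sift(lambda x: 1.0, 0.0)

def S(x, N):
    """Partial sum of Eq. (36), with L = 1 and a = 1/4."""
    return 2 * sum(math.sin(n * pi * x) * math.sin(n * pi / 4)
                   for n in range(1, N + 1))

# The peak of S at x = 0.25 grows with N, as expected from Eq. (37)
peaks = [S(0.25, N) for N in (8, 16, 32, 64)]
```

With a finite $\sigma$ the sifting result is not exactly 8 (it carries a small $O(\sigma^2)$ correction), which is precisely why the $\sigma \to 0$ limit is needed.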

Graphs of $S(x)$ for $N = 8$, 16, 32 and 64 are plotted below:

[Figure: $S(x)$ versus $x$ for $N = 8$ and $N = 16$.]

[Figure: $S(x)$ versus $x$ for $N = 32$ and $N = 64$.]

We see that $S(x)$ constructively grows about the point $x = 0.25$ while the contributions to $S(x)$ outside this region approach zero. We can also observe that the magnitude of $S(x)$ about the point $x = 0.25$ increases with increasing $N$.

## 5 The Fourier Transform

We have seen how the Fourier series allows the decomposition of a function on a finite domain into a series of sinusoidal functions at discrete wavelengths. The Fourier transform allows the same kind of decomposition for functions defined on an infinite interval in terms of a continuous set of wavelengths. The basic notion of the Fourier transform is based on the

completeness of the set of functions
$$\varphi_k(x) = \frac{1}{\sqrt{2\pi}}\, e^{ikx} \tag{38}$$
for $-\infty < k < \infty$. The exponential evaluated at an imaginary argument is given in terms of Euler's formula
$$e^{ix} = \cos x + i \sin x. \tag{39}$$
The parameter $k$ is often called the wave vector, and $k$ is simply related to the wavelength of the sinusoidal functions given in Eq. (38). From the properties of the trigonometric functions, it is easy to show that
$$\exp(ikx) = \exp\left( ik \left[ x + \frac{2\pi}{k} \right] \right) \tag{40}$$
so that the wavelength is expressed in terms of the wave vector by
$$\lambda = \frac{2\pi}{k}. \tag{41}$$
For a continuous complete set of functions, Eq. (35) is represented by an integral rather than a sum:
$$\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i(k - k')x}\, dx = \delta(k - k'). \tag{42}$$
Equation (42) is often called the integral representation of the delta function. It is important not to attempt to evaluate Eq. (42) like an ordinary integral; i.e. we cannot evaluate the simple integral of the exponential in the usual way and then attempt to evaluate the result at the end points of $\pm\infty$. The integral in Eq. (42) so evaluated does not exist. We must just recognize that integrals of the form given in Eq. (42) are delta functions. Remember, delta functions are not really functions, and we learn to work with them using certain rules like that given in Eq. (42).

The Fourier transform of a function gives the coefficients of a linear combination of the complete set $\{(1/\sqrt{2\pi})\exp(ikx)\}$. Because the complete set is continuous, we use an integral rather than a sum. We then write for any $f(x)$ for which the Fourier transform exists
$$f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} g(k)\, e^{ikx}\, dk \tag{43}$$
where $g(k)$ is said to be the Fourier transform of $f(x)$. The Fourier transform $g(k)$ is to the Fourier integral what the coefficients $A_n$ and $B_n$ are to a Fourier series [see Eq. (15)]; i.e. the expansion coefficients. To determine the Fourier transform $g(k)$, we follow the same procedure we have used for complete sets of functions. We multiply each side of Eq. (43) by the complex conjugate of a member of the complete set, and we then integrate over the domain of $x$.
We then obtain
$$\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-ik'x}\, f(x)\, dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} dx\, e^{-ik'x}\, \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} g(k)\, e^{ikx}\, dk \tag{44}$$

$$= \int_{-\infty}^{\infty} dk\, g(k)\, \frac{1}{2\pi} \int_{-\infty}^{\infty} dx\, e^{i(k - k')x} \tag{45}$$
$$= \int_{-\infty}^{\infty} dk\, g(k)\, \delta(k - k') \tag{46}$$
$$= g(k'). \tag{47}$$
We see that $f(x)$ is recovered from $g(k)$ by the same operation, apart from the sign of the exponent, that produces $g(k)$ from $f(x)$; the formulas are symmetric. To illustrate Fourier transforms, we calculate the Fourier transform of the function given in Eq. (29), for which we have already determined the Fourier series:
$$g(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx \tag{48}$$
$$= \frac{1}{\sqrt{2\pi}} \int_{-\pi/2}^{\pi/2} e^{-ikx}\, dx \tag{49}$$
$$= \frac{1}{\sqrt{2\pi}} \left[ \frac{e^{-ikx}}{-ik} \right]_{-\pi/2}^{\pi/2} \tag{50}$$
$$= \frac{1}{ik\sqrt{2\pi}} \left[ e^{ik\pi/2} - e^{-ik\pi/2} \right] \tag{51}$$
$$= \sqrt{\frac{2}{\pi}}\, \frac{1}{k} \sin\left( \frac{k\pi}{2} \right). \tag{52}$$
We interpret $g(k)$ to mean the relative weights of the continuous set of sinusoidal functions, each having wavelength $2\pi/k$, that combine to give $f(x)$; i.e. we can write
$$f(x) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\sin(k\pi/2)}{k}\, e^{ikx}\, dk. \tag{53}$$
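As a closing numerical sketch (my own check, not part of the notes), the closed form of Eq. (52) can be compared against direct quadrature of Eq. (48). Since the square wave is real and even, only the cosine part of $e^{-ikx}$ survives the integration:

```python
import math

pi = math.pi

def g_numeric(k, npts=20000):
    """Fourier transform of the square wave, Eq. (48), by midpoint quadrature.
    f(x) is even, so only the cosine part of exp(-ikx) contributes."""
    h = pi / npts                   # integrate over the support [-pi/2, pi/2]
    xs = [-pi / 2 + (i + 0.5) * h for i in range(npts)]
    return (1 / math.sqrt(2 * pi)) * h * sum(math.cos(k * x) for x in xs)

def g_exact(k):
    """The closed form of Eq. (52): sqrt(2/pi) * sin(k*pi/2) / k."""
    return math.sqrt(2 / pi) * math.sin(k * pi / 2) / k

# Compare at a few wave vectors (k = 0 is excluded: Eq. (52) is 0/0 there,
# with the finite limiting value sqrt(pi/2))
diffs = [abs(g_numeric(k) - g_exact(k)) for k in (0.5, 1.0, 3.0)]
```

The agreement is limited only by the quadrature step, illustrating that the transform really does carry the same information as the function itself.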


### 1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two non-zero vectors u and v,

1.3. DOT PRODUCT 19 1.3 Dot Product 1.3.1 Definitions and Properties The dot product is the first way to multiply two vectors. The definition we will give below may appear arbitrary. But it is not. It

### 1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

### MATH 304 Linear Algebra Lecture 24: Scalar product.

MATH 304 Linear Algebra Lecture 24: Scalar product. Vectors: geometric approach B A B A A vector is represented by a directed segment. Directed segment is drawn as an arrow. Different arrows represent

### ELEMENTS OF VECTOR ALGEBRA

ELEMENTS OF VECTOR ALGEBRA A.1. VECTORS AND SCALAR QUANTITIES We have now proposed sets of basic dimensions and secondary dimensions to describe certain aspects of nature, but more than just dimensions

### 1 Introduction to Matrices

1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

### Gauss s Law for Gravity

Gauss s Law for Gravity D.G. impson, Ph.D. Department of Physical ciences and Engineering Prince George s Community College December 6, 2006 Newton s Law of Gravity Newton s law of gravity gives the force

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### Math 241, Exam 1 Information.

Math 241, Exam 1 Information. 9/24/12, LC 310, 11:15-12:05. Exam 1 will be based on: Sections 12.1-12.5, 14.1-14.3. The corresponding assigned homework problems (see http://www.math.sc.edu/ boylan/sccourses/241fa12/241.html)

### Vector algebra Christian Miller CS Fall 2011

Vector algebra Christian Miller CS 354 - Fall 2011 Vector algebra A system commonly used to describe space Vectors, linear operators, tensors, etc. Used to build classical physics and the vast majority

### Vocabulary Words and Definitions for Algebra

Name: Period: Vocabulary Words and s for Algebra Absolute Value Additive Inverse Algebraic Expression Ascending Order Associative Property Axis of Symmetry Base Binomial Coefficient Combine Like Terms

### Høgskolen i Narvik Sivilingeniørutdanningen STE6237 ELEMENTMETODER. Oppgaver

Høgskolen i Narvik Sivilingeniørutdanningen STE637 ELEMENTMETODER Oppgaver Klasse: 4.ID, 4.IT Ekstern Professor: Gregory A. Chechkin e-mail: chechkin@mech.math.msu.su Narvik 6 PART I Task. Consider two-point

### Notes for STA 437/1005 Methods for Multivariate Data

Notes for STA 437/1005 Methods for Multivariate Data Radford M. Neal, 26 November 2010 Random Vectors Notation: Let X be a random vector with p elements, so that X = [X 1,..., X p ], where denotes transpose.

### Engineering Math II Spring 2015 Solutions for Class Activity #2

Engineering Math II Spring 15 Solutions for Class Activity # Problem 1. Find the area of the region bounded by the parabola y = x, the tangent line to this parabola at 1, 1), and the x-axis. Then find

### The continuous and discrete Fourier transforms

FYSA21 Mathematical Tools in Science The continuous and discrete Fourier transforms Lennart Lindegren Lund Observatory (Department of Astronomy, Lund University) 1 The continuous Fourier transform 1.1

### Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

### MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

### FURTHER VECTORS (MEI)

Mathematics Revision Guides Further Vectors (MEI) (column notation) Page of MK HOME TUITION Mathematics Revision Guides Level: AS / A Level - MEI OCR MEI: C FURTHER VECTORS (MEI) Version : Date: -9-7 Mathematics

### We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

### 4 Sums of Random Variables

Sums of a Random Variables 47 4 Sums of Random Variables Many of the variables dealt with in physics can be expressed as a sum of other variables; often the components of the sum are statistically independent.

### MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform

MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish

### LINES AND PLANES IN R 3

LINES AND PLANES IN R 3 In this handout we will summarize the properties of the dot product and cross product and use them to present arious descriptions of lines and planes in three dimensional space.

### The slope m of the line passes through the points (x 1,y 1 ) and (x 2,y 2 ) e) (1, 3) and (4, 6) = 1 2. f) (3, 6) and (1, 6) m= 6 6

Lines and Linear Equations Slopes Consider walking on a line from left to right. The slope of a line is a measure of its steepness. A positive slope rises and a negative slope falls. A slope of zero means

### x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.

Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability

### 13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

### NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

### Vectors, Gradient, Divergence and Curl.

Vectors, Gradient, Divergence and Curl. 1 Introduction A vector is determined by its length and direction. They are usually denoted with letters with arrows on the top a or in bold letter a. We will use

### The Point-Slope Form

7. The Point-Slope Form 7. OBJECTIVES 1. Given a point and a slope, find the graph of a line. Given a point and the slope, find the equation of a line. Given two points, find the equation of a line y Slope

### National 5 Mathematics Course Assessment Specification (C747 75)

National 5 Mathematics Course Assessment Specification (C747 75) Valid from August 013 First edition: April 01 Revised: June 013, version 1.1 This specification may be reproduced in whole or in part for

### COMPLEX NUMBERS. a bi c di a c b d i. a bi c di a c b d i For instance, 1 i 4 7i 1 4 1 7 i 5 6i

COMPLEX NUMBERS _4+i _-i FIGURE Complex numbers as points in the Arg plane i _i +i -i A complex number can be represented by an expression of the form a bi, where a b are real numbers i is a symbol with

### Introduction to Sturm-Liouville Theory

Introduction to Ryan C. Trinity University Partial Differential Equations April 10, 2012 Inner products with weight functions Suppose that w(x) is a nonnegative function on [a,b]. If f (x) and g(x) are

### Review of Vector Analysis in Cartesian Coordinates

R. evicky, CBE 6333 Review of Vector Analysis in Cartesian Coordinates Scalar: A quantity that has magnitude, but no direction. Examples are mass, temperature, pressure, time, distance, and real numbers.

### NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS

NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS TEST DESIGN AND FRAMEWORK September 2014 Authorized for Distribution by the New York State Education Department This test design and framework document

### LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

### Math 5311 Gateaux differentials and Frechet derivatives

Math 5311 Gateaux differentials and Frechet derivatives Kevin Long January 26, 2009 1 Differentiation in vector spaces Thus far, we ve developed the theory of minimization without reference to derivatives.

### Euler s Formula Math 220

Euler s Formula Math 0 last change: Sept 3, 05 Complex numbers A complex number is an expression of the form x+iy where x and y are real numbers and i is the imaginary square root of. For example, + 3i

### Numerical Methods Lecture 5 - Curve Fitting Techniques

Numerical Methods Lecture 5 - Curve Fitting Techniques Topics motivation interpolation linear regression higher order polynomial form exponential form Curve fitting - motivation For root finding, we used

### Section 4.4 Inner Product Spaces

Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer

### by the matrix A results in a vector which is a reflection of the given

Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that