Lecture Notes on Dirac delta function, Fourier transform, Laplace transform

Luca Salasnich
Department of Physics and Astronomy "Galileo Galilei", University of Padua


Contents

1 Dirac Delta Function
2 Fourier Transform
3 Laplace Transform


Chapter 1
Dirac Delta Function

In 1880 the self-taught electrical scientist Oliver Heaviside introduced the following function

    \Theta(x) = \begin{cases} 1 & \text{for } x > 0 \\ 0 & \text{for } x < 0 \end{cases}    (1.1)

which is now called the Heaviside step function. This is a discontinuous function, with a discontinuity of the first kind (a jump) at x = 0, which is often used in the analysis of electric signals. Moreover, it is important to stress that the Heaviside step function also appears in quantum statistical physics. In fact, the Fermi-Dirac function (or Fermi-Dirac distribution)

    F_\beta(x) = \frac{1}{e^{\beta x} + 1},    (1.2)

proposed in 1926 by Enrico Fermi and Paul Dirac to describe the quantum statistical distribution of electrons in metals, where \beta = 1/(k_B T) is the inverse of the absolute temperature T (with k_B the Boltzmann constant) and x = \epsilon - \mu is the energy \epsilon of the electron measured with respect to the chemical potential \mu, becomes the function \Theta(-x) in the limit of very small temperature T, namely

    \lim_{\beta \to +\infty} F_\beta(x) = \Theta(-x) = \begin{cases} 0 & \text{for } x > 0 \\ 1 & \text{for } x < 0 \end{cases}.    (1.3)

Inspired by the work of Heaviside, and with the purpose of describing an extremely localized charge density, in 1930 Paul Dirac investigated the following function

    \delta(x) = \begin{cases} +\infty & \text{for } x = 0 \\ 0 & \text{for } x \neq 0 \end{cases}    (1.4)

imposing that

    \int_{-\infty}^{+\infty} \delta(x) \, dx = 1.    (1.5)

Unfortunately, this property of \delta(x) is not compatible with the definition (1.4). In fact, from Eq. (1.4) it follows that the integral must be equal to zero. In other words, there does not exist a function \delta(x) which satisfies both Eq. (1.4) and Eq. (1.5). Dirac suggested a way to circumvent this problem: interpret the integral of Eq. (1.5) as

    \int_{-\infty}^{+\infty} \delta(x) \, dx = \lim_{\epsilon \to 0^+} \int_{-\infty}^{+\infty} \delta_\epsilon(x) \, dx,    (1.6)

where \delta_\epsilon(x) is a generic function of both x and \epsilon such that

    \lim_{\epsilon \to 0^+} \delta_\epsilon(x) = \begin{cases} +\infty & \text{for } x = 0 \\ 0 & \text{for } x \neq 0 \end{cases},    (1.7)

    \int_{-\infty}^{+\infty} \delta_\epsilon(x) \, dx = 1.    (1.8)

Thus, the Dirac delta function \delta(x) is a generalized function (but, strictly speaking, not a function) which satisfies Eqs. (1.4) and (1.5), with the caveat that the integral in Eq. (1.5) must be interpreted according to Eq. (1.6), where the functions \delta_\epsilon(x) satisfy Eqs. (1.7) and (1.8).

There are infinitely many functions \delta_\epsilon(x) which satisfy Eqs. (1.7) and (1.8). Among them there is, for instance, the following Gaussian,

    \delta_\epsilon(x) = \frac{1}{\epsilon \sqrt{\pi}} \, e^{-x^2/\epsilon^2},    (1.9)

which clearly satisfies Eq. (1.7) and whose integral is equal to 1 for any value of \epsilon. Another example is the function

    \delta_\epsilon(x) = \begin{cases} 1/\epsilon & \text{for } |x| \leq \epsilon/2 \\ 0 & \text{for } |x| > \epsilon/2 \end{cases},    (1.10)

which again satisfies Eq. (1.7) and whose integral is equal to 1 for any value of \epsilon. In the following we shall use Eq. (1.10) to study the properties of the Dirac delta function. According to the approach of Dirac, an integral involving \delta(x) must be interpreted as the limit of the corresponding integral involving \delta_\epsilon(x), namely

    \int_{-\infty}^{+\infty} \delta(x) \, f(x) \, dx = \lim_{\epsilon \to 0^+} \int_{-\infty}^{+\infty} \delta_\epsilon(x) \, f(x) \, dx,    (1.11)

for any function f(x). It is then easy to prove, by using Eq. (1.10) and the mean value theorem, that

    \int_{-\infty}^{+\infty} \delta(x) \, f(x) \, dx = f(0).    (1.12)

Similarly one finds

    \int_{-\infty}^{+\infty} \delta(x - c) \, f(x) \, dx = f(c).    (1.13)
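The limiting procedure of Eq. (1.11) is easy to probe numerically. The following Python sketch (our illustration, not part of the original notes) integrates the Gaussian nascent delta of Eq. (1.9) against an arbitrary smooth test function and shows the result approaching f(0) as ε shrinks; the midpoint quadrature, the step count, and the choice of test function are our own.

```python
import math

def delta_eps(x, eps):
    # Gaussian nascent delta of Eq. (1.9): exp(-x^2/eps^2) / (eps * sqrt(pi))
    return math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def midpoint_integral(g, a, b, n=40000):
    # simple midpoint-rule quadrature on [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = math.cos  # arbitrary smooth test function with f(0) = 1

for eps in (1.0, 0.1, 0.01):
    val = midpoint_integral(lambda x: delta_eps(x, eps) * f(x), -5.0, 5.0)
    print(eps, val)  # tends to f(0) = 1 as eps -> 0+
```

For eps = 1 the result is noticeably below 1 (the Gaussian still "sees" the curvature of cos x), while for eps = 0.01 it agrees with f(0) to several digits.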

Several other properties of the Dirac delta function \delta(x) follow from its definition. In particular,

    \delta(-x) = \delta(x),    (1.14)

    \delta(a x) = \frac{1}{|a|} \, \delta(x) \quad \text{with } a \neq 0,    (1.15)

    \delta(f(x)) = \sum_i \frac{1}{|f'(x_i)|} \, \delta(x - x_i) \quad \text{with } f(x_i) = 0.    (1.16)

Up to now we have considered the Dirac delta function \delta(x) of only one variable x. It is not difficult to define a Dirac delta function \delta^{(D)}(\mathbf{r}) in the case of a D-dimensional domain \mathbb{R}^D, where \mathbf{r} = (x_1, x_2, ..., x_D) \in \mathbb{R}^D is a D-dimensional vector:

    \delta^{(D)}(\mathbf{r}) = \begin{cases} +\infty & \text{for } \mathbf{r} = \mathbf{0} \\ 0 & \text{for } \mathbf{r} \neq \mathbf{0} \end{cases}    (1.17)

and

    \int_{\mathbb{R}^D} \delta^{(D)}(\mathbf{r}) \, d^D r = 1.    (1.18)

Notice that sometimes \delta^{(D)}(\mathbf{r}) is written using the simpler notation \delta(\mathbf{r}). Clearly, also in this case one must interpret the integral of Eq. (1.18) as

    \int_{\mathbb{R}^D} \delta^{(D)}(\mathbf{r}) \, d^D r = \lim_{\epsilon \to 0^+} \int_{\mathbb{R}^D} \delta_\epsilon^{(D)}(\mathbf{r}) \, d^D r,    (1.19)

where \delta_\epsilon^{(D)}(\mathbf{r}) is a generic function of both \mathbf{r} and \epsilon such that

    \lim_{\epsilon \to 0^+} \delta_\epsilon^{(D)}(\mathbf{r}) = \begin{cases} +\infty & \text{for } \mathbf{r} = \mathbf{0} \\ 0 & \text{for } \mathbf{r} \neq \mathbf{0} \end{cases},    (1.20)

    \lim_{\epsilon \to 0^+} \int_{\mathbb{R}^D} \delta_\epsilon^{(D)}(\mathbf{r}) \, d^D r = 1.    (1.21)

Several properties of \delta(x) remain valid also for \delta^{(D)}(\mathbf{r}). Nevertheless, some properties of \delta^{(D)}(\mathbf{r}) depend on the space dimension D. For instance, one can prove the remarkable formula

    \delta^{(D)}(\mathbf{r}) = \begin{cases} \frac{1}{2\pi} \nabla^2 \left( \ln |\mathbf{r}| \right) & \text{for } D = 2 \\[4pt] -\frac{1}{D(D-2)V_D} \nabla^2 \left( \frac{1}{|\mathbf{r}|^{D-2}} \right) & \text{for } D \geq 3 \end{cases},    (1.22)

where \nabla^2 = \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + ... + \frac{\partial^2}{\partial x_D^2} is the Laplacian and V_D = \pi^{D/2}/\Gamma(1 + D/2) is the volume of a D-dimensional hypersphere of unit radius, with \Gamma(x) the Euler gamma function. In the case D = 3 the previous formula becomes

    \delta^{(3)}(\mathbf{r}) = -\frac{1}{4\pi} \nabla^2 \left( \frac{1}{|\mathbf{r}|} \right),    (1.23)

which can be used to transform the Gauss law of electromagnetism from its integral form to its differential form.
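The scaling property (1.15) can also be checked numerically: with a nascent delta such as the rectangle of Eq. (1.10), the integral of δ_ε(ax) f(x) approaches f(0)/|a| as ε shrinks. A minimal sketch (the helper names, grid size, and sample values are our own choices):

```python
import math

def delta_eps(x, eps):
    # rectangular nascent delta of Eq. (1.10): height 1/eps on |x| <= eps/2
    return 1.0 / eps if abs(x) <= eps / 2 else 0.0

def midpoint_integral(g, a, b, n=400000):
    # simple midpoint-rule quadrature on [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

a_scale, eps = -3.0, 1e-3
f = math.exp  # arbitrary test function with f(0) = 1

lhs = midpoint_integral(lambda x: delta_eps(a_scale * x, eps) * f(x), -1.0, 1.0)
print(lhs, 1.0 / abs(a_scale))  # both close to f(0)/|a| = 1/3
```

The fine grid is needed because δ_ε(ax) is supported on the very narrow interval |x| ≤ ε/(2|a|).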


Chapter 2
Fourier Transform

It was known from the times of Archimedes that, in some cases, the infinite sum of decreasing numbers can produce a finite result. But it was only in 1593 that the mathematician François Viète gave the first example of a function, f(x) = 1/(1-x), written as an infinite sum of power functions. This function is nothing else than the geometric series, given by

    \frac{1}{1-x} = \sum_{n=0}^{\infty} x^n, \quad \text{for } |x| < 1.    (2.1)

In 1714 Brook Taylor suggested that any sufficiently regular real function f(x) which is infinitely differentiable at x_0 can be written as a series of powers, i.e.

    f(x) = \sum_{n=0}^{\infty} c_n (x - x_0)^n,    (2.2)

where the coefficients c_n are given by

    c_n = \frac{1}{n!} f^{(n)}(x_0),    (2.3)

with f^{(n)}(x) the n-th derivative of the function f(x). The series (2.2) is now called the Taylor series, and becomes the so-called Maclaurin series if x_0 = 0. Clearly, the geometric series (2.1) is nothing else than the Maclaurin series of 1/(1-x), where c_n = 1 for all n. We observe that it is quite easy to obtain the Taylor coefficients: it is sufficient to suppose that Eq. (2.2) is valid and then derive the coefficients c_n by calculating the derivatives of f(x) at x = x_0; in this way one gets Eq. (2.3).

In 1807 Jean Baptiste Joseph Fourier, who was interested in wave propagation and periodic phenomena, found that any sufficiently regular real function f(x) which is periodic, i.e. such that

    f(x + L) = f(x),    (2.4)

where L is the period, can be written as an infinite sum of sinusoidal functions, namely

    f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\left( \frac{2\pi n}{L} x \right) + b_n \sin\left( \frac{2\pi n}{L} x \right) \right],    (2.5)

where

    a_n = \frac{2}{L} \int_{-L/2}^{L/2} f(y) \cos\left( \frac{2\pi n}{L} y \right) dy,    (2.6)

    b_n = \frac{2}{L} \int_{-L/2}^{L/2} f(y) \sin\left( \frac{2\pi n}{L} y \right) dy.    (2.7)

It is quite easy to justify the series (2.5), which is now called the Fourier series. In fact, it is sufficient to suppose that Eq. (2.5) is valid and then derive the coefficients a_n and b_n by multiplying both sides of Eq. (2.5) by \cos(2\pi n x/L) and \sin(2\pi n x/L) respectively, and integrating over one period L; in this way one gets Eqs. (2.6) and (2.7). It is important to stress that, in general, the real variable x of the function f(x) can represent a space coordinate but also a time coordinate. In the former case L gives the spatial periodicity and 2\pi/L is the wavenumber, while in the latter case L is the time periodicity and 2\pi/L the angular frequency.

Taking into account the Euler formula

    e^{i \frac{2\pi n}{L} x} = \cos\left( \frac{2\pi n}{L} x \right) + i \sin\left( \frac{2\pi n}{L} x \right),    (2.8)

with i = \sqrt{-1} the imaginary unit, Fourier observed that his series (2.5) can be rewritten in the very elegant form

    f(x) = \sum_{n=-\infty}^{+\infty} f_n \, e^{i \frac{2\pi n}{L} x},    (2.9)

where

    f_n = \frac{1}{L} \int_{-L/2}^{L/2} f(y) \, e^{-i \frac{2\pi n}{L} y} \, dy    (2.10)

are complex coefficients, with f_0 = a_0/2, f_n = (a_n - i b_n)/2 if n > 0 and f_n = (a_{-n} + i b_{-n})/2 if n < 0; thus f_n^* = f_{-n}. The complex representation (2.9) suggests that the function f(x) can be periodic but complex, i.e. such that f : \mathbb{R} \to \mathbb{C}. Moreover, one can consider the limit L \to +\infty of infinite periodicity, i.e. a function which is not periodic. In this limit Eq. (2.9) becomes the so-called Fourier integral (or Fourier anti-transform)

    f(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \tilde{f}(k) \, e^{ikx} \, dk,    (2.11)

with

    \tilde{f}(k) = \int_{-\infty}^{+\infty} f(y) \, e^{-iky} \, dy    (2.12)

the Fourier transform of f(x). To prove Eqs. (2.11) and (2.12) we write Eq. (2.9) taking into account Eq. (2.10), and we find

    f(x) = \sum_{n=-\infty}^{+\infty} \frac{1}{L} \left( \int_{-L/2}^{L/2} f(y) \, e^{-i \frac{2\pi n}{L} y} \, dy \right) e^{i \frac{2\pi n}{L} x}.    (2.13)

Setting

    k_n = \frac{2\pi n}{L} \quad \text{and} \quad \Delta k = k_{n+1} - k_n = \frac{2\pi}{L},    (2.14)

the previous expression of f(x) becomes

    f(x) = \frac{1}{2\pi} \sum_{n=-\infty}^{+\infty} \left( \int_{-L/2}^{L/2} f(y) \, e^{-i k_n y} \, dy \right) e^{i k_n x} \, \Delta k.    (2.15)

In the limit L \to +\infty one has \Delta k \to dk, k_n \to k, and consequently

    f(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \left( \int_{-\infty}^{+\infty} f(y) \, e^{-iky} \, dy \right) e^{ikx} \, dk,    (2.16)

which gives exactly Eqs. (2.11) and (2.12). Note, however, that one gets the same result (2.16) if the Fourier integral and the Fourier transform are instead defined with a generic constant and its inverse, respectively, as prefactors.

Thus, we have found that any sufficiently regular complex function f(x) of real variable x which is globally integrable, i.e. such that

    \int_{-\infty}^{+\infty} |f(x)| \, dx < +\infty,    (2.17)

can be considered as the (infinite) superposition of complex monochromatic waves e^{ikx}. The amplitude \tilde{f}(k) of the monochromatic wave e^{ikx} is the Fourier transform of f(x). The Fourier transform \tilde{f}(k) of a function f(x) is sometimes denoted as F[f(x)](k), namely

    \tilde{f}(k) = F[f(x)](k) = \int_{-\infty}^{+\infty} f(x) \, e^{-ikx} \, dx.    (2.18)

    f(x)                 F[f(x)](k)
    -----------------    --------------------------------------------------
    1                    2\pi \, \delta(k)
    \delta(x)            1
    \Theta(x)            \frac{1}{ik} + \pi \, \delta(k)
    e^{i k_0 x}          2\pi \, \delta(k - k_0)
    e^{-x^2/(2a^2)}      a \sqrt{2\pi} \, e^{-a^2 k^2/2}
    e^{-a|x|}            \frac{2a}{a^2 + k^2}
    sgn(x)               \frac{2}{ik}
    \sin(k_0 x)          \frac{\pi}{i} \left[ \delta(k - k_0) - \delta(k + k_0) \right]
    \cos(k_0 x)          \pi \left[ \delta(k - k_0) + \delta(k + k_0) \right]

Table: Fourier transforms F[f(x)](k) of simple functions f(x), where \delta(x) is the Dirac delta function, sgn(x) is the sign function, and \Theta(x) is the Heaviside step function.
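The Gaussian row of the table can be verified numerically by truncating the integral of Eq. (2.18) to a finite interval and evaluating it with a Riemann sum. A rough sketch (the truncation limit, step count, and sample values of k are arbitrary choices of ours):

```python
import cmath
import math

def fourier_transform(f, k, x_max=30.0, n=60000):
    # Eq. (2.18) truncated to [-x_max, x_max], midpoint rule
    h = 2 * x_max / n
    total = 0j
    for i in range(n):
        x = -x_max + (i + 0.5) * h
        total += f(x) * cmath.exp(-1j * k * x)
    return total * h

a = 1.5
gauss = lambda x: math.exp(-x * x / (2 * a * a))

for k in (0.0, 1.0):
    numeric = fourier_transform(gauss, k)
    exact = a * math.sqrt(2 * math.pi) * math.exp(-a * a * k * k / 2)
    print(k, numeric.real, exact)  # numeric matches the table entry
```

The imaginary part of the numerical result vanishes (up to rounding), as it must for a real even f(x).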

The Fourier transform F[f(x)](k) has many interesting properties. For instance, due to the linearity of the integral, the Fourier transform is clearly a linear map:

    F[a f(x) + b g(x)](k) = a F[f(x)](k) + b F[g(x)](k).    (2.19)

Moreover, one finds immediately that

    F[f(x - a)](k) = e^{-ika} \, F[f(x)](k),    (2.20)

    F[e^{i k_0 x} f(x)](k) = F[f(x)](k - k_0),    (2.21)

    F[x f(x)](k) = i \, \tilde{f}'(k),    (2.22)

    F[f^{(n)}(x)](k) = (ik)^n \, \tilde{f}(k),    (2.23)

where f^{(n)}(x) is the n-th derivative of f(x) with respect to x.

In the Table we report the Fourier transforms F[f(x)](k) of some elementary functions f(x), including the Dirac delta function \delta(x) and the Heaviside step function \Theta(x). We also include the sign function sgn(x), defined as sgn(x) = 1 for x > 0 and sgn(x) = -1 for x < 0. The table of Fourier transforms clearly shows that the Fourier transform localizes functions which are delocalized, while it delocalizes functions which are localized. In fact, the Fourier transform of a constant is a Dirac delta function, while the Fourier transform of a Dirac delta function is a constant. In general, the following uncertainty theorem holds:

    \Delta x \, \Delta k \geq \frac{1}{2},    (2.24)

where

    \Delta x^2 = \int_{-\infty}^{+\infty} x^2 \, |f(x)|^2 \, dx - \left( \int_{-\infty}^{+\infty} x \, |f(x)|^2 \, dx \right)^2    (2.25)

and

    \Delta k^2 = \int_{-\infty}^{+\infty} k^2 \, |\tilde{f}(k)|^2 \, dk - \left( \int_{-\infty}^{+\infty} k \, |\tilde{f}(k)|^2 \, dk \right)^2    (2.26)

are the spreads of the wavepacket in the space x and in the dual space k, respectively (with f normalized so that the integral of |f(x)|^2 is one). This theorem is nothing else than the uncertainty principle of quantum mechanics formulated by Werner Heisenberg in 1927, where x is the position and k is the wavenumber. Another interesting and intuitive relationship is the Parseval identity, which with the convention (2.18) reads

    \int_{-\infty}^{+\infty} |f(x)|^2 \, dx = \frac{1}{2\pi} \int_{-\infty}^{+\infty} |\tilde{f}(k)|^2 \, dk.    (2.27)

It is important to stress that the power series, the Fourier series, and the Fourier integral are special cases of the quite general expansion

    f(x) = \int f_\alpha \, \phi_\alpha(x) \, d\alpha    (2.28)

of a generic function f(x) in terms of a set \phi_\alpha(x) of basis functions labeled by the parameter \alpha, which can be a discrete or a continuous variable. A large part of modern mathematical analysis is devoted to the study of Eq. (2.28) and its generalizations.
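The Parseval identity is easy to check on a concrete pair from the table: f(x) = e^{-|x|} has Fourier transform 2/(1 + k^2) (the table entry e^{-a|x|} with a = 1), and with the convention of Eq. (2.18) both sides of the identity equal 1. A quick numerical sketch, where the truncation limits and grid size are our own choices:

```python
import math

def midpoint_integral(g, a, b, n=400000):
    # simple midpoint-rule quadrature on [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# f(x) = e^{-|x|}  =>  |f(x)|^2 = e^{-2|x|};  tilde f(k) = 2/(1 + k^2)
lhs = midpoint_integral(lambda x: math.exp(-2 * abs(x)), -20.0, 20.0)
rhs = midpoint_integral(lambda k: (2.0 / (1.0 + k * k)) ** 2,
                        -2000.0, 2000.0) / (2 * math.pi)
print(lhs, rhs)  # both ~ 1
```

The k-side integrand decays only like 1/k^4, which is why its truncation window is taken so much wider than the x-side one.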

The Fourier transform is often used in electronics. In that field the signal amplitude f depends on time t, i.e. f = f(t). In this case the dual variable of the time t is the frequency \omega, and the Fourier integral is usually written as

    f(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \tilde{f}(\omega) \, e^{-i\omega t} \, d\omega,    (2.29)

with

    \tilde{f}(\omega) = F[f(t)](\omega) = \int_{-\infty}^{+\infty} f(t) \, e^{i\omega t} \, dt    (2.30)

the Fourier transform of f(t). Clearly, the function f(t) can be seen as the Fourier anti-transform of \tilde{f}(\omega); in symbols,

    f(t) = F^{-1}[\tilde{f}(\omega)](t) = F^{-1}[F[f(t)](\omega)](t),    (2.31)

which obviously means that the composition F^{-1} \circ F gives the identity.

More generally, if the signal f depends on the 3 spatial coordinates \mathbf{r} = (x, y, z) and on time t, i.e. f = f(\mathbf{r}, t), one can introduce Fourier transforms from \mathbf{r} to \mathbf{k}, from t to \omega, or both. In this latter case one obviously obtains

    f(\mathbf{r}, t) = \frac{1}{(2\pi)^4} \int_{\mathbb{R}^4} \tilde{f}(\mathbf{k}, \omega) \, e^{i(\mathbf{k} \cdot \mathbf{r} - \omega t)} \, d^3k \, d\omega,    (2.32)

with

    \tilde{f}(\mathbf{k}, \omega) = F[f(\mathbf{r}, t)](\mathbf{k}, \omega) = \int_{\mathbb{R}^4} f(\mathbf{r}, t) \, e^{-i(\mathbf{k} \cdot \mathbf{r} - \omega t)} \, d^3r \, dt.    (2.33)

Also in this general case the function f(\mathbf{r}, t) can be seen as the Fourier anti-transform of \tilde{f}(\mathbf{k}, \omega); in symbols,

    f(\mathbf{r}, t) = F^{-1}[\tilde{f}(\mathbf{k}, \omega)](\mathbf{r}, t) = F^{-1}[F[f(\mathbf{r}, t)](\mathbf{k}, \omega)](\mathbf{r}, t).    (2.34)
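The statement that F^{-1} \circ F gives the identity has an exact discrete analogue, which is what signal-processing software actually computes. The sketch below is our own illustration: a naive O(N^2) discrete transform pair, with the same sign convention as Eqs. (2.29)-(2.30) (plus sign in the forward transform), round-tripping a small test signal.

```python
import cmath

def dft(a):
    # forward transform, sign convention of Eq. (2.30):
    # A_m = sum_n a_n exp(+2*pi*i*m*n/N)
    N = len(a)
    return [sum(a[n] * cmath.exp(2j * cmath.pi * m * n / N) for n in range(N))
            for m in range(N)]

def idft(A):
    # inverse transform, discrete analogue of Eq. (2.29), with 1/N in the inverse:
    # a_n = (1/N) sum_m A_m exp(-2*pi*i*m*n/N)
    N = len(A)
    return [sum(A[m] * cmath.exp(-2j * cmath.pi * m * n / N) for m in range(N)) / N
            for n in range(N)]

signal = [1.0, 2.0, 0.0, -1.0]
roundtrip = idft(dft(signal))
print(roundtrip)  # recovers the original signal up to rounding error
```

Production code would use a fast Fourier transform (FFT) instead, which computes the same sums in O(N log N) operations; note also that FFT libraries differ in where they put the sign and the 1/N factor.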


Chapter 3
Laplace Transform

The Laplace transform is an integral transformation, similar but not equal to the Fourier transform, introduced in 1737 by Leonhard Euler and independently in 1782 by Pierre-Simon de Laplace. Nowadays the Laplace transform is mainly used to solve non-homogeneous ordinary differential equations with constant coefficients.

Given a sufficiently regular function f(t) of time t, the Laplace transform of f(t) is the function F(s) such that

    F(s) = \int_0^{+\infty} f(t) \, e^{-st} \, dt,    (3.1)

where s is a complex number. Usually the integral converges if the real part Re(s) of the complex number s is greater than a critical real number x_c, which is called the abscissa of convergence and unfortunately depends on f(t). The Laplace transform F(s) of a function f(t) is sometimes denoted as L[f(t)](s), namely

    F(s) = L[f(t)](s) = \int_0^{+\infty} f(t) \, e^{-st} \, dt.    (3.2)

For the sake of completeness and clarity, we also write the Fourier transform \tilde{f}(\omega), denoted as F[f(t)](\omega), of the same function:

    \tilde{f}(\omega) = F[f(t)](\omega) = \int_{-\infty}^{+\infty} f(t) \, e^{i\omega t} \, dt.    (3.3)

First of all we notice that the Laplace transform depends only on the behavior of f(t) for non-negative values of t, while the Fourier transform depends also on the behavior of f(t) for negative values of t. This is however not a big problem, because we can set f(t) = 0 for t < 0 (or, equivalently, we can multiply f(t) by the Heaviside step function \Theta(t)), and then the Fourier transform becomes

    \tilde{f}(\omega) = F[f(t)](\omega) = \int_0^{+\infty} f(t) \, e^{i\omega t} \, dt.    (3.4)
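The definition (3.2) can be probed numerically on a standard example, f(t) = cos(at), whose Laplace transform is the well-known closed form s/(s^2 + a^2) for Re(s) > 0. A rough sketch (the truncation time, step count, and sample point s are arbitrary choices of ours):

```python
import cmath
import math

def laplace(f, s, t_max=200.0, n=400000):
    # Eq. (3.2) truncated to [0, t_max], midpoint rule;
    # needs Re(s) > 0 so the truncated tail is negligible
    h = t_max / n
    total = 0j
    for i in range(n):
        t = (i + 0.5) * h
        total += f(t) * cmath.exp(-s * t)
    return total * h

a = 2.0
s = 0.5 + 1.0j  # to the right of the abscissa of convergence x_c = 0
numeric = laplace(lambda t: math.cos(a * t), s)
exact = s / (s * s + a * a)  # known closed form for L[cos(at)](s)
print(numeric, exact)
```

Note that s is genuinely complex here: the quadrature works the same way, only with complex exponentials.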

Moreover, it is important to observe that, comparing Eq. (3.2) with Eq. (3.4), if both the Fourier and the Laplace transform of f(t) exist, we obtain

    F(s) = L[f(t)](s) = F[f(t)](is) = \tilde{f}(is),    (3.5)

or equivalently

    \tilde{f}(\omega) = F[f(t)](\omega) = L[f(t)](-i\omega) = F(-i\omega).    (3.6)

Remember that \omega is a real variable while s is a complex variable. Thus we have found that, for a generic function f(t) such that f(t) = 0 for t < 0, the Laplace transform F(s) and the Fourier transform \tilde{f}(\omega) are simply related to each other.

In the following Table we report the Laplace transforms L[f(t)](s) of some elementary functions f(t), including the Dirac delta function \delta(t) and the Heaviside step function \Theta(t), ignoring possible issues of regularity.

    f(t)                 L[f(t)](s)
    -----------------    ------------------------
    1                    \frac{1}{s}
    \delta(t - \tau)     e^{-\tau s}
    \Theta(t - \tau)     \frac{e^{-\tau s}}{s}
    t^n                  \frac{n!}{s^{n+1}}
    e^{-a t}             \frac{1}{s + a}
    e^{-a |t|}           \frac{2a}{a^2 - s^2}
    \sin(a t)            \frac{a}{s^2 + a^2}
    \cos(a t)            \frac{s}{s^2 + a^2}

Table: Laplace transforms L[f(t)](s) of simple functions f(t), where \delta(t) is the Dirac delta function and \Theta(t) is the Heaviside step function, with \tau > 0 and n a positive integer.

We now show that, writing f(t) as the Fourier anti-transform of \tilde{f}(\omega), one can deduce the formula for the Laplace anti-transform of F(s). In fact, one has

    f(t) = F^{-1}[\tilde{f}(\omega)](t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \tilde{f}(\omega) \, e^{-i\omega t} \, d\omega.    (3.7)

Because \tilde{f}(\omega) = F(-i\omega), one finds also

    f(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} F(-i\omega) \, e^{-i\omega t} \, d\omega.    (3.8)

Using s = -i\omega as integration variable, this integral representation becomes

    f(t) = \frac{1}{2\pi i} \int_{-i\infty}^{+i\infty} F(s) \, e^{st} \, ds,    (3.9)

where the integration is now a contour integral along any path \gamma in the complex plane of the variable s which starts at s = -i\infty and ends at s = +i\infty. What we have found is exactly the Laplace anti-transform of F(s); in symbols,

    f(t) = L^{-1}[F(s)](t) = \frac{1}{2\pi i} \int_{-i\infty}^{+i\infty} F(s) \, e^{st} \, ds.    (3.10)

Remember that this function is such that f(t) = 0 for t < 0. The fact that the function f(t) is the Laplace anti-transform of F(s) can be symbolized by

    f(t) = L^{-1}[F(s)](t) = L^{-1}[L[f(t)](s)](t),    (3.11)

which means that the composition L^{-1} \circ L gives the identity.

The Laplace transform L[f(t)](s) has many interesting properties. For instance, due to the linearity of the integral, the Laplace transform is clearly a linear map:

    L[a f(t) + b g(t)](s) = a L[f(t)](s) + b L[g(t)](s).    (3.12)

Moreover, one finds immediately that

    L[f(t - a) \, \Theta(t - a)](s) = e^{-as} \, L[f(t)](s),    (3.13)

    L[e^{at} f(t)](s) = L[f(t)](s - a).    (3.14)

For the solution of non-homogeneous ordinary differential equations with constant coefficients, the most important property of the Laplace transform is the following:

    L[f^{(n)}(t)](s) = s^n L[f(t)](s) - s^{n-1} f(0) - s^{n-2} f^{(1)}(0) - ... - s f^{(n-2)}(0) - f^{(n-1)}(0),    (3.15)

where f^{(n)}(t) is the n-th derivative of f(t) with respect to t. For instance, in the simple cases n = 1 and n = 2 one has

    L[f'(t)](s) = s F(s) - f(0),    (3.16)

    L[f''(t)](s) = s^2 F(s) - s f(0) - f'(0),    (3.17)

with F(s) = L[f(t)](s). The proof of Eq. (3.15) is straightforward, performing integration by parts. Let us prove, for instance, Eq. (3.16):

    L[f'(t)](s) = \int_0^{+\infty} f'(t) \, e^{-st} \, dt
                = \left[ f(t) \, e^{-st} \right]_0^{+\infty} - \int_0^{+\infty} f(t) \, \frac{d}{dt}\left( e^{-st} \right) dt
                = -f(0) + s \int_0^{+\infty} f(t) \, e^{-st} \, dt
                = -f(0) + s \, L[f(t)](s) = -f(0) + s F(s).    (3.18)

We now give a simple example of the Laplace method to solve ordinary differential equations, by considering the differential problem

    f'(t) + f(t) = 1 \quad \text{with} \quad f(0) = 2.    (3.19)

We apply the Laplace transform to both sides of the differential problem,

    L[f'(t) + f(t)](s) = L[1](s),    (3.20)

obtaining

    s F(s) - 2 + F(s) = \frac{1}{s}.    (3.21)

This is now an algebraic problem with solution

    F(s) = \frac{1}{s(s+1)} + \frac{2}{s+1} = \frac{1}{s} - \frac{1}{s+1} + \frac{2}{s+1} = \frac{1}{s} + \frac{1}{s+1}.    (3.22)

Finally, we apply the Laplace anti-transform:

    f(t) = L^{-1}\left[ \frac{1}{s} + \frac{1}{s+1} \right](t) = L^{-1}\left[ \frac{1}{s} \right](t) + L^{-1}\left[ \frac{1}{s+1} \right](t).    (3.23)

By using our Table of Laplace transforms we find immediately the solution

    f(t) = 1 + e^{-t}.    (3.24)

The Laplace transform can also be used to solve integral equations. In fact, one finds

    L\left[ \int_0^t f(y) \, dy \right](s) = \frac{1}{s} F(s),    (3.25)

and, more generally,

    L\left[ \int_0^t f(y) \, g(t - y) \, dy \right](s) = F(s) \, G(s),    (3.26)

with G(s) = L[g(t)](s). Notice that the integral which appears in Eq. (3.26) is, by definition, the convolution (f * g)(t) of the two functions, i.e.

    (f * g)(t) = \int_0^t f(y) \, g(t - y) \, dy.    (3.27)
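The solution f(t) = 1 + e^{-t} obtained by the Laplace method can be cross-checked by integrating the initial-value problem (3.19) directly. The sketch below (our own illustration; helper names and step counts are arbitrary) applies the classical fourth-order Runge-Kutta scheme to f'(t) = 1 - f(t) with f(0) = 2 and compares with the closed-form answer:

```python
import math

def rk4_solve(t_end, f0=2.0, steps=1000):
    # integrate f'(t) = 1 - f(t) (Eq. (3.19) rearranged) with classical RK4
    rhs = lambda f: 1.0 - f
    h = t_end / steps
    f = f0
    for _ in range(steps):
        k1 = rhs(f)
        k2 = rhs(f + 0.5 * h * k1)
        k3 = rhs(f + 0.5 * h * k2)
        k4 = rhs(f + h * k3)
        f += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return f

for t in (0.5, 1.0, 2.0):
    print(t, rk4_solve(t), 1.0 + math.exp(-t))  # the two columns agree
```

The agreement to many digits illustrates the practical appeal of the Laplace method: it converts the differential problem into the purely algebraic steps (3.20)-(3.23).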