Lecture Notes on PDE's: Separation of Variables and Orthogonality


Richard H. Rand
Dept. Theoretical & Applied Mechanics, Cornell University, Ithaca NY

version 15
Copyright 2008 by Richard H. Rand

Contents

1  Three Problems
2  The Laplacian \nabla^2 in three coordinate systems
3  Solution to Problem A by Separation of Variables
4  Solving Problem B by Separation of Variables
5  Euler's Differential Equation
6  Power Series Solutions
7  The Method of Frobenius
8  Ordinary Points and Singular Points
9  Solving Problem B by Separation of Variables, continued
10 Orthogonality
11 Sturm-Liouville Theory
12 Solving Problem B by Separation of Variables, concluded
13 Solving Problem C by Separation of Variables

1  Three Problems

We will use the following three problems in steady state heat conduction to motivate our study of a variety of math methods:

Problem A: Heat conduction in a cube

    \nabla^2 u = 0   for 0 < x < L, 0 < y < L, 0 < z < L    (1)

with the assumption that u = u(x, z) only (that is, no y dependence), and with the boundary conditions:

    u = 0   on x = 0, L    (2)
    u = 0   on z = 0    (3)
    u = 1   on z = L    (4)

Problem B: Heat conduction in a circular cylinder

    \nabla^2 u = 0   for 0 < r < a, 0 < z < L    (5)

with the assumption that u = u(r, z) only (that is, no \theta dependence), and with the boundary conditions:

    u = 0   on r = a    (6)
    u = 0   on z = 0    (7)
    u = 1   on z = L    (8)

Problem C: Heat conduction in a sphere

    \nabla^2 u = 0   for 0 < \rho < a    (9)

with the assumption that u = u(\rho, \phi) only (that is, no \theta dependence), and with the boundary conditions:

    u = 0   on \rho = a,  \pi/2 \le \phi \le \pi    (10)
    u = 1   on \rho = a,  0 \le \phi < \pi/2    (11)

Here \phi is the colatitude and \theta is the longitude.


2  The Laplacian \nabla^2 in three coordinate systems

Rectangular coordinates:

    \nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}    (12)

Circular cylindrical coordinates:

    \nabla^2 u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r} \frac{\partial u}{\partial r} + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2} + \frac{\partial^2 u}{\partial z^2}    (13)

where

    x = r \cos\theta,   y = r \sin\theta,   that is,   r^2 = x^2 + y^2    (14)

and where

    0 \le \theta < 2\pi    (15)

Spherical coordinates:

    \nabla^2 u = \frac{1}{\rho^2} \left[ \frac{\partial}{\partial \rho}\left( \rho^2 \frac{\partial u}{\partial \rho} \right) + \frac{1}{\sin\phi} \frac{\partial}{\partial \phi}\left( \sin\phi \, \frac{\partial u}{\partial \phi} \right) + \frac{1}{\sin^2\phi} \frac{\partial^2 u}{\partial \theta^2} \right]    (16)

where

    x = \rho \sin\phi \cos\theta,   y = \rho \sin\phi \sin\theta,   z = \rho \cos\phi,   that is,   \rho^2 = x^2 + y^2 + z^2    (17)

and where

    0 \le \theta < 2\pi,   0 \le \phi \le \pi    (18)

3  Solution to Problem A by Separation of Variables

In this section we solve Problem A by separation of variables. This is intended as a review of work that you have studied in a previous course.

We seek a solution to the PDE (1) (see eq.(12)) in the form

    u(x, z) = X(x) Z(z)    (19)

Substitution of (19) into (12) gives:

    X'' Z + X Z'' = 0    (20)

where primes represent differentiation with respect to the argument, that is, X' means dX/dx whereas Z' means dZ/dz. Separating variables, we obtain

    \frac{Z''}{Z} = -\frac{X''}{X} = \lambda    (21)

where the two expressions have been set equal to the constant \lambda because they are functions of the independent variables x and z, and the only way these can be equal is if they are both constants. This yields two ODE's:

    X'' + \lambda X = 0   and   Z'' - \lambda Z = 0    (22)

Substituting the ansatz (19) into the boundary conditions (=B.C.) (2) and (3), we find:

    X(0) = 0,   X(L) = 0   and   Z(0) = 0    (23)

The general solution of the X equation in (22) is

    X(x) = c_1 \cos\sqrt{\lambda}\, x + c_2 \sin\sqrt{\lambda}\, x    (24)

where c_1 and c_2 are arbitrary constants. The first B.C. of (23), X(0) = 0, gives c_1 = 0. The second B.C. of (23), X(L) = 0, gives \sqrt{\lambda} = n\pi/L, for n = 1, 2, 3, ...:

    \lambda = \frac{n^2 \pi^2}{L^2},   X(x) = c_2 \sin\frac{n\pi x}{L},   n = 1, 2, 3, ...    (25)

The general solution of the Z equation in (22) can be written in either of the equivalent forms:

    Z(z) = c_3 \cosh\sqrt{\lambda}\, z + c_4 \sinh\sqrt{\lambda}\, z    (26)

or

    Z(z) = c_5 e^{\sqrt{\lambda}\, z} + c_6 e^{-\sqrt{\lambda}\, z}    (27)

where \lambda = n^2\pi^2/L^2 and where

    \cosh v = \frac{e^v + e^{-v}}{2}   and   \sinh v = \frac{e^v - e^{-v}}{2}    (28)

Choosing the form (26), the third B.C. of (23), Z(0) = 0, gives c_3 = 0. Substituting the derived results into the assumed form of the solution (19), we have

    u(x, z) = X(x) Z(z) = a_n \sin\frac{n\pi x}{L} \sinh\frac{n\pi z}{L},   n = 1, 2, 3, ...    (29)

where a_n = c_2 c_4 is an arbitrary constant. Since the PDE (1) is linear, we may superimpose solutions to obtain the form:

    u(x, z) = \sum_{n=1}^{\infty} a_n \sin\frac{n\pi x}{L} \sinh\frac{n\pi z}{L}    (30)

We still have to satisfy the B.C. (4), u(x, L) = 1, which gives (from (30)):

    1 = \sum_{n=1}^{\infty} a_n \sin\frac{n\pi x}{L} \sinh n\pi    (31)

Eq.(31) is a Fourier series. We may obtain the values of the constants a_n by using the orthogonality of the eigenfunctions \sin(n\pi x/L) on the interval 0 < x < L:

    \int_0^L \sin\frac{n\pi x}{L} \sin\frac{m\pi x}{L}\, dx = 0,   n \ne m    (32)

where n and m are integers. Eq.(32) follows from the trig identity:

    \sin\frac{n\pi x}{L} \sin\frac{m\pi x}{L} = -\frac{1}{2}\cos\frac{(n+m)\pi x}{L} + \frac{1}{2}\cos\frac{(n-m)\pi x}{L}    (33)

Substituting (33) into (32), the cosines integrate to sines and vanish at the upper and lower limits if n \ne m. In the case that n = m, we have

    \int_0^L \left( -\frac{1}{2}\cos\frac{(n+m)\pi x}{L} + \frac{1}{2}\cos\frac{(n-m)\pi x}{L} \right) dx = \int_0^L \left( -\frac{1}{2}\cos\frac{2m\pi x}{L} + \frac{1}{2} \right) dx = \frac{L}{2}    (34)

We return now to eq.(31) and use (32) and (34) to obtain the coefficients a_n. Multiplying (31) by \sin(m\pi x/L) and integrating from 0 to L, we obtain:

    \int_0^L \sin\frac{m\pi x}{L}\, dx = a_m \frac{L}{2} \sinh m\pi    (35)

Changing the index from m to n, this gives

    a_n \frac{L}{2} \sinh n\pi = \int_0^L \sin\frac{n\pi x}{L}\, dx = \left[ -\frac{L}{n\pi} \cos\frac{n\pi x}{L} \right]_0^L = \begin{cases} \dfrac{2L}{n\pi} & \text{for } n \text{ odd} \\ 0 & \text{for } n \text{ even} \end{cases}    (36)

whereupon we obtain the following solution u(x, z) from eqs.(30) and (36):

    u(x, z) = \sum_{n=1,3,5,...} \frac{4}{n\pi} \frac{\sinh(n\pi z/L)}{\sinh n\pi} \sin\frac{n\pi x}{L}    (37)
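
The series (37) is straightforward to evaluate numerically. The following is a minimal Python sketch (the function name, the choice L = 1, and the number of retained terms are illustrative, not part of the notes) that sums the odd-n terms of (37) and spot-checks the boundary conditions:

```python
import numpy as np

def u_cube(x, z, L=1.0, n_terms=50):
    """Partial sum of eq.(37), the separation-of-variables solution of Problem A."""
    total = 0.0
    for n in range(1, 2 * n_terms, 2):        # odd n only, as in (37)
        total += (4.0 / (n * np.pi)) * np.sinh(n * np.pi * z / L) \
                 / np.sinh(n * np.pi) * np.sin(n * np.pi * x / L)
    return total

# u should vanish on z = 0 and approach 1 on z = L (eqs.(3),(4))
print(u_cube(0.5, 0.0), u_cube(0.5, 1.0), u_cube(0.25, 1.0))
```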

4  Solving Problem B by Separation of Variables

Problem B has the PDE (see (5) and (13)):

    \nabla^2 u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r} \frac{\partial u}{\partial r} + \frac{\partial^2 u}{\partial z^2} = 0    (38)

Following the procedure we used on Problem A, we seek a solution to the PDE (38) in the form

    u(r, z) = R(r) Z(z)    (39)

Substitution of (39) into (38) gives:

    R'' Z + \frac{1}{r} R' Z + R Z'' = 0    (40)

where again primes represent differentiation with respect to the argument. Separating variables, we obtain

    \frac{Z''}{Z} = -\frac{R'' + \frac{1}{r} R'}{R} = \lambda    (41)

This yields two ODE's:

    R'' + \frac{1}{r} R' + \lambda R = 0   and   Z'' - \lambda Z = 0    (42)

Substituting the ansatz (39) into the B.C. (6) and (7), we find:

    R(a) = 0   and   Z(0) = 0    (43)

Now if we were to continue to follow the procedure we used on Problem A, we would solve the R equation of (42) and use the first B.C. of (43) to find \lambda, and so on. However, the R equation has a variable coefficient, namely in the (1/r) R' term. Thus we must digress and find out how to solve such ODE's before we can continue with the solution of Problem B.

5  Euler's Differential Equation

The simplest case of a linear variable coefficient second order ODE is Euler's equation:

    a x^2 \frac{d^2 y}{dx^2} + b x \frac{dy}{dx} + c y = 0    (44)

We look for a solution with the ansatz:

    y = x^r    (45)

Substitution of (45) into (44) gives

    a r(r-1) + b r + c = 0,   that is,   a r^2 + (b-a) r + c = 0    (46)

We may use the quadratic formula to obtain (in general) a pair of complex conjugate roots r_1 and r_2. Thus the general solution may be written

    y = c_1 x^{r_1} + c_2 x^{r_2}    (47)

If the roots are both real, then eq.(47) suffices. However in the general case in which r_1 and r_2 are complex, say r_1 = \mu + i\nu, we obtain the form

    y = c_1 x^{\mu + i\nu} + c_2 x^{\mu - i\nu} = x^{\mu} (c_1 x^{i\nu} + c_2 x^{-i\nu})    (48)

Using Euler's formula, e^{i\theta} = \cos\theta + i\sin\theta, and the identity x = e^{\log x}, we obtain the real form

    y = x^{\mu} (c_1 e^{i\nu \log x} + c_2 e^{-i\nu \log x}) = x^{\mu} (c_3 \cos(\nu \log x) + c_4 \sin(\nu \log x))    (49)

where c_3 and c_4 are real arbitrary constants, and where \log x stands for the natural logarithm. In the case that the roots are repeated, r_1 = r_2 = r, the general solution to (44) is

    y = c_1 x^r + c_2 x^r \log x    (50)
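
As a quick check of eqs.(46) and (50), here is a small sympy sketch. The particular Euler equation x^2 y'' - x y' + y = 0 is an illustrative choice, not one of the examples in the notes; its indicial equation has the repeated root r = 1, so both x^r and x^r log x should solve it.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
r = sp.symbols('r')
a, b, c = 1, -1, 1                                    # x^2 y'' - x y' + y = 0

roots = sp.solve(a*r*(r - 1) + b*r + c, r)            # indicial equation, eq.(46)
print(roots)                                          # [1] -- a repeated root

# eq.(50): with a repeated root, both x^r and x^r*log(x) satisfy the ODE
for y in (x**roots[0], x**roots[0] * sp.log(x)):
    residual = a*x**2*sp.diff(y, x, 2) + b*x*sp.diff(y, x) + c*y
    print(sp.simplify(residual))                      # 0 in both cases
```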

6  Power Series Solutions

So now we know how to solve Euler's equation. What about linear differential equations with variable coefficients which are not in the form of Euler's equation? A natural approach would be to look for the solution in the form of a power series:

    y = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + ... + c_n x^n + ...    (51)

where the coefficients c_i are to be found. The method is best illustrated with an example.

Example 1

    \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + y = 0    (52)

We substitute (51) into (52)

    2c_2 + 6c_3 x + 12c_4 x^2 + ... + x(c_1 + 2c_2 x + 3c_3 x^2 + ...) + c_0 + c_1 x + c_2 x^2 + c_3 x^3 + ... = 0    (53)

and collect terms:

    2c_2 + c_0 + x(6c_3 + 2c_1) + x^2(12c_4 + 3c_2) + x^3(20c_5 + 4c_3) + ... = 0    (54)

Next we require the coefficient of each power of x^n to vanish, giving:

    2c_2 + c_0 = 0    (55)
    6c_3 + 2c_1 = 0    (56)
    12c_4 + 3c_2 = 0    (57)
    20c_5 + 4c_3 = 0    (58)

Note that eqs.(55)-(58) can be abbreviated by the single recurrence relation:

    (n+2)(n+1) c_{n+2} + (n+1) c_n = 0,   n = 0, 1, 2, 3, ...    (59)

which may be written in the form

    c_{n+2} = -\frac{c_n}{n+2},   n = 0, 1, 2, 3, ...    (60)

The nature of the recurrence relation (60) is that c_0 and c_1 can be chosen arbitrarily, after which all the other c_i's will be determined in terms of c_0 and c_1. This leads to the following expression for the general solution of eq.(52):

    y = c_0 f(x) + c_1 g(x)    (61)

where

    f(x) = 1 - \frac{x^2}{2} + \frac{x^4}{2 \cdot 4} - \frac{x^6}{2 \cdot 4 \cdot 6} + ...    (62)

    g(x) = x - \frac{x^3}{3} + \frac{x^5}{3 \cdot 5} - \frac{x^7}{3 \cdot 5 \cdot 7} + ...    (63)

The method of power series has worked great on Example 1. Let's try it on Example 2:

Example 2

    2x^2 \frac{d^2 y}{dx^2} + 3x \frac{dy}{dx} + (-1 + x) y = 0    (64)

We substitute (51) into (64)

    2(2c_2 x^2 + 6c_3 x^3 + 12c_4 x^4 + ...) + 3x(c_1 + 2c_2 x + 3c_3 x^2 + ...) + (-1 + x)(c_0 + c_1 x + c_2 x^2 + c_3 x^3 + ...) = 0    (65)

and collect terms:

    -c_0 = 0    (66)
    2c_1 + c_0 = 0    (67)
    9c_2 + c_1 = 0    (68)
    20c_3 + c_2 = 0    (69)

This time eqs.(67)-(69) have the recurrence relation:

    c_{n+1} = -\frac{c_n}{(2n+1)(n+2)},   n = 0, 1, 2, 3, ...    (70)

But since eq.(66) requires that c_0 = 0, we find from (70) that all the c_i's must vanish. In other words, the method of power series has failed to produce a solution for Example 2, eq.(64). This raises two questions:

1. How can we obtain a solution to eq.(64)?, and
2. For which class of equations will the method of power series work?

We answer the first question in the next section by replacing the method of power series by a more general method called the method of Frobenius. We put off answering the second question until later.
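
Recurrences like (60) are convenient to iterate numerically. The following short Python sketch (the function name and number of coefficients are illustrative) generates the coefficients of (51) for Example 1 from the recurrence (60), so that they can be compared against the series (62) and (63):

```python
from fractions import Fraction

def example1_coeffs(c0, c1, n_max=8):
    """Coefficients c_0..c_n of eq.(51) for Example 1 via eq.(60): c_{n+2} = -c_n/(n+2)."""
    c = [Fraction(c0), Fraction(c1)]
    for n in range(n_max - 1):
        c.append(-c[n] / (n + 2))
    return c

print(example1_coeffs(1, 0))   # f(x): 1, 0, -1/2, 0, 1/8, 0, -1/48, ...    (cf. eq.(62))
print(example1_coeffs(0, 1))   # g(x): 0, 1, 0, -1/3, 0, 1/15, 0, -1/105, ... (cf. eq.(63))
```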

7  The Method of Frobenius

The method of Frobenius generalizes the method of power series by seeking a solution in the form of a "generalized power series":

    y = x^r (c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + ... + c_n x^n + ...)    (71)

Note that eq.(71) reduces to the power series (51) if r = 0. Let's try this method on Example 2:

Example 2, continued

    2x^2 \frac{d^2 y}{dx^2} + 3x \frac{dy}{dx} + (-1 + x) y = 0    (72)

We substitute (71) into (72) and collect terms:

    (2r^2 + r - 1) c_0 x^r + [(2r^2 + 5r + 2) c_1 + c_0] x^{r+1} + [(2r^2 + 9r + 9) c_2 + c_1] x^{r+2} + ... = 0    (73)

    (2r^2 + r - 1) c_0 = 0    (74)
    (2r^2 + 5r + 2) c_1 + c_0 = 0    (75)
    (2r^2 + 9r + 9) c_2 + c_1 = 0    (76)
    [2r^2 + (4n+1) r + (2n^2 + n - 1)] c_n + c_{n-1} = 0    (77)

Eqs.(75)-(77) have the recurrence relation:

    c_n = -\frac{c_{n-1}}{(2r + 2n - 1)(r + n + 1)},   n = 1, 2, 3, ...    (78)

In addition to (78), we also must satisfy, from (74), the indicial equation:

    2r^2 + r - 1 = 0   \Rightarrow   r = -1, \tfrac{1}{2}    (79)

We consider each of these r-values separately, taking c_0 = 1 for each:

For r = -1 we get:

    y = f(x) = \frac{1}{x} + 1 - \frac{x}{2} + \frac{x^2}{18} - \frac{x^3}{360} + \frac{x^4}{12600} - \frac{x^5}{680400} + ...    (80)

For r = \tfrac{1}{2} we get:

    y = g(x) = \sqrt{x} \left( 1 - \frac{x}{5} + \frac{x^2}{70} - \frac{x^3}{1890} + ... \right)    (81)

The general solution of Example 2, eq.(72), is thus given by

    y = k_1 f(x) + k_2 g(x)    (82)

where k_1 and k_2 are arbitrary constants.

Next, let's try the method of Frobenius on Example 3:

Example 3

    x^3 \frac{d^2 y}{dx^2} + y = 0    (83)

We substitute (71) into (83) and collect terms:

    c_0 x^r + [(r^2 - r) c_0 + c_1] x^{r+1} + [(r^2 + r) c_1 + c_2] x^{r+2} + [(r^2 + 3r + 2) c_2 + c_3] x^{r+3} + ... = 0    (84)

    c_0 = 0    (85)
    (r^2 - r) c_0 + c_1 = 0    (86)
    (r^2 + r) c_1 + c_2 = 0    (87)
    (r^2 + 3r + 2) c_2 + c_3 = 0    (88)
    (r + n)(r + n - 1) c_n + c_{n+1} = 0    (89)

Eqs.(86)-(89) have the recurrence relation:

    c_{n+1} = -(r + n)(r + n - 1) c_n,   n = 0, 1, 2, 3, ...    (90)

But since eq.(85) requires that c_0 = 0, we find from (90) that all the c_i's must vanish. In other words, the method of Frobenius has failed to produce a solution for Example 3, eq.(83).

Summarizing, we have seen that for some equations the method of power series works (Example 1), whereas for other equations that method fails but the more general method of Frobenius works (Example 2). Now we have seen an example in which the method of Frobenius fails (Example 3). We need a classification scheme which will tell us which equations we can be guaranteed to solve using these two methods.
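
The Frobenius coefficients for Example 2 can likewise be generated mechanically from the recurrence (78). This is a minimal sketch (the function name and the number of terms are illustrative) that reproduces the coefficients appearing in the series (80) and (81) for the two indicial roots of (79):

```python
from fractions import Fraction

def frobenius_coeffs(r, n_max=5):
    """c_0..c_n of the Frobenius series (71) for Example 2, from eq.(78) with c_0 = 1."""
    c = [Fraction(1)]
    for n in range(1, n_max + 1):
        c.append(-c[-1] / ((2*Fraction(r) + 2*n - 1) * (Fraction(r) + n + 1)))
    return c

print(frobenius_coeffs(-1))              # 1, 1, -1/2, 1/18, -1/360, 1/12600  (cf. eq.(80))
print(frobenius_coeffs(Fraction(1, 2)))  # 1, -1/5, 1/70, -1/1890, ...        (cf. eq.(81))
```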

8  Ordinary Points and Singular Points

Let's consider the general class of linear second order ODE's of the form:

    A(x) y'' + B(x) y' + C(x) y = 0    (91)

in which A(x), B(x) and C(x) have power series expansions about x = 0:

    A(x) = A_0 + A_1 x + A_2 x^2 + ...    (92)
    B(x) = B_0 + B_1 x + B_2 x^2 + ...    (93)
    C(x) = C_0 + C_1 x + C_2 x^2 + ...    (94)

This includes the possibility that A(x), B(x) and C(x) are polynomials.

Definition: If A_0 \ne 0 then x = 0 is called an ordinary point. If A_0 = 0 and not both B_0 and C_0 are zero, then x = 0 is called a singular point.

Now suppose that x = 0 is a singular point. Then let p(x) be defined by

    p(x) = x \frac{B(x)}{A(x)} = \frac{B_0 + B_1 x + B_2 x^2 + ...}{A_1 + A_2 x + A_3 x^2 + ...}    (95)

and let q(x) be defined by

    q(x) = x^2 \frac{C(x)}{A(x)} = x \, \frac{C_0 + C_1 x + C_2 x^2 + ...}{A_1 + A_2 x + A_3 x^2 + ...}    (96)

Definition: Let x = 0 be a singular point. If p(x) and q(x) don't blow up as x approaches zero, then x = 0 is called a regular singular point. A singular point which is not regular is called an irregular singular point.

These definitions will help us to determine whether the method of power series or the method of Frobenius will work to solve a given equation of the form (91).

Rule 1: If x = 0 is an ordinary point, then two linearly independent solutions can be obtained by the method of power series expansions about x = 0.

Rule 2: If x = 0 is a regular singular point, then at least one solution can be obtained by the method of Frobenius expanded about x = 0.

As an example, consider Example 1, eq.(52):

    \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + y = 0    (97)

In this case A(x) = 1 and x = 0 is an ordinary point. According to Rule 1, two independent solutions can be obtained in the form of power series, see eqs.(62) and (63).

Consider now Example 2, eq.(72):

    2x^2 \frac{d^2 y}{dx^2} + 3x \frac{dy}{dx} + (-1 + x) y = 0    (98)

In this case A_0 = 0 and C_0 = -1 so that x = 0 is a singular point. The quantities p(x) and q(x) are given by

    p(x) = x \frac{B(x)}{A(x)} = \frac{3}{2}   and   q(x) = x^2 \frac{C(x)}{A(x)} = \frac{-1 + x}{2}    (99)

and since neither p(x) nor q(x) blows up as x \to 0, we see that x = 0 is a regular singular point. Then by Rule 2, at least one solution can be obtained by the method of Frobenius. In fact we found two such solutions, see eqs.(80) and (81).

Next consider Example 3, eq.(83):

    x^3 \frac{d^2 y}{dx^2} + y = 0    (100)

In this case A_0 = 0 and C_0 = 1 so that x = 0 is a singular point. The quantities p(x) and q(x) are given by

    p(x) = x \frac{B(x)}{A(x)} = 0   and   q(x) = x^2 \frac{C(x)}{A(x)} = \frac{1}{x}    (101)

Since q(x) blows up as x \to 0, we see that x = 0 is an irregular singular point and neither Rule 1 nor Rule 2 applies. In fact we saw that the method of Frobenius did not work on Example 3.

You may have noticed that Rule 1 guarantees two linearly independent solutions in the case of an ordinary point, while Rule 2 only guarantees one solution in the case of a regular singular point. Nevertheless in the case of Example 2 (which has a regular singular point) we found two linearly independent solutions. The case where the method of Frobenius only yields one solution is illustrated by Example 4:

Example 4

    x^2 \frac{d^2 y}{dx^2} + 4x \frac{dy}{dx} + (2 + x) y = 0    (102)

Here x = 0 is a regular singular point. We apply the method of Frobenius by substituting the Frobenius series (71) into eq.(102) and collecting terms:

    (r^2 + 3r + 2) c_0 x^r + [(r^2 + 5r + 6) c_1 + c_0] x^{r+1} + [(r^2 + 7r + 12) c_2 + c_1] x^{r+2} + ... = 0    (103)

    (r^2 + 3r + 2) c_0 = 0    (104)
    (r^2 + 5r + 6) c_1 + c_0 = 0    (105)
    (r^2 + 7r + 12) c_2 + c_1 = 0    (106)
    [r^2 + (2n+5) r + (n^2 + 5n + 6)] c_{n+1} + c_n = 0    (107)

Eqs.(105)-(107) have the recurrence relation:

    c_{n+1} = -\frac{c_n}{(r + n + 2)(r + n + 3)},   n = 0, 1, 2, 3, ...    (108)

In addition to (108), we also must satisfy, from (104), the indicial equation:

    r^2 + 3r + 2 = 0   \Rightarrow   r = -1, -2    (109)

Let us consider each of these r-values separately:

For r = -1 we get:

    y = c_0 \left( \frac{1}{x} - \frac{1}{2} + \frac{x}{12} - \frac{x^2}{144} + \frac{x^3}{2880} - ... \right)    (110)

For r = -2, eq.(105) requires that we take c_0 = 0. Then we obtain the following solution:

    y = c_1 \left( \frac{1}{x} - \frac{1}{2} + \frac{x}{12} - \frac{x^2}{144} + \frac{x^3}{2880} - ... \right)    (111)

Eqs.(110) and (111) are obviously not linearly independent. The method of Frobenius has obtained only one linearly independent solution. This case can be characterized by Rule 3:

Rule 3: If x = 0 is a regular singular point, and if the two indicial roots r_1 and r_2 are not identical and do not differ by an integer, then the method of Frobenius will yield two linearly independent solutions.

In the case of Example 2, eq.(72), the indicial equation (79) gave r_1 = -1 and r_2 = 1/2. Since these r-values do not differ by an integer, Rule 3 guarantees that the method of Frobenius will generate two linearly independent solutions. See eqs.(80) and (81).

However, in the case of Example 4, eq.(102), the indicial roots were r_1 = -1 and r_2 = -2, see eq.(109). In this case Rule 3 does not apply and we have no guarantee that the method of Frobenius will generate two linearly independent solutions.

A natural question to ask at this point is: How can we find the second linearly independent solution in cases like that of Example 4, where there is a regular singular point for which the indicial roots are repeated or differ by an integer? The answer is given by Rule 4:

Rule 4: If x = 0 is a regular singular point, and if the two indicial roots r_1 and r_2 are either identical or differ by an integer, then one solution can be obtained by the method of Frobenius. Let us refer to that solution as y = f(x). A second linearly independent solution can be obtained in the form

    y = C f(x) \log x + g(x)    (112)

where g(x) is a Frobenius series. The constant C may or may not be equal to zero. If C = 0, then both linearly independent solutions can be obtained in the form of Frobenius series. In the repeated root case r_1 = r_2, the constant C is not equal to zero.

In the case of repeated indicial roots, we should not be surprised to find the occurrence of \log x in the solution, since we have seen that in Euler's equation, eq.(44), which has a regular singular point at x = 0, the presence of a repeated root implies the presence of \log x in the solution, see eq.(50).

9  Solving Problem B by Separation of Variables, continued

Now that we know how to solve linear variable coefficient ODE's, let us return to the problem which motivated our digression, namely the application of separation of variables to Problem B. We wrote u(r, z) = R(r) Z(z), see eq.(39), and we found that the unknown function R(r) had to satisfy the following boundary value problem (see eqs.(42),(43)):

    \frac{d^2 R}{dr^2} + \frac{1}{r} \frac{dR}{dr} + \lambda R = 0   with the B.C.   R = 0 when r = a    (113)

To begin with, let's stretch the r coordinate so as to absorb the separation constant \lambda and thereby make it disappear from the ODE. Let

    x = \sqrt{\lambda}\, r    (114)

and let us change notation from R(r) to y(x):

    \frac{dR}{dr} = \frac{dy}{dx} \frac{dx}{dr} = \frac{dy}{dx} \frac{d}{dr}(\sqrt{\lambda}\, r) = \sqrt{\lambda} \frac{dy}{dx}    (115)

where we have used the chain rule. Similarly, d^2 R/dr^2 = \lambda \, d^2 y/dx^2. The differential equation in (113) becomes

    \frac{d^2 y}{dx^2} + \frac{1}{x} \frac{dy}{dx} + y = 0    (116)

Next let us multiply eq.(116) by x so as to put it in the form of eq.(91):

    x \frac{d^2 y}{dx^2} + \frac{dy}{dx} + x y = 0    (117)

Now we have to decide upon which method to use to solve eq.(117). Inspection of (117) shows that x = 0 is not an ordinary point (since A_0 = 0, see eq.(92)). In fact x = 0 is a regular singular point. To prove this we compute the quantities p(x) and q(x) defined in eqs.(95),(96):

    p(x) = x \frac{B(x)}{A(x)} = 1   and   q(x) = x^2 \frac{C(x)}{A(x)} = x^2    (118)

and we note that neither p(x) nor q(x) blows up as x \to 0, telling us that x = 0 is a regular singular point. Then by Rule 2 we know that at least one solution of (117) can be obtained by the method of Frobenius.

We proceed with the method of Frobenius by substituting a Frobenius series (71) into eq.(117) and collecting terms. We find:

    r^2 c_0 x^{r-1} + (r^2 + 2r + 1) c_1 x^r + [(r^2 + 4r + 4) c_2 + c_0] x^{r+1} + [(r^2 + 6r + 9) c_3 + c_1] x^{r+2} + ... = 0    (119)

    r^2 c_0 = 0    (120)
    (r^2 + 2r + 1) c_1 = 0    (121)

    (r^2 + 4r + 4) c_2 + c_0 = 0    (122)
    (r^2 + 6r + 9) c_3 + c_1 = 0    (123)
    (r + n + 2)^2 c_{n+2} + c_n = 0    (124)

Eqs.(122)-(124) have the recurrence relation:

    c_{n+2} = -\frac{c_n}{(r + n + 2)^2},   n = 0, 1, 2, 3, ...    (125)

The indicial equation for this system is obtained from eq.(120):

    r^2 = 0   \Rightarrow   r = 0, 0   (repeated root)    (126)

So we are in the case of a regular singular point with a repeated indicial root. Rule 4 applies to this situation and tells us that the method of Frobenius will be able to generate only one solution, and that a second linearly independent solution will involve a \log x term, see eq.(112).

Note that once r = 0 is chosen to satisfy eq.(120), we find from eq.(121) that c_1 = 0. This implies, from the recurrence relation (125), that all the odd coefficients must vanish. Thus we obtain the following solution:

    y = c_0 \left( 1 - \frac{x^2}{4} + \frac{x^4}{64} - \frac{x^6}{2304} + ... \right)    (127)

The function so generated is very famous and is known as a Bessel function of order zero of the first kind. It is usually represented by the symbol J_0 and can be written in the following form:

    J_0(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{2^{2n} (n!)^2}    (128)

We present without derivation the following expression for J_0(x) which is valid for large values of x:

    J_0(x) \approx \sqrt{\frac{2}{\pi x}} \cos\left( x - \frac{\pi}{4} \right)   as x \to \infty    (129)

From Rule 4, a second linearly independent solution will take the form:

    y = C J_0(x) \log x + g(x)    (130)

where g(x) can be written as a power series. Let us abbreviate this second linearly independent solution by the symbol Y_0. Thus we have obtained the following expression for the general solution to eq.(117):

    y = k_1 J_0(x) + k_2 Y_0(x)    (131)
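
The series (128) and the large-x approximation (129) are easy to check against a library implementation. A small Python sketch using scipy (the function name defined here and the sample points are illustrative):

```python
import numpy as np
from scipy.special import j0

def j0_series(x, n_terms=20):
    """Partial sum of eq.(128), the power series for the Bessel function J_0."""
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        term *= -x**2 / (4.0 * (n + 1)**2)   # ratio of successive terms in (128)
    return total

print(j0_series(1.7), j0(1.7))                                # series (128) vs. scipy's J_0
print(np.sqrt(2/(np.pi*30)) * np.cos(30 - np.pi/4), j0(30))   # asymptotic form (129)
```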

Recall that we defined x in order to simplify the form of the R differential equation in (113). Now we return to the original independent variable r by setting x = \sqrt{\lambda}\, r (see eq.(114)). The general solution to the R differential equation in (113), R'' + R'/r + \lambda R = 0, can be written:

    R(r) = k_1 J_0(\sqrt{\lambda}\, r) + k_2 Y_0(\sqrt{\lambda}\, r)    (132)

The next step is to apply the B.C. associated with this equation:

    R = 0 when r = a    (133)

Before applying the B.C. (133), we note that the presence of the log term in Y_0 will produce infinite temperatures at r = 0, and hence the Y_0 part of the solution must be removed by choosing k_2 = 0 in (132). The B.C. (133) then requires that

    R(a) = k_1 J_0(\sqrt{\lambda}\, a) = 0    (134)

There is an infinite sequence of \lambda values, \{\lambda_n\}, each of which satisfies eq.(134). These may be found by utilizing tables of the zeros of J_0(x). We shall represent the nth zero of J_0(x) by \Gamma_n. The first five zeros of J_0(x) are:

    \Gamma_1 = 2.4048,  \Gamma_2 = 5.5201,  \Gamma_3 = 8.6537,  \Gamma_4 = 11.7915,  \Gamma_5 = 14.9309, ...    (135)

The corresponding values of \lambda are of the form \lambda_n = \Gamma_n^2 / a^2:

    \lambda_1 = \frac{5.783}{a^2},  \lambda_2 = \frac{30.471}{a^2},  \lambda_3 = \frac{74.887}{a^2},  \lambda_4 = \frac{139.040}{a^2},  \lambda_5 = \frac{222.932}{a^2}, ...    (136)

And the corresponding R-eigenfunctions are:

    R_n(r) = J_0(\sqrt{\lambda_n}\, r) = J_0\!\left( \Gamma_n \frac{r}{a} \right)    (137)

Returning to the separation of variables solution of Problem B, we must also solve the Z-equation (see eqs.(42),(43)):

    Z'' - \lambda Z = 0   with the B.C.   Z = 0 when z = 0    (138)

As in the separation of variables solution of Problem A, the general solution of the Z equation (138) can be written in the form:

    Z(z) = c_3 \cosh\sqrt{\lambda}\, z + c_4 \sinh\sqrt{\lambda}\, z    (139)

The B.C. Z(0) = 0 gives c_3 = 0.

Substituting the derived results into the assumed form of the solution u(r, z) = R(r) Z(z), we have

    u(r, z) = R(r) Z(z) = a_n J_0\!\left( \Gamma_n \frac{r}{a} \right) \sinh\frac{\Gamma_n z}{a},   n = 1, 2, 3, ...    (140)

where a_n is an arbitrary constant. Since the PDE (38) is linear, we may superimpose solutions to obtain the form:

    u(r, z) = \sum_{n=1}^{\infty} a_n J_0\!\left( \Gamma_n \frac{r}{a} \right) \sinh\frac{\Gamma_n z}{a}    (141)

We still have to satisfy the B.C. (8), u(r, L) = 1, which gives (from (141)):

    1 = \sum_{n=1}^{\infty} a_n J_0\!\left( \Gamma_n \frac{r}{a} \right) \sinh\frac{\Gamma_n L}{a}    (142)

Our task now is to find the constants a_n. When we were faced with this same task in solving Problem A, we had a Fourier series (see eq.(31)). We obtained the values of the constants a_n by using the orthogonality of the eigenfunctions \sin(n\pi x/L). In the case of Problem B we will similarly use the orthogonality of the eigenfunctions J_0(\Gamma_n r/a).


10  Orthogonality

In order to complete our treatment of Problem B, we need to use orthogonality of the associated eigenfunctions. In the case of Problem A, the orthogonal eigenfunctions X(x) satisfied the following boundary value problem (see eqs.(22) and (23)):

    X'' + \lambda X = 0   with the B.C.   X(0) = 0,  X(L) = 0    (143)

In order to prove orthogonality, we solved this problem to get

    X_n(x) = \sin\sqrt{\lambda_n}\, x = \sin\frac{n\pi x}{L}    (144)

and then we used the properties of trig functions to prove that

    \int_0^L \sin\frac{n\pi x}{L} \sin\frac{m\pi x}{L}\, dx = 0,   n \ne m    (145)

See eqs.(32)-(34). Unfortunately, we can't use a comparable method to prove the orthogonality of Bessel functions. Instead we will use a different method called Sturm-Liouville theory. We demonstrate the method by using it on eq.(143).

Let X_n(x) satisfy the eqs:

    X_n'' + \lambda_n X_n = 0   with the B.C.   X_n(0) = 0,  X_n(L) = 0    (146)

and let X_m(x) satisfy the eqs:

    X_m'' + \lambda_m X_m = 0   with the B.C.   X_m(0) = 0,  X_m(L) = 0    (147)

We want to prove that

    \int_0^L X_m X_n\, dx = 0   for n \ne m    (148)

We multiply (146) by X_m and (147) by X_n, and subtract, giving

    X_m X_n'' - X_n X_m'' + (\lambda_n - \lambda_m) X_n X_m = 0    (149)

Next we integrate (149) from 0 to L:

    \int_0^L (X_m X_n'' - X_n X_m'')\, dx + (\lambda_n - \lambda_m) \int_0^L X_n X_m\, dx = 0    (150)

The idea now is to get the first term to vanish by using integration by parts and the B.C. in (143). Consider the first part of the first term:

    \int_0^L X_m X_n''\, dx = [X_m X_n']_0^L - \int_0^L X_m' X_n'\, dx    (151)

The integrated terms vanish due to the B.C. X_m(0) = X_m(L) = 0. Thus

    \int_0^L X_m X_n''\, dx = - \int_0^L X_m' X_n'\, dx    (152)

Similarly,

    \int_0^L X_n X_m''\, dx = - \int_0^L X_n' X_m'\, dx    (153)

Therefore eq.(150) becomes

    (\lambda_n - \lambda_m) \int_0^L X_n X_m\, dx = 0   \Rightarrow   \int_0^L X_n X_m\, dx = 0   if n \ne m    (154)

We have thus proven the orthogonality of the eigenfunctions X_n(x) without constructing X_n(x). This method is quite different from the method we used in eqs.(32)-(34), where we solved for X_n(x) = \sin(n\pi x/L) and used various trig identities.

Now in the case of Problem B, the eigenfunctions R_n(r) satisfy the boundary value problem (113):

    \frac{d^2 R_n}{dr^2} + \frac{1}{r} \frac{dR_n}{dr} + \lambda_n R_n = 0   with the B.C.   R_n = 0 when r = a    (155)

Based on the procedure given above in eqs.(146)-(154), it would seem like the right way to proceed with eq.(155) would be to multiply by R_m, subtract the same equation with m and n interchanged, and integrate by parts to eliminate the terms which are not part of the orthogonality condition. Try it: it doesn't work! The correct procedure is to multiply eq.(155) by r and write it in the form:

    \frac{d}{dr}\left( r \frac{dR_n}{dr} \right) + \lambda_n r R_n = 0    (156)

Now we proceed as before, multiplying (156) by R_m and subtracting the same equation with m and n interchanged, giving:

    R_m \frac{d}{dr}\left( r \frac{dR_n}{dr} \right) - R_n \frac{d}{dr}\left( r \frac{dR_m}{dr} \right) + (\lambda_n - \lambda_m) r R_n R_m = 0    (157)

Now we integrate (157) from 0 to a:

    \int_0^a \left( R_m \frac{d}{dr}\left( r \frac{dR_n}{dr} \right) - R_n \frac{d}{dr}\left( r \frac{dR_m}{dr} \right) \right) dr + (\lambda_n - \lambda_m) \int_0^a r R_n R_m\, dr = 0    (158)

As before, the idea is to get the first term to vanish by using integration by parts and the B.C. in (155). Consider the first part of the first term:

    \int_0^a R_m \frac{d}{dr}\left( r \frac{dR_n}{dr} \right) dr = \left[ r R_m \frac{dR_n}{dr} \right]_0^a - \int_0^a r \frac{dR_n}{dr} \frac{dR_m}{dr}\, dr    (159)

The integrated terms vanish at the upper limit because R_m = 0 at r = a. At the lower limit, the presence of the factor r kills the term, assuming that R and dR/dr remain bounded as r \to 0. So we have

    \int_0^a R_m \frac{d}{dr}\left( r \frac{dR_n}{dr} \right) dr = - \int_0^a r \frac{dR_n}{dr} \frac{dR_m}{dr}\, dr    (160)

Similarly, we find that

    \int_0^a R_n \frac{d}{dr}\left( r \frac{dR_m}{dr} \right) dr = - \int_0^a r \frac{dR_m}{dr} \frac{dR_n}{dr}\, dr    (161)

so that eq.(158) becomes

    (\lambda_n - \lambda_m) \int_0^a r R_n R_m\, dr = 0   \Rightarrow   \int_0^a r R_n R_m\, dr = 0   if n \ne m    (162)

Since we showed in eq.(137) that

    R_n(r) = J_0(\sqrt{\lambda_n}\, r) = J_0\!\left( \Gamma_n \frac{r}{a} \right)    (163)

we have the following orthogonality condition for Bessel functions:

    \int_0^a r J_0\!\left( \Gamma_n \frac{r}{a} \right) J_0\!\left( \Gamma_m \frac{r}{a} \right) dr = 0   if n \ne m    (164)

where as a reminder we note that \Gamma_n is the nth zero of J_0(x), see eq.(135).
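
The orthogonality condition (164), including the weight r, can be confirmed numerically. A minimal Python sketch using scipy (the radius a = 2 and the function names are illustrative choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, jn_zeros

a = 2.0                      # illustrative cylinder radius
G = jn_zeros(0, 3)           # first three zeros of J_0, eq.(135)

def bessel_inner(n, m):
    """The weighted inner product of eq.(164)."""
    val, _ = quad(lambda r: r * j0(G[n] * r / a) * j0(G[m] * r / a), 0, a)
    return val

print(bessel_inner(0, 1), bessel_inner(0, 2))   # off-diagonal terms: essentially zero
print(bessel_inner(1, 1))                       # diagonal term: nonzero
```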

11  Sturm-Liouville Theory

In this section we present some results which generalize the orthogonality results in the previous section. A linear second order ODE is said to be in Sturm-Liouville form if it can be written as follows:

    \frac{d}{dx}\left( p(x) \frac{dy}{dx} \right) + (q(x) + \lambda w(x)) y = 0    (165)

We suppose that this equation is defined on an interval a \le x \le b and that B.C. on y(x) are given at the endpoints x = a and x = b. [In the case of Problem B, eq.(156) is in Sturm-Liouville form with R(r) corresponding to y(x), and where p(x) = x, q(x) = 0, and w(x) = x. The interval 0 \le r \le a corresponds to a \le x \le b.]

Let \{\lambda_n\} be a set of eigenvalues satisfying the ODE (165) and the B.C., and let \{y_n(x)\} be the corresponding eigenfunctions. The orthogonality condition for this system is

    \int_a^b w(x) y_n(x) y_m(x)\, dx = 0   if n \ne m    (166)

Here w(x) is called the weight function.

In order for eq.(166) to hold, the following terms must vanish:

    [p(x) (y_m(x) y_n'(x) - y_n(x) y_m'(x))]_a^b = 0    (167)

A variety of B.C. will satisfy eq.(167), including y(x) vanishing at both endpoints x = a and x = b. As we have seen in the case of Problem B, another way to satisfy (167) is if p(x) = 0 at one endpoint and y(x) = 0 at the other endpoint.

A related question is: given a second order ODE which is not in Sturm-Liouville form,

    A(x) y'' + B(x) y' + C(x) y + \lambda D(x) y = 0    (168)

how can it be put into Sturm-Liouville form? Recall that this was the situation in Problem B, where the original R-equation, (155), had to be multiplied by r to get it into the Sturm-Liouville form (156).

The procedure is to first calculate the following expression for p(x):

    p(x) = e^{\int (B(x)/A(x))\, dx}    (169)

Then multiply eq.(168) by p(x)/A(x). The result may then be written in the Sturm-Liouville form (165), where

    q(x) = \frac{p(x) C(x)}{A(x)}    (170)

    w(x) = \frac{p(x) D(x)}{A(x)}    (171)

For example, in the case of eq.(155), which we may write in the form:

    y'' + \frac{1}{x} y' + \lambda y = 0    (172)

we have

    A(x) = 1,   B(x) = \frac{1}{x},   C(x) = 0,   D(x) = 1    (173)

Then eqs.(169)-(171) give

    p(x) = e^{\int (B(x)/A(x))\, dx} = e^{\int (1/x)\, dx} = e^{\log x} = x    (174)

    q(x) = \frac{p(x) C(x)}{A(x)} = 0    (175)

    w(x) = \frac{p(x) D(x)}{A(x)} = x    (176)

which is to say that if eq.(172) is multiplied by p(x)/A(x) = x, it can be written in the Sturm-Liouville form:

    \frac{d}{dx}\left( x \frac{dy}{dx} \right) + \lambda x y = 0    (177)
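
The recipe (169)-(171) is mechanical enough to automate. Here is a small sympy sketch (the helper name is an illustrative choice) that computes p, q, w for a given A, B, C, D and reproduces (174)-(176) for eq.(172):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def sturm_liouville_pqw(A, B, C, D):
    """p, q, w of eqs.(169)-(171) for A y'' + B y' + C y + lambda D y = 0."""
    p = sp.exp(sp.integrate(B / A, x))          # eq.(169)
    return sp.simplify(p), sp.simplify(p * C / A), sp.simplify(p * D / A)

# eq.(172): A = 1, B = 1/x, C = 0, D = 1
print(sturm_liouville_pqw(1, 1/x, 0, 1))        # (x, 0, x), matching eqs.(174)-(176)
```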

12  Solving Problem B by Separation of Variables, concluded

Now that we know about the orthogonality of Bessel functions, let's return to solving Problem B by separation of variables. We previously obtained the following expression for u(r, z) (see eq.(141)):

    u(r, z) = \sum_{n=1}^{\infty} a_n J_0\!\left( \Gamma_n \frac{r}{a} \right) \sinh\frac{\Gamma_n z}{a}    (178)

We still have to satisfy the B.C. (8), u(r, L) = 1, which gives (from (178)):

    1 = \sum_{n=1}^{\infty} a_n J_0\!\left( \Gamma_n \frac{r}{a} \right) \sinh\frac{\Gamma_n L}{a}    (179)

Now we can use the orthogonality result (164):

    \int_0^a r J_0\!\left( \Gamma_n \frac{r}{a} \right) J_0\!\left( \Gamma_m \frac{r}{a} \right) dr = 0   if n \ne m    (180)

We multiply (179) by r J_0(\Gamma_m r/a) and integrate from 0 to a. The orthogonality condition (180) zaps everything on the RHS of the resulting equation except for the n = m term:

    \int_0^a r J_0\!\left( \Gamma_m \frac{r}{a} \right) dr = a_m \sinh\frac{\Gamma_m L}{a} \int_0^a r \left[ J_0\!\left( \Gamma_m \frac{r}{a} \right) \right]^2 dr    (181)

Solving (181) for a_m, we obtain:

    a_m = \frac{I_1(m)}{I_2(m) \sinh(\Gamma_m L/a)}    (182)

where we have used the notation I_1 and I_2 to abbreviate the following integrals:

    I_1(m) = \int_0^a r J_0\!\left( \Gamma_m \frac{r}{a} \right) dr   and   I_2(m) = \int_0^a r \left[ J_0\!\left( \Gamma_m \frac{r}{a} \right) \right]^2 dr    (183)

Using this notation, the expression (178) for u(r, z) becomes:

    u(r, z) = \sum_{n=1}^{\infty} \frac{I_1(n)}{I_2(n)} J_0\!\left( \Gamma_n \frac{r}{a} \right) \frac{\sinh(\Gamma_n z/a)}{\sinh(\Gamma_n L/a)}    (184)

Now it turns out that the integrals I_1 and I_2 can be evaluated in terms of a tabulated function J_1(x), which is the notation for a Bessel function of order 1 of the first kind. Skipping all the details, the result is:

    \frac{I_1(n)}{I_2(n)} = \frac{2}{\Gamma_n J_1(\Gamma_n)}    (185)

Using (185) in (184), we obtain the final solution of Problem B:

    u(r, z) = \sum_{n=1}^{\infty} \frac{2}{\Gamma_n J_1(\Gamma_n)} \frac{J_0(\Gamma_n r/a) \sinh(\Gamma_n z/a)}{\sinh(\Gamma_n L/a)}    (186)
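
All the ingredients of (186) (J_0, J_1 and the zeros \Gamma_n) are available in scipy, so the final solution can be evaluated directly. A minimal sketch (the values a = 1, L = 2 and the number of retained terms are illustrative):

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def u_cylinder(r, z, a=1.0, L=2.0, n_terms=40):
    """Partial sum of eq.(186), the series solution of Problem B."""
    total = 0.0
    for g in jn_zeros(0, n_terms):               # zeros Gamma_n of J_0, eq.(135)
        total += 2.0 / (g * j1(g)) * j0(g * r / a) \
                 * np.sinh(g * z / a) / np.sinh(g * L / a)
    return total

print(u_cylinder(0.3, 0.0))   # B.C. (7): u = 0 on z = 0
print(u_cylinder(0.3, 2.0))   # B.C. (8): u should be close to 1 on z = L
```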

13  Solving Problem C by Separation of Variables

Problem C has the PDE (see (9) and (16)):

    \nabla^2 u = \frac{1}{\rho^2} \left[ \frac{\partial}{\partial \rho}\left( \rho^2 \frac{\partial u}{\partial \rho} \right) + \frac{1}{\sin\phi} \frac{\partial}{\partial \phi}\left( \sin\phi \, \frac{\partial u}{\partial \phi} \right) \right] = 0    (187)

Following the procedure we used on Problems A and B, we seek a solution to the PDE (187) in the form

    u(\rho, \phi) = R(\rho) \Phi(\phi)    (188)

Substitution of (188) into (187) gives:

    \frac{1}{\rho^2} \left[ (\rho^2 R')' \Phi + \frac{1}{\sin\phi} (\Phi' \sin\phi)' R \right] = 0    (189)

where primes represent differentiation with respect to the argument. Separating variables, we obtain

    \frac{\rho^2 R'' + 2\rho R'}{R} = -\frac{\Phi'' \sin\phi + \Phi' \cos\phi}{\Phi \sin\phi} = \lambda    (190)

This yields two ODE's:

    \rho^2 R'' + 2\rho R' - \lambda R = 0   and   \Phi'' + \Phi' \cot\phi + \lambda \Phi = 0    (191)

Let's begin with the \Phi-equation. This equation can be simplified by changing the independent variable from the colatitude \phi to x = \cos\phi. In order to accomplish this step, we will use the chain rule, and we will write \Phi(\phi) as y(x) instead of \Phi(x) to avoid confusion. We have

    \frac{d\Phi}{d\phi} = \frac{dy}{dx} \frac{dx}{d\phi} = -\sin\phi \frac{dy}{dx}    (192)

    \frac{d^2 \Phi}{d\phi^2} = \frac{d}{d\phi}\left( -\sin\phi \frac{dy}{dx} \right) = -\cos\phi \frac{dy}{dx} + \sin^2\phi \frac{d^2 y}{dx^2} = -x \frac{dy}{dx} + (1 - x^2) \frac{d^2 y}{dx^2}    (193)

Using eqs.(192) and (193), the \Phi-equation of (191) becomes:

    -x \frac{dy}{dx} + (1 - x^2) \frac{d^2 y}{dx^2} + \cot\phi \left( -\sin\phi \frac{dy}{dx} \right) + \lambda y = 0    (194)

Eq.(194) can be simplified by using \cot\phi \sin\phi = \cos\phi = x, giving:

    (1 - x^2) \frac{d^2 y}{dx^2} - 2x \frac{dy}{dx} + \lambda y = 0    (195)

Note that x = 0 is an ordinary point of eq.(195). By Rule 1, we can use the method of power series expansions about x = 0 to obtain two linearly independent solutions. (See the section on Ordinary Points and Singular Points.) We look for a solution in the form of a power series:

    y = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + ... + c_n x^n + ...    (196)

Substituting (196) into (195) and collecting terms gives

    c_0 \lambda + 2c_2 + (c_1 (\lambda - 2) + 6 c_3) x + (c_2 (\lambda - 6) + 12 c_4) x^2 + (c_3 (\lambda - 12) + 20 c_5) x^3 + ... = 0    (197)

Requiring the coefficient of each power of x^n to vanish gives:

    c_0 \lambda + 2 c_2 = 0    (198)
    c_1 (\lambda - 2) + 6 c_3 = 0    (199)
    c_2 (\lambda - 6) + 12 c_4 = 0    (200)
    c_3 (\lambda - 12) + 20 c_5 = 0    (201)

which gives the following recurrence relation:

    c_{n+2} = \frac{n(n+1) - \lambda}{(n+2)(n+1)} c_n    (202)

Here c_0 and c_1 can be chosen arbitrarily, giving rise to two linearly independent solutions f(x) and g(x):

    y = c_0 f(x) + c_1 g(x)    (203)

where

    f(x) = 1 - \frac{\lambda x^2}{2} + \frac{\lambda (\lambda - 6) x^4}{24} + ...    (204)

    g(x) = x - \frac{(\lambda - 2) x^3}{6} + \frac{(\lambda - 12)(\lambda - 2) x^5}{120} + ...    (205)

It turns out that these two series diverge at x = \pm 1, that is, at \cos\phi = \pm 1, that is, at \phi = 0 and \phi = \pi. Physically this represents unbounded temperatures along the polar axis of the sphere. No choice of the arbitrary constants c_0 and c_1 will eliminate this problem, as it did in the case of the cylinder, compare with eqs.(132),(134). However, if \lambda is taken equal to n(n+1), for n an integer, then one of the two infinite series will terminate, that is, will become a polynomial, and the convergence problem will disappear.

We therefore take \lambda = n(n+1), n = 0, 1, 2, 3, ... and we obtain a sequence of polynomial solutions called Legendre polynomials, y = P_n(x):

    P_0(x) = 1    (206)
    P_1(x) = x = \cos\phi    (207)
    P_2(x) = \frac{1}{2}(3x^2 - 1) = \frac{1}{2}(3\cos^2\phi - 1)    (208)
    P_3(x) = \frac{1}{2}(5x^3 - 3x) = \frac{1}{2}(5\cos^3\phi - 3\cos\phi)    (209)

Rodrigues's formula gives an expression for P_n(x):

    P_n(x) = \frac{1}{n! \, 2^n} \frac{d^n}{dx^n} (x^2 - 1)^n    (210)
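
Rodrigues's formula (210) is easy to check symbolically against the list (206)-(209). A short sympy sketch (the function name is an illustrative choice):

```python
import sympy as sp

x = sp.symbols('x')

def legendre_rodrigues(n):
    """P_n(x) from Rodrigues's formula, eq.(210)."""
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (sp.factorial(n) * 2**n))

for n in range(4):
    print(n, legendre_rodrigues(n))   # 1, x, 3*x**2/2 - 1/2, 5*x**3/2 - 3*x/2
```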

Let us turn now to the R(\rho)-equation in (191):

    \rho^2 R'' + 2\rho R' - \lambda R = 0,   \lambda = n(n+1)    (211)

This is an Euler equation, so we look for a solution in the form R(\rho) = \rho^r (see eq.(45)). Substitution into (211) gives:

    r(r-1) + 2r - n(n+1) = 0   \Rightarrow   r = n   and   r = -(n+1)    (212)

Thus eq.(211) has the general solution:

    R(\rho) = k_1 \rho^n + \frac{k_2}{\rho^{n+1}}    (213)

For bounded temperatures at the sphere's center, \rho = 0, we require that k_2 = 0:

    R(\rho) = k_1 \rho^n    (214)

Substituting the derived results into the assumed form of the solution (188), we have

    u(\rho, \phi) = R(\rho) \Phi(\phi) = b_n P_n(\cos\phi) \rho^n,   n = 0, 1, 2, 3, ...    (215)

where b_n is an arbitrary constant. Since the PDE (187) is linear, we may superimpose solutions to obtain the form:

    u(\rho, \phi) = \sum_{n=0}^{\infty} b_n P_n(\cos\phi) \rho^n    (216)

Our task now is to choose the constants b_n so as to satisfy the B.C. on the outside of the sphere, \rho = a, see eqs.(10),(11):

    u(a, \phi) = \sum_{n=0}^{\infty} b_n a^n P_n(\cos\phi) = \begin{cases} 1 & \text{for } 0 \le \phi < \pi/2 \\ 0 & \text{for } \pi/2 \le \phi \le \pi \end{cases}    (217)

or, switching to x = \cos\phi,

    u(a, x) = \sum_{n=0}^{\infty} b_n a^n P_n(x) = \begin{cases} 1 & \text{for } 0 < x \le 1 \\ 0 & \text{for } -1 \le x < 0 \end{cases}    (218)

As in the case of Problems A and B, we need to use orthogonality of the eigenfunctions to find the b_n coefficients. We can write Legendre's equation (195) in Sturm-Liouville form (165) as follows:

    \frac{d}{dx}\left( (1 - x^2) \frac{dy}{dx} \right) + \lambda y = 0    (219)

Comparison of (219) with the Sturm-Liouville form (165) gives the weight function w(x) = 1, from which we obtain the following orthogonality condition:

    \int_{-1}^{1} P_n(x) P_m(x)\, dx = 0   if n \ne m    (220)
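
Because P_n(x) P_m(x) is itself a polynomial, the integral in (220) can be evaluated exactly with numpy's Legendre-series tools. A minimal sketch (the function name is an illustrative choice):

```python
from numpy.polynomial import legendre as L

def legendre_inner(n, m):
    """The inner product of eq.(220), integrated exactly as a polynomial."""
    antideriv = (L.Legendre.basis(n) * L.Legendre.basis(m)).integ()
    return antideriv(1.0) - antideriv(-1.0)

print(legendre_inner(2, 3))   # 0 for n != m, as in eq.(220)
print(legendre_inner(2, 2))   # 2/(2n+1) = 0.4 for n = m  (cf. eq.(225))
```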

We multiply eq.(218) by P_m(x) and integrate from x = -1 to x = 1. The orthogonality condition (220) eliminates all terms in the infinite series except for the n = m term:

    b_m a^m \int_{-1}^{1} [P_m(x)]^2\, dx = \int_0^1 P_m(x)\, dx    (221)

Solving (221) for b_m, we obtain:

    b_m = \frac{I_3(m)}{a^m I_4(m)}    (222)

where we have used the notation I_3 and I_4 to abbreviate the following integrals:

    I_3(m) = \int_0^1 P_m(x)\, dx   and   I_4(m) = \int_{-1}^{1} [P_m(x)]^2\, dx    (223)

Using this notation, the expression (216) for u(\rho, \phi) becomes:

    u(\rho, \phi) = \sum_{n=0}^{\infty} \frac{I_3(n)}{I_4(n)} P_n(\cos\phi) \left( \frac{\rho}{a} \right)^n    (224)

Now it turns out that the integrals I_3 and I_4 can be evaluated in closed form. Here are the results, without derivation:

    I_4(n) = \int_{-1}^{1} [P_n(x)]^2\, dx = \frac{2}{2n+1}    (225)

    I_3(n) = \int_0^1 P_n(x)\, dx = \begin{cases} 1 & \text{if } n = 0 \\ 1/2 & \text{if } n = 1 \\ 0 & \text{if } n = 2, 4, 6, ... \\ (-1)^{(n-1)/2} \dfrac{(n-2)!!}{(n+1)!!} & \text{if } n = 3, 5, 7, ... \end{cases}    (226)

where

    k!! = \begin{cases} k(k-2)(k-4) \cdots 2 & \text{for } k \text{ even} \\ k(k-2)(k-4) \cdots 1 & \text{for } k \text{ odd} \end{cases}    (227)
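
With the closed forms (225) and (226), the series (224) can be summed numerically; scipy supplies both the Legendre polynomials and the double factorial. A minimal sketch (the radius a = 1, the sample angles, and the number of terms are illustrative):

```python
import numpy as np
from scipy.special import eval_legendre, factorial2

def I3(n):
    """Closed form of eq.(226) for the integral of P_n(x) over (0,1)."""
    if n == 0:
        return 1.0
    if n == 1:
        return 0.5
    if n % 2 == 0:
        return 0.0
    return (-1.0)**((n - 1)//2) * factorial2(n - 2) / factorial2(n + 1)

def u_sphere(rho, phi, a=1.0, n_terms=40):
    """Partial sum of eq.(224), the series solution of Problem C; I4(n) = 2/(2n+1) by (225)."""
    x = np.cos(phi)
    return sum(I3(n) * (2*n + 1) / 2.0 * eval_legendre(n, x) * (rho/a)**n
               for n in range(n_terms))

print(u_sphere(1.0, 0.3))   # on the surface, upper half (phi < pi/2): close to 1
print(u_sphere(1.0, 2.5))   # on the surface, lower half (phi > pi/2): close to 0
```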


More information

Review of Fundamental Mathematics

Review of Fundamental Mathematics Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making. The decision-making tools

More information

6 EXTENDING ALGEBRA. 6.0 Introduction. 6.1 The cubic equation. Objectives

6 EXTENDING ALGEBRA. 6.0 Introduction. 6.1 The cubic equation. Objectives 6 EXTENDING ALGEBRA Chapter 6 Extending Algebra Objectives After studying this chapter you should understand techniques whereby equations of cubic degree and higher can be solved; be able to factorise

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

www.mathsbox.org.uk ab = c a If the coefficients a,b and c are real then either α and β are real or α and β are complex conjugates

www.mathsbox.org.uk ab = c a If the coefficients a,b and c are real then either α and β are real or α and β are complex conjugates Further Pure Summary Notes. Roots of Quadratic Equations For a quadratic equation ax + bx + c = 0 with roots α and β Sum of the roots Product of roots a + b = b a ab = c a If the coefficients a,b and c

More information

MA107 Precalculus Algebra Exam 2 Review Solutions

MA107 Precalculus Algebra Exam 2 Review Solutions MA107 Precalculus Algebra Exam 2 Review Solutions February 24, 2008 1. The following demand equation models the number of units sold, x, of a product as a function of price, p. x = 4p + 200 a. Please write

More information

MULTIPLE INTEGRALS. h 2 (y) are continuous functions on [c, d] and let f(x, y) be a function defined on R. Then

MULTIPLE INTEGRALS. h 2 (y) are continuous functions on [c, d] and let f(x, y) be a function defined on R. Then MULTIPLE INTEGALS 1. ouble Integrals Let be a simple region defined by a x b and g 1 (x) y g 2 (x), where g 1 (x) and g 2 (x) are continuous functions on [a, b] and let f(x, y) be a function defined on.

More information

College of the Holy Cross, Spring 2009 Math 373, Partial Differential Equations Midterm 1 Practice Questions

College of the Holy Cross, Spring 2009 Math 373, Partial Differential Equations Midterm 1 Practice Questions College of the Holy Cross, Spring 29 Math 373, Partial Differential Equations Midterm 1 Practice Questions 1. (a) Find a solution of u x + u y + u = xy. Hint: Try a polynomial of degree 2. Solution. Use

More information

15. Symmetric polynomials

15. Symmetric polynomials 15. Symmetric polynomials 15.1 The theorem 15.2 First examples 15.3 A variant: discriminants 1. The theorem Let S n be the group of permutations of {1,, n}, also called the symmetric group on n things.

More information

Vector Math Computer Graphics Scott D. Anderson

Vector Math Computer Graphics Scott D. Anderson Vector Math Computer Graphics Scott D. Anderson 1 Dot Product The notation v w means the dot product or scalar product or inner product of two vectors, v and w. In abstract mathematics, we can talk about

More information

x 2 + y 2 = 1 y 1 = x 2 + 2x y = x 2 + 2x + 1

x 2 + y 2 = 1 y 1 = x 2 + 2x y = x 2 + 2x + 1 Implicit Functions Defining Implicit Functions Up until now in this course, we have only talked about functions, which assign to every real number x in their domain exactly one real number f(x). The graphs

More information

Understanding Basic Calculus

Understanding Basic Calculus Understanding Basic Calculus S.K. Chung Dedicated to all the people who have helped me in my life. i Preface This book is a revised and expanded version of the lecture notes for Basic Calculus and other

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

ECE 842 Report Implementation of Elliptic Curve Cryptography

ECE 842 Report Implementation of Elliptic Curve Cryptography ECE 842 Report Implementation of Elliptic Curve Cryptography Wei-Yang Lin December 15, 2004 Abstract The aim of this report is to illustrate the issues in implementing a practical elliptic curve cryptographic

More information

14.1. Basic Concepts of Integration. Introduction. Prerequisites. Learning Outcomes. Learning Style

14.1. Basic Concepts of Integration. Introduction. Prerequisites. Learning Outcomes. Learning Style Basic Concepts of Integration 14.1 Introduction When a function f(x) is known we can differentiate it to obtain its derivative df. The reverse dx process is to obtain the function f(x) from knowledge of

More information

How To Prove The Dirichlet Unit Theorem

How To Prove The Dirichlet Unit Theorem Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

4.3 Lagrange Approximation

4.3 Lagrange Approximation 206 CHAP. 4 INTERPOLATION AND POLYNOMIAL APPROXIMATION Lagrange Polynomial Approximation 4.3 Lagrange Approximation Interpolation means to estimate a missing function value by taking a weighted average

More information

College Algebra - MAT 161 Page: 1 Copyright 2009 Killoran

College Algebra - MAT 161 Page: 1 Copyright 2009 Killoran College Algebra - MAT 6 Page: Copyright 2009 Killoran Zeros and Roots of Polynomial Functions Finding a Root (zero or x-intercept) of a polynomial is identical to the process of factoring a polynomial.

More information

Using a table of derivatives

Using a table of derivatives Using a table of derivatives In this unit we construct a Table of Derivatives of commonly occurring functions. This is done using the knowledge gained in previous units on differentiation from first principles.

More information

Lagrange Interpolation is a method of fitting an equation to a set of points that functions well when there are few points given.

Lagrange Interpolation is a method of fitting an equation to a set of points that functions well when there are few points given. Polynomials (Ch.1) Study Guide by BS, JL, AZ, CC, SH, HL Lagrange Interpolation is a method of fitting an equation to a set of points that functions well when there are few points given. Sasha s method

More information

Factoring Quadratic Expressions

Factoring Quadratic Expressions Factoring the trinomial ax 2 + bx + c when a = 1 A trinomial in the form x 2 + bx + c can be factored to equal (x + m)(x + n) when the product of m x n equals c and the sum of m + n equals b. (Note: the

More information

Tim Kerins. Leaving Certificate Honours Maths - Algebra. Tim Kerins. the date

Tim Kerins. Leaving Certificate Honours Maths - Algebra. Tim Kerins. the date Leaving Certificate Honours Maths - Algebra the date Chapter 1 Algebra This is an important portion of the course. As well as generally accounting for 2 3 questions in examination it is the basis for many

More information

LINEAR MAPS, THE TOTAL DERIVATIVE AND THE CHAIN RULE. Contents

LINEAR MAPS, THE TOTAL DERIVATIVE AND THE CHAIN RULE. Contents LINEAR MAPS, THE TOTAL DERIVATIVE AND THE CHAIN RULE ROBERT LIPSHITZ Abstract We will discuss the notion of linear maps and introduce the total derivative of a function f : R n R m as a linear map We will

More information

5.3 Improper Integrals Involving Rational and Exponential Functions

5.3 Improper Integrals Involving Rational and Exponential Functions Section 5.3 Improper Integrals Involving Rational and Exponential Functions 99.. 3. 4. dθ +a cos θ =, < a

More information

r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + 1 = 1 1 θ(t) 1 9.4.4 Write the given system in matrix form x = Ax + f ( ) sin(t) x y 1 0 5 z = dy cos(t)

r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + 1 = 1 1 θ(t) 1 9.4.4 Write the given system in matrix form x = Ax + f ( ) sin(t) x y 1 0 5 z = dy cos(t) Solutions HW 9.4.2 Write the given system in matrix form x = Ax + f r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + We write this as ( ) r (t) θ (t) = ( ) ( ) 2 r(t) θ(t) + ( ) sin(t) 9.4.4 Write the given system

More information

Second Order Linear Differential Equations

Second Order Linear Differential Equations CHAPTER 2 Second Order Linear Differential Equations 2.. Homogeneous Equations A differential equation is a relation involving variables x y y y. A solution is a function f x such that the substitution

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information