Gianni Bosi and Magalì E. Zuanon. Basic Optimization and Financial Mathematics

Preface

"It isn't that they can't see the solution. They can't see the problem."
G.K. CHESTERTON, The Scandal of Father Brown, "The Point of a Pin"

This booklet contains the notes for a short course in Optimization and Financial Mathematics. It is far from being exhaustive, but it is nevertheless entirely self-contained, building on a classical course of calculus. The main goal of these notes is to provide students with the essential information about these topics in a primarily theoretical framework. The first part (Chapters 1, 2 and 3) illustrates the basic tools of optimization in several variables, passing through essential linear algebra and quadratic forms. The second part (Chapters 4, 5, 6 and 7) treats the Theory of Interest as a mathematical topic, which is rather unlike what is done in typical finance courses. In this part we consider the deterministic approach only. In the last chapter we introduce some elements of the mathematics of life insurance; an understanding of the basic principles of this part is a solid foundation for further study of the theory in a more general setting. The authors apologize in advance for any typos and mistakes, and are grateful to the readers who will be so kind as to point them out.

Trieste, March 21st, 2012

Gianni BOSI, Dipartimento di Scienze Economiche, Aziendali, Matematiche e Statistiche, Università di Trieste, Piazzale Europa 1, Trieste, Italy. giannibo@econ.units.it

Magalì E. ZUANON, Dipartimento di Metodi Quantitativi, Università degli Studi di Brescia, Contrada Santa Chiara 50, Brescia, Italy. zuanon@eco.unibs.it

Contents

1 Some concepts of matrix algebra
  1.1 Basic definitions
  1.2 Determinants
  1.3 The algebraic structure of R^n
  1.4 The topological structure of R^n

2 Real functions of n real variables
  2.1 General concepts
  2.2 Quadratic forms
  2.3 Differentiability and optimization
  2.4 Convex and concave functions
  2.5 Constrained optimization with equality constraints
  2.6 Exercises

3 The classical financial regimes
  3.1 General concepts
  3.2 Regime of the linear interest
  3.3 Regime of the commercial discount
  3.4 Regime of the compound interest or exponential regime
  3.5 Exercises

4 The regime of the compound interest
  4.1 The final value under the regime of the compound interest
  4.2 Equivalence of interest rates
  4.3 Force of Interest: Continuous compounding
  4.4 Axiomatization of the regime of the compound interest
  4.5 Exercises

5 Annuities and perpetuities
  5.1 Annual payments
  5.2 Non-annual payments
  5.3 Amortization and capital construction by constant instalments
  5.4 Exercises

6 Amortization of a debt
  6.1 General concepts
  6.2 Amortization by constant annual principal shares
  6.3 Amortization by constant instalments
  6.4 Negotiation of a debt: Makeham formula
  6.5 Exercises

7 Actuarial mathematics
  7.1 Probability and Lifetimes
  7.2 Expected Present Values of Insurance Contracts
  7.3 Exercises

Solutions to selected exercises

Bibliography

CHAPTER 1

Some concepts of matrix algebra

1.1 Basic definitions

In this section we first present the basic concepts concerning matrices. Then we introduce the operations of sum of two matrices, product of a matrix and a real number and, finally, product of two matrices.

Definition (Real matrix). An m × n real matrix (m and n are positive integers) is a collection

A = [a_{ij}]_{1 ≤ i ≤ m, 1 ≤ j ≤ n} = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix}

of real numbers a_{ij} (1 ≤ i ≤ m, 1 ≤ j ≤ n), doubly ordered by row and by column.

So, the matrix A = [a_{ij}]_{1 ≤ i ≤ m, 1 ≤ j ≤ n} has m rows and n columns, and a_{ij} is precisely the element that belongs both to the i-th row and to the j-th column. In the sequel a real matrix will be simply referred to as a matrix for the sake of simplicity.

Example. Consider a 2 × 4 matrix A = [a_{ij}]_{1 ≤ i ≤ 2, 1 ≤ j ≤ 4}, for instance one in which a_{13} = 3 and a_{22} = 5: here a_{13} denotes the element belonging to the first row and the third column, and a_{22} the element belonging to the second row and the second column.

We now present the basic definitions of a square matrix and an identity matrix.

Definition (Square matrix). An n × n matrix A = [a_{ij}]_{1 ≤ i ≤ n, 1 ≤ j ≤ n} is said to be a square matrix of order n (in this case there are n rows and n columns).

In the particular case of an m × 1 matrix (there is only one column), we are dealing with a column vector a of R^m, that is

a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix}.

Definition (Identity matrix). The identity matrix of order n is the square matrix I_n of order n that is defined as follows:

I_n = \begin{pmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{pmatrix},

so that the elements on the main diagonal of I_n (i.e. the elements a_{ii} with 1 ≤ i ≤ n) are all equal to 1, while all the other elements are equal to 0. So, for example, we have

I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad I_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

It is possible to define the operations of sum of two matrices and product of a matrix and a real number.

Definition (Sum of two matrices). Let two m × n matrices A = [a_{ij}]_{1 ≤ i ≤ m, 1 ≤ j ≤ n} and B = [b_{ij}]_{1 ≤ i ≤ m, 1 ≤ j ≤ n} be given. Then the sum C = A + B of the matrices A and B is the m × n matrix C = [c_{ij}]_{1 ≤ i ≤ m, 1 ≤ j ≤ n} whose generic element c_{ij} is the sum of the corresponding elements a_{ij} and b_{ij} of A and B, respectively (that is, c_{ij} = a_{ij} + b_{ij} with 1 ≤ i ≤ m, 1 ≤ j ≤ n).

8 1.1. Basic definitions 5 Example Consider the following 2 4 matrices ( ) ( A =, B = ). We have that ( ) A + B = = ( Definition (Product of a matrix and a real number). Let λ be a real number and consider an m n matrix A = [a ij ] 1 i m,1 j n. Then the product λa of λ and A is the m n matrix λa = [λa ij ] 1 i m,1 j n. The following example concerns a linear combination of two matrices. Both the sum and the product by a real number are involved. ). Example Consider the following 2 4 matrices ( ) ( A =, B = ). We want to determine the 2 4 matrix defined as 3A 2B. We have that ( ) ( ) A 2B = 3 2 = ( ) ( ) = + = ( ) = Definition (Product of two matrices). Let A = [a ij ] 1 i m,1 j n be an m n matrix and let B = [b ij ] 1 i n,1 j r be an n r matrix (so A has n columns and B has n rows). Then we define the product row by column of A and B as the m r matrix AB = C = [c ij ] 1 i m,1 j r with m rows and r columns whose generic element c ij is obtained by multiplying the i-th row a i of the matrix A and the j-th column b j of the matrix B, that is n c ij = a ih b hj (i = 1,..., m, j = 1,..., r). h=1 Remark The product of two matrices A and B is not defined whenever the number of columns of A is different from the number of rows of B. We present an example concerning the product of two matrices.

9 6 Chapter 1. Some concepts of matrix algebra Example Consider the following matrices ( ) ( A =, B = ). Notice that BA is not defined since B has 3 columns and A has 2 rows. We have that ( ) ( ) AB = = ( ) ( 2) 1 ( 4) = = ( 1) ( 1) ( 2) 2 ( 4) + ( 1) 6 ( ) = Remark It should be noted that for every square matrix A of order n we have that AI n = I n A = A. Remark The product of two matrices is not commutative. Even if for two matrices A and B both the products AB and BA are defined, it is not true in general that AB = BA. In order to illustrate this fact, consider the following two square matrices of order 2: A = ( ), B = ( ). We have that AB = ( ), BA = ( ). 1.2 Determinants We present the definition and the basic properties of the determinants. We consider determinants as tools in order to establish sufficient conditions for the existence of points of (relative) maximum and minimum. This will be done in the following chapter. In practice, we shall limit ourselves to the consideration of determinants of square matrices of order 2 or 3. Nevertheless, the definition concerns the most general case of a square matrix of arbitrary dimension.
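Before turning to determinants, the matrix operations of Section 1.1 are easy to experiment with numerically. The following sketch uses the numpy library; the matrices below are chosen here purely for illustration and are not those of the worked examples above.

import numpy as np

# Two 2 x 4 matrices chosen for illustration.
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, 5.0, -1.0, 2.0]])
B = np.array([[2.0, 0.0, 1.0, -3.0],
              [4.0, 1.0, 0.0, 1.0]])

print(A + B)          # entrywise sum, as in the definition of the sum of two matrices
print(3 * A - 2 * B)  # a linear combination such as 3A - 2B

# Row-by-column product: A is 2 x 4, so it can be multiplied by a 4 x 3 matrix.
C = np.arange(12.0).reshape(4, 3)
print(A @ C)          # the result is a 2 x 3 matrix

# The product is not commutative: for square matrices AB and BA may differ.
S = np.array([[1.0, 2.0], [0.0, 1.0]])
T = np.array([[1.0, 0.0], [3.0, 1.0]])
print(np.allclose(S @ T, T @ S))   # False

# The identity matrix is a neutral element: S I_2 = I_2 S = S.
I2 = np.eye(2)
print(np.allclose(S @ I2, S), np.allclose(I2 @ S, S))   # True True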

Definition (Determinant). Let A be a square matrix of order n, A = [a_{ij}]_{1 ≤ i ≤ n, 1 ≤ j ≤ n}. Then the determinant of A is the real number

det(A) = \begin{vmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{vmatrix} = \sum_{\pi} (-1)^{d(\pi)} a_{1\pi(1)} \cdots a_{n\pi(n)},

where the previous summation is over all permutations (bijective mappings) \pi : \{1, \dots, n\} \to \{1, \dots, n\} and d(\pi) is the degree of the permutation \pi (the number of exchanges needed to give the fundamental permutation (1, 2, \dots, n) starting from the permutation (\pi(1), \dots, \pi(n))).

In the sequel we shall consider in particular determinants of square matrices of order 2 and of order 3. Therefore, we need a fast procedure in order to calculate the determinants in these cases.

Remark (Determinant of a 2 × 2 matrix). Given a square matrix of order 2

A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},

we have that det(A) = a_{11} a_{22} - a_{12} a_{21}, since the permutation (2, 1) requires one exchange.

Remark (Determinant of a 3 × 3 matrix). Given a square matrix of order 3

A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},

we have that

det(A) = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33},

since the permutations (1, 2, 3), (2, 3, 1) and (3, 1, 2) require an even number of exchanges, while the permutations (3, 2, 1), (1, 3, 2) and (2, 1, 3) require an odd number of exchanges.

There is a fast rule which allows us to immediately arrive at the determinant of a 3 × 3 matrix. It is called the "Sarrus rule".

11 8 Chapter 1. Some concepts of matrix algebra Remark (Sarrus rule). Given a square matrix of order 3 A = a 11 a 12 a 13 a 21 a 22 a 23 a 31 a 32 a 33 write the matrix A and the first two columns of A subsequently on the right side, that is a 11 a 12 a 13 a 11 a 12 a 21 a 22 a 23 a 21 a 22 a 31 a 32 a 33 a 31 a 32. Then consider the sum of the products of the elements of the 3 diagonals starting above to the left and ending below to the right (i.e., a 11 a 22 a 33, a 12 a 23 a 31, a 13 a 21 a 32 ) and subtract the sum of the products of the elements of the 3 diagonals starting below to the right and ending above to the left (i.e., a 13 a 22 a 31, a 11 a 23 a 32, a 12 a 21 a 33 ). The resulting number is precisely the determinant of A (i.e., det(a) = a 11 a 22 a 33 +a 12 a 23 a 31 +a 13 a 21 a 32 a 13 a 22 a 31 a 11 a 23 a 32 a 12 a 21 a 33 ). Example (Determinant of some 3 3 matrix). If we consider the square matrix of order 3 A = then we have that det(a) = ( 2) ( 2) + ( 4) 1 3 ( 2) 0 ( 4) 3 ( 2) = = 8. Remark (Determinant of the product λa). It is easily seen that the following property of the determinants holds true :,, (i) for every real number λ and for every square matrix A of order n we have that det (λa) = λ n deta. Definition (Transpose matrix). Let A = [a ij ] 1 i m,1 j n be an m n matrix. Then the transpose of A is the n m matrix A = [a ij ] 1 i n,1 j m whose generic element a ij is defined as follows a ij = a ji 1 i n, 1 j m. It should be noted that the the transpose A of any matrix A is obtained by writing the rows of A as the columns of A or equivalently by writing the columns of A as the rows of A.
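The determinant rules above, together with the property det(λA) = λ^n det(A), can be checked with numpy; a small sketch, with a 3 × 3 matrix chosen here for illustration:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, -2.0],
              [0.0, 4.0, 1.0]])

# Sarrus rule for a 3 x 3 matrix, written out explicitly.
sarrus = (A[0, 0]*A[1, 1]*A[2, 2] + A[0, 1]*A[1, 2]*A[2, 0] + A[0, 2]*A[1, 0]*A[2, 1]
          - A[0, 2]*A[1, 1]*A[2, 0] - A[0, 0]*A[1, 2]*A[2, 1] - A[0, 1]*A[1, 0]*A[2, 2])
print(sarrus, np.linalg.det(A))   # both give the same value

# det(lambda * A) = lambda^n * det(A) for a square matrix of order n.
lam, n = 3.0, A.shape[0]
print(np.isclose(np.linalg.det(lam * A), lam**n * np.linalg.det(A)))   # True

# det(A') = det(A); A is symmetric exactly when A' = A.
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))   # True
print(np.array_equal(A.T, A))                             # False: this A is not symmetric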

12 1.3. The algebraic structure of R n 9 Example Consider the 2 4 matrix ( ) A = Then its transpose A is defined as follows: A = Remark (Determinant of the transpose of a matrix). From the definition of the determinant it is immediate to see that for every square matrix A the determinant of A is equal to the determinant of A (i.e., det(a) = det(a )). Definition (Symmetric matrix). If A = [a ij ] 1 i n,1 j n is a square matrix of order n and we have that A = A, then A is said to be a symmetric matrix. Example Consider the following matrices of order 3: A = 1 0 4, B = We have that A is symmetric while B is not symmetric. 1.3 The algebraic structure of R n In this section we present the basic properties of the space R n with respect to the operations of sum of two elements and product of an element and a real number. As usual, we shall denote by R the set of all real numbers and by R n (n N + ) the n-dimensional Euclidean space consisting of all n-tuples x = (x 1, x 2,..., x n ) of real numbers. We consider on R n the following operations: sum of two elements of R n (+ : R n R n R n ) (x 1, x 2,..., x n ) + (y 1, y 2,..., y n ) = (x 1 + y 1, x 2 + y 2,..., x n + y n );

13 10 Chapter 1. Some concepts of matrix algebra product of a real number and an element of R n ( : R R n R n ) α (x 1, x 2,..., x n ) = (αx 1, αx 2,..., αx n ) The algebraic structure (R n, +, ) (consisting of the underlying space R n together with the operations + and ) is said to be a real vector space. Therefore, we can refer to x R n as a vector of R n. We present the properties according to which (R n, +, ) is a real vector space. Just notice that the definition of (real) vector space is much more general, but goes beyond our purposes. Definition (Vector space). We refer to (R n, +, ) as a real vector space since the following properties are verified: (1) (x + y) + z = x + (y + z) for every x, y, z R n ; (2) There exists a (unique) element 0 = (0,..., 0) such that x+0 = 0+x = x for every x R n ; (3) For every x R n there exists a so called opposite element x = ( x 1,..., x n ) such that x + ( x) = 0; (4) x + y = y + x for every x, y R n ; (5) α(βx) = (αβ)x for every α, β R, for every x R n ; (6) 1x = x for every x R n ; (7) α(x + y) = αx + αy for every α R, for every x, y R n ; (8) (α + β)x = αx + βx for every α, β R, for every x R n. Remark The algebraic structure (R n, +) satisfies the following properties listed above: (1) associative property, (2) existence of a neutral element, (3) existence of an opposite element of any element of R n, (4) commutative property. (R n, +) is said to be a commutative (abelian) group.

Definition (Euclidean distance). Given two vectors x = (x_1, x_2, \dots, x_n) and y = (y_1, y_2, \dots, y_n) of R^n, the Euclidean distance between x and y is the real number defined as follows:

d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}.

Remark (Properties of the Euclidean distance). The Euclidean distance satisfies the following properties:

(0) d(x, y) ≥ 0 for every x, y ∈ R^n;
(1) d(x, y) = 0 if and only if x = y, for every x, y ∈ R^n;
(2) d(x, y) = d(y, x) for every x, y ∈ R^n (symmetric property);
(3) d(x, y) + d(y, z) ≥ d(x, z) for every x, y, z ∈ R^n (triangle inequality).

The pair (R^n, d) is referred to as a metric space (in the sense that there is a metric (i.e., a distance) d : R^n × R^n → R that associates to every pair of vectors of R^n a real number in such a way that the previous properties are verified). Also in this case, the concept of a metric space is much more general, as well as the concept of norm presented in the following definition.

Definition (Euclidean norm). The Euclidean norm of any vector x ∈ R^n is defined to be the real number

\|x\| = d(x, 0) = \sqrt{\sum_{i=1}^{n} x_i^2}.

Remark (Properties of the norm). The (Euclidean) norm satisfies the following properties:

(0) \|x\| ≥ 0 for every x ∈ R^n;
(1) \|x\| = 0 if and only if x = 0;
(2) \|\alpha x\| = |\alpha| \, \|x\| for every \alpha ∈ R, for every x ∈ R^n;
(3) \|x + y\| ≤ \|x\| + \|y\| for every x, y ∈ R^n.
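These properties are easy to probe numerically with numpy; a minimal sketch, with vectors chosen here for illustration:

import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([0.0, 1.0, 1.0])
z = np.array([2.0, 2.0, -1.0])

def dist(u, v):
    """Euclidean distance d(u, v) = sqrt(sum_i (u_i - v_i)^2)."""
    return np.sqrt(np.sum((u - v) ** 2))

print(dist(x, y))                               # a nonnegative number
print(dist(x, y) == dist(y, x))                 # symmetry
print(dist(x, y) + dist(y, z) >= dist(x, z))    # triangle inequality

# The Euclidean norm is the distance from the origin.
print(np.isclose(dist(x, np.zeros(3)), np.linalg.norm(x)))              # True
print(np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))   # True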

1.4 The topological structure of R^n

We now present the basic definition of an open ball centered at a point of R^n. The basic topological concepts will follow.

Definition (Open ball centered at a point). Let x^0 be a vector of R^n and let r be a positive real number. We denote by B_r(x^0) the open ball with center at x^0 and radius r, which is defined to be the set of all vectors (points) of R^n whose distance from x^0 is smaller than r, that is to say

B_r(x^0) = \{x ∈ R^n : d(x, x^0) < r\} = \left\{x ∈ R^n : \sqrt{\sum_{i=1}^{n} (x_i - x_i^0)^2} < r\right\}.

Figure 1.1: Open ball centered at (0, 0) with radius 1.

16 1.4. The topological structure of R n 13 Definition (Accumulation point). A point x 0 R n is a said to be an accumulation point of a set A R n if every open ball with center at x 0 contains infinitely many points of A. Definition (Interior point of a set). A point x 0 R n is said to be an interior point of a set A R n if there exists an open ball centered at x 0 that is contained in A (that is, r R, r > 0 such that B r (x 0 ) A). Definition (Open sets and closed sets). Let A be a subset of R n. Then A is said to be (1) open if every point of A is an interior point of A (that is, for every x 0 A there exists r R, r > 0 such that B r (x 0 ) A); (2) closed if the complement of A, denoted by CA = R n \ A is open (that is, for every x 0 CA there exists r R, r > 0 such that B r (x 0 ) A = ). Example It is easy to check (but here we state without proof) that any open ball B r (x 0 ) is an open set. Further, every closed ball is a closed set. C r (x 0 ) = {x R n : d(x, x 0 ) r} = x Rn : n (x i x 0 i )2 r Remark (properties of the open sets). If we denote by τ the family consisting of all the open subsets of R n, then we have that τ satisfies the following properties: (i) X τ, τ; i=1 (ii) the union of every family of open sets is an open set; (iii) the intersection of any finite family of open sets is an open set. We say that τ is a topology on R n. Definition (Convergent sequence). We say that a sequence {x h } h N + of points of R n converges to a point x 0 R n (that is, x h x 0 ) if for every real number ǫ > 0 there exists a natural number h N + such that d(x h, x 0 ) < ǫ for every h > h (i.e., lim h + d(xh, x 0 ) = 0).

17 14 Chapter 1. Some concepts of matrix algebra Example (Example of a convergent sequence). Consider the sequence of points of R 2 defined as follows: {( 1 h, h )} h N +. We claim that ( 1 h, 1+ 1 h ) (0, 1). Indeed, consider that d(( 1 h, 1+ 1 h ), (0, 1)) = 1 h h = 2 2 h. For any fixed positive real number ǫ we have that 2 2 h < ǫ 2 as soon as h > such that h > 2 ǫ 2 ǫ.. So we only have to consider any natural number h N+ Definition (Bounded set). A subset A of R n is said to be bounded if it is contained in an open ball B r (0) centered at 0 (i.e., if there exists r R, r > 0 such that A B r (0)). Definition (compact set). A subset A of R n is said to be compact if it is closed and bounded.
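The convergence in the example above can also be observed numerically: the distance from (1/h, 1 + 1/h) to (0, 1) equals √2/h and shrinks to zero as h grows. A short sketch using numpy:

import numpy as np

limit = np.array([0.0, 1.0])
for h in [1, 10, 100, 1000]:
    point = np.array([1.0 / h, 1.0 + 1.0 / h])
    d = np.linalg.norm(point - limit)            # Euclidean distance to the limit
    print(h, d, np.isclose(d, np.sqrt(2) / h))   # the distance is exactly sqrt(2)/h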

CHAPTER 2

Real functions of n real variables

2.1 General concepts

We shall denote by f : (R^n ⊇) D → R a real function of n real variables that is defined on the domain D. So f is a law (or prescription) that associates to every point (vector) x ∈ D a well determined real number f(x), which is called the value of f at the point x. From now on (unless otherwise specified) a vector x ∈ R^n is thought of as a column vector

x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.

Example (Linear functions). A function f : R^n → R is said to be a linear function if f satisfies the following two conditions:

(i) f(x + y) = f(x) + f(y) for all x, y ∈ R^n;
(ii) f(\alpha x) = \alpha f(x) for all x ∈ R^n and for all \alpha ∈ R.

It can be shown that a function f is linear if and only if there exists a (column) vector a ∈ R^n such that, for every x ∈ R^n,

f(x) = a' x = (a_1, \dots, a_n) x = \sum_{i=1}^{n} a_i x_i.
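The two defining conditions of linearity are immediate to check for a function of the form f(x) = a'x; a minimal numpy sketch, with the vector a chosen here for illustration:

import numpy as np

a = np.array([2.0, -1.0, 0.5])    # the (column) vector representing f

def f(x):
    """Linear function f(x) = a'x = sum_i a_i x_i."""
    return a @ x

x = np.array([1.0, 4.0, -2.0])
y = np.array([3.0, 0.0, 1.0])
alpha = 2.5

print(np.isclose(f(x + y), f(x) + f(y)))       # additivity, condition (i)
print(np.isclose(f(alpha * x), alpha * f(x)))  # homogeneity, condition (ii)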

19 16 Chapter 2. Real functions of n real variables Definition (Graph). If f : (R n )D R is a real function of n real variables then the graph of f is the subset of R n+1 that is defined as follows: G f = {(x 1, x 2,..., x n, f(x 1, x 2,..., x n )) : x = (x 1, x 2,..., x n ) D}. It should be noted that in the particular case when n = 2, the graph of a function of two real variables appears as a surface in the three-dimensional space Figure 2.1 Graph of f(x, y) = x 2 + y 2. Definition (Level curve). If f : (R n )D R is a real function of n real variables and c is any real number, then we refer to the level curve (corresponding to c) as the subset D f c of D defined as follows: D f c = {x D : f(x) = c}. Example (f(x, y) = x 2 +y 2 ). Consider the function (positive definite quadratic form) f : R 2 R, f(x, y) = x 2 +y 2. Then for every positive real number c the c-level curve D f c = {x D : x2 + y 2 = c} is a circumference centered at (0, 0) with radius c. We now present, for the sake of completeness, the basic definitions of continuity of a function at a point and finite limit at a point.

Figure 2.2: Level curves of f(x, y) = x^2 + y^2.

Definition (Continuity at a point). Let f : (R^n ⊇) D → R be a real function of n real variables and consider a point x^0 ∈ D. Then f is said to be continuous at x^0 if f verifies the following condition: for every real number ε > 0 there exists a real number δ > 0 such that |f(x) - f(x^0)| < ε for every x ∈ D such that d(x, x^0) < δ.

Definition (Continuity on a domain). A function f : (R^n ⊇) D → R is said to be continuous (on D) if f is continuous at every point x^0 ∈ D.

Definition (Finite limit at a point). Let f : (R^n ⊇) D → R be a real function of n real variables and let x^0 be an accumulation point of its domain D. We say that f has finite limit l (∈ R) as x approaches x^0 (\lim_{x \to x^0} f(x) = l) if for every real number ε > 0 there exists a real number δ > 0 such that |f(x) - l| < ε for every x ∈ D with 0 < d(x, x^0) < δ.

Figure 2.3: Level curves of f(x, y) = x^2 + y.

2.2 Quadratic forms

In this section we introduce quadratic forms and study their representations and sign by using the considerations of the previous chapter.

Definition (Quadratic form). A quadratic form q on R^n is a function q : R^n → R of the form

q(x_1, \dots, x_n) = \sum_{j=1}^{n} \sum_{i=1}^{j} a_{ij} x_i x_j.

22 2.2. Quadratic forms 19 Definition (Definite and semidefinite quadratic forms). A quadratic form q on R n is said to be (i) positive definite (negative definite) if q(x) > 0 (q(x) < 0) for every x 0 (x R n ); (ii) positive semidefinite (negative semidefinite) if q(x) 0 (q(x) 0) for every x R n ; (iii) indefinite of sign if q(x) > 0 for some x R n and also q(x) < 0 for some x R n. Remark (Associated symmetric matrices). A quadratic form q(x 1,..., x x ) = n j=1 j i=1 a ijx i x j can be represented by means of a symmetric matrix A (of order n) in such a way that q(x) = x Ax for every x R n. To this aim it suffices to consider the following symmetric matrix: A = [a ij ] 1 i n,j j n = 1 a 11 2 a a 1 12 a a 1n 2 a 2n 1 2 a 1n a nn We say that A is the symmetric matrix associated to the quadratic form q. Example (Quadratic form on R 2 ). Consider the quadratic form q on R 2 defined as q(x, y) = x 2 2xy + y 2. If we define x = ( x y then we have that q(x) = x Ax. ), A = ( Example (Quadratic form on R 3 ). Consider the quadratic form on R 3 defined as q(x, y, z) = x 2 + 2y 2 + 3z 2 + 4xy 6xz + 8yz. If we define x = x y z then we have that q(x) = x Ax., A = ) We now introduce the different kinds of definition of a symmetric matrix by using the corresponding notions of definitions of a quadratic form.,.
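Before doing so, note that the representation q(x) = x'Ax of the R^3 example above can be verified numerically with numpy; a small sketch, where A is the associated symmetric matrix built as described in the remark (diagonal entries equal to the pure coefficients, off-diagonal entries equal to one half of the mixed coefficients):

import numpy as np

# Symmetric matrix associated to q(x, y, z) = x^2 + 2y^2 + 3z^2 + 4xy - 6xz + 8yz.
A = np.array([[ 1.0,  2.0, -3.0],
              [ 2.0,  2.0,  4.0],
              [-3.0,  4.0,  3.0]])

def q(v):
    """The quadratic form written directly from its coefficients."""
    x, y, z = v
    return x**2 + 2*y**2 + 3*z**2 + 4*x*y - 6*x*z + 8*y*z

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.normal(size=3)
    assert np.isclose(q(v), v @ A @ v)   # q(x) = x'Ax at randomly chosen points
print("representation q(x) = x'Ax verified")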

23 20 Chapter 2. Real functions of n real variables Definition (Positive definite symmetric matrix). The symmetric matrix A of order n is said to be (i) positive definite (negative definite) if the quadratic form q(x) = x Ax is positive definite (negative definite); (ii) positive semidefinite (negative semidefinite) if the quadratic form q(x) = x Ax is positive semidefinite (negative semidefinite);; (iii) indefinite of sign if the quadratic form q(x) = x Ax, q : R n R is indefinite of sign. Example (Matrices definite of sign). (i) The symmetric matrix A = ( ) is indefinite of sign since q(x 1, x 2 ) = x Ax = x 2 1 x2 2 (q(1, 0) = 1 > 0) or negative q(0, 1) = 1 < 0). may be positive (ii) The symmetric matrix A = ( ) is positive definite since q(x 1, x 2 ) = x Ax = x x2 2 all x 0. is greater than zero for (iii) The symmetric matrix A = ( ) is positive semidefinite since q(x 1, x 2 ) = x Ax = x x 1x 2 + x 2 2 nonnegative but it it is equal to zero whenever x 1 = x 2. is always The following concept of principal minor is very useful in order too analyze the definition of a quadratic form.

Definition (Principal minors). Let A = [a_{ij}]_{1 ≤ i ≤ n, 1 ≤ j ≤ n} be a square matrix of order n. Define, for every 1 ≤ h ≤ n,

A_h = [a_{ij}]_{1 ≤ i ≤ h, 1 ≤ j ≤ h} = \begin{pmatrix} a_{11} & \dots & a_{1h} \\ \vdots & & \vdots \\ a_{h1} & \dots & a_{hh} \end{pmatrix}.

The principal minor of order h is defined to be the determinant of A_h.

We do not prove the following theorem, which contains a characterization of the sign (definiteness) of a symmetric matrix, and therefore of a quadratic form, since we can always refer to the associated symmetric matrix.

Theorem (Conditions for definiteness). Let A be a symmetric matrix of order n. Then we have that:

(i) A is positive definite if and only if all the n principal minors are positive;

(ii) A is negative definite if and only if its n principal minors alternate in sign with, in particular, det(A_1) < 0, det(A_2) > 0, det(A_3) < 0, ..., and so on;

(iii) if at least one principal minor of A is different from 0 but its sign doesn't agree with any of the previous cases, then A is indefinite of sign;

(iv) A is positive semidefinite if and only if all its n principal minors are nonnegative and at least one of them is equal to zero;

(v) A is negative semidefinite if and only if all its n principal minors of odd order are smaller than or equal to zero, all its principal minors of even order are greater than or equal to zero, and at least one of them is equal to zero.
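The criterion is straightforward to apply numerically: compute det(A_1), ..., det(A_n) and inspect their signs. A sketch with numpy, handling only the two definite cases of the theorem and leaving the semidefinite/indefinite cases to a direct inspection; the test matrix is chosen here for illustration:

import numpy as np

def leading_principal_minors(A):
    """det(A_1), ..., det(A_n) for a square matrix A of order n."""
    n = A.shape[0]
    return [np.linalg.det(A[:h, :h]) for h in range(1, n + 1)]

def classify(A, tol=1e-12):
    """Apply the sign conditions (i) and (ii) of the theorem above."""
    m = leading_principal_minors(A)
    if all(d > tol for d in m):
        return "positive definite"
    if all((d < -tol if h % 2 == 1 else d > tol) for h, d in enumerate(m, start=1)):
        return "negative definite"
    return "not definite (semidefinite or indefinite: check the remaining cases)"

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(leading_principal_minors(A))   # [2.0, 3.0, 4.0]
print(classify(A))                   # positive definite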

25 22 Chapter 2. Real functions of n real variables Example (Positive definite matrices). (i) The symmetric matrix A = ( is positive definite since det(a 1 ) = 2 > 0 and det(a) = 2 1 = 1 > 0. ) (ii) The symmetric matrix A = is indefinite of sign since det(a 1 ) = 1 > 0 e det(a) = 7 < 0. (iii) The symmetric matrix A = is negative semidefinite since det(a 1 ) = 1 < 0, det(a 2 ) = 0 e det(a) = Differentiability and optimization We introduce the basic concepts related to the differential calculus in n variables. Definition (Partial derivative). Let f : (R n )D R be a real function of n real variables and let x 0 D be an accumulation point of D. If the limit f (x 0 f(x 0 1 ) = lim,..., x0 i 1, x0 i + h, x0 i+1,..., x0 n ) f(x0 ) x i h 0 h exists and it is finite then such a limit is said to be the partial derivative of f with respect to x i at the point x 0.

26 2.3. Differentiability and optimization 23 Definition (Gradient vector). Let f : (R n )D R be a real function of n real variables and let x 0 D be an accumulation point of D. If the partial derivatives f x i (x 0 ) exist for all i {1,..., n}, then we define the gradient vector D 1 f(x 0 ) of f at the point x 0 as follows: D 1 f(x 0 ) = f x 1 (x 0 )... f x n (x 0 ). Theorem (Derivative of compound functions). Let z = f(x 1,..., x n ) be a real function of n real variables such that the partial derivatives f x i (x) exist for every x belonging to the domain of f and for every i {1,..., n}. Consider m real functions of n real variables x i = g i (t 1,..., t m ) (i = 1,..., n) such that the partial derivatives g i t j (t) exist for every t belonging to the common domain of the functions g i. Then, for every j {1,..., m}, we have that z = z x 1 + z x z x n. t j x 1 t j x 2 t j x n t j Example (Derivative of a compound function). Consider the function z = f(x, y) = e x 2y, x = t t 2, y = 2t 2. From Theorem 2.3.3, we have that z = z x + z y = 2t 1 e x 2y = 2t 1 e t2 1 3t 2, dt 1 x t 1 y t 1 z = z dt 2 x x + z t 2 y y t 2 = e x 2y 4e x 2y = 3e t2 1 3t 2. It is clear that, equivalently, we could have considered directly the partial derivatives with respect to t 1 and t 2 of the function f(t 1, t 2 ) = e t2 1 3t 2. Definition (Continuous differentiability). A function f : (R n )D R is said to be continuously differentiable or C 1 on D if the partial derivative f x i (x) exists and it is continuous at every point x D for all i {1,..., n}.

27 24 Chapter 2. Real functions of n real variables Definition (Partial derivatives of higher order). If a function f : (R n )D R is C 1 on D and the partial derivative with respect to x j (j {1,..., n}) of f x i (x) exists for every x D, then we shall write f x j ( ) f (x) = 2 f (x). x i x j x i Further, if 2 f x j x i (x) is continuous at every point x D and for every i, j {1,..., n}, then we shall say that f is C 2 or twice continuously differentiable on D. Example Consider the real function of two real variables defined as Then we have that f x (x, y) = log(x y2 ) + f(x, y) = xlog(x y 2 ). x x y 2, f y (x, y) = 2xy x y 2, 2 f 1 (x, y) = x2 x y 2 y 2 (x y 2 ) 2, 2 f x + y2 (x, y) = 2x y2 (x y 2 ) 2, 2 f x y (x, y) = 2y 3 (x y 2 ) 2 = 2 f (x, y). y x Definition (Hessian matrix). Let f : (R n )D R be a real function of n real variables. If f is C 2 then at every point x 0 D we can define the hessian matrix of f D 2 f(x 0 ) = 2 f x f (x 0 ) 2 f 2 f x n x 1 (x 0 ) x 2 x 1 (x 0 )... 2 f (x 0 ) x x 1 x 2 (x 0 ) f x 1 x n 1 (x 0 ) f x 1 x n (x 0 ).... Hence we have that D 2 f(x 0 ) = [a ij ] 1 i n,1 j n with a ij = 2 f x j x i (x 0 ) (i, j {1,..., n}). 2 f x 2 n (x 0 ).
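Symbolic computation packages reproduce these derivatives directly; a sympy sketch for the example f(x, y) = x log(x - y^2) above, checking in particular that the mixed partial derivatives coincide and agree with the expression computed in the example:

import sympy as sp

x, y = sp.symbols('x y')
f = x * sp.log(x - y**2)

# First-order partial derivatives (the gradient vector).
grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
print(sp.simplify(grad[1]))                          # -2*x*y/(x - y**2)

# Second-order partial derivatives (the hessian matrix).
H = sp.hessian(f, (x, y))
print(sp.simplify(H[0, 1] - H[1, 0]))                # 0: mixed partials coincide
print(sp.simplify(H[0, 1] - 2*y**3/(x - y**2)**2))   # 0: matches the example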

28 2.3. Differentiability and optimization 25 Remark (symmetric property of the hessian matrix). If f : (R n )D R is C 2 then, according to Schwartz Theorem, we have that for every x D and for every pair (i, j) of indexes the following property is verified: 2 f x j x i (x) = 2 f x i x j (x). Therefore the hessian matrix D 2 f(x) is symmetric for every x D. Definition (points of minimum and maximum). Let f : (R n )D R be a real function of n real variables. Then a point x 0 D is said to be (i) a point of maximum (minimum) if f(x) f(x 0 ) (f(x) f(x 0 )) for every x D; (ii) a point of strict maximum (minimum) if f(x) < f(x 0 ) (f(x) > f(x 0 )) for every x D, x x 0 ; (iii) a point of relative maximum (minimum) if there exists an open ball B r (x 0 ) such that f(x) f(x 0 ) (f(x) f(x 0 )) for every x B r (x 0 ) D; (iv) a point of strict relative maximum (minimum) if there exists an open ball B r (x 0 ) such that f(x) < f(x 0 ) (f(x) > f(x 0 )) for every x B r (x 0 ) D, x x 0. The famous Weierstrass theorem guarantees the existence of a maximum and a minimum of a continuous function on a compact set. We present the statement without the proof. Theorem (Weierstrass Theorem). Let f : (R n )D R be a real function of n real variables. If f is continuous and D is compact, then there exists a point of minimum and a point of maximum for f.. Theorem (Necessary condition for a relative minimum or maximum). If f : (R n )D R is C 1 and x 0 D is a point of relative minimum or maximum that is in addition an interior point of D then it must be f x i (x 0 ) = 0 for i = 1,..., n, or equivalently D 1 f(x 0 ) = 0.
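In practice the necessary condition D^1 f(x^0) = 0 is a system of n equations which can often be solved symbolically; a sketch with sympy, for a function chosen here purely as an illustration:

import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x + y**2                       # an illustrative C^2 function on R^2

grad = [sp.diff(f, v) for v in (x, y)]      # (3x^2 - 3, 2y)
stationary = sp.solve(grad, (x, y), dict=True)
print(stationary)                           # [{x: -1, y: 0}, {x: 1, y: 0}]
# Each solution is a stationary point, i.e. a candidate relative maximum,
# relative minimum or saddle point, to be classified with the hessian matrix.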

29 26 Chapter 2. Real functions of n real variables Definition (Stationary point). Let f : (R n )D R be a real function of n real variables. We say that x 0 D is a stationary point of f if or equivalently D 1 f(x 0 ) = 0. f x i (x 0 ) = 0 for i = 1,..., n, In order to present sufficient conditions for the existence of relative maxima or minima, we need the Taylor s formula of order 2. Theorem (Taylor s formula of order 2). Let f : (R n )D R be a real function of n real variables which is defined on an open set D. If f is C 2 on D and if x 0 D then there exists a real function of n real variables R 2 (h; x 0 ) such that for every point x 0 + h D such that the segment with extremes x 0 and x 0 + h is contained in D it holds that f(x 0 + h) = f(x 0 ) + [D 1 f(x 0 )] h h D 2 f(x 0 )h + R 2 (h; x 0 ), where the function R 2 (h; x 0 ) is such that R 2 (h; x 0 ) lim h 0 h 2 = 0. We are now ready to prove the following theorem. Theorem (Sufficient condition for relative maximum or minimum). Let f : (R n )D R be a real function of n real variables that is defined on an open set D. If f is C 2 and x 0 D is a stationary point for f then x 0 is (i) a point of strict relative maximum if the hessian matrix D 2 f(x 0 ) is negative definite; (ii) a point of strict relative minimum if the hessian matrix D 2 f(x 0 ) is positive definite. If the hessian matrix D 2 f(x 0 ) is indefinite of sign, then x 0 is neither a point of minimum nor a point of maximum for f. Proof. We shall only prove that statement (ii) holds. The proof of statement (i) is perfectly analogous. Let x 0 D be a stationary point for a real-valued function f which is C 2 on an open set D R n. Assume that the hessian matrix D 2 f(x 0 ) is positive definite. From the Taylor s formula (see Theorem ), since x 0 D is a stationary point of f (hence, D 1 (x 0 ) = 0) we have

30 2.3. Differentiability and optimization 27 that, for every h R n such that the segment with extremes x 0 and x 0 + h is contained in D, f(x 0 + h) = f(x 0 ) h D 2 f(x 0 )h + R 2 (h; x 0 ), and, dividing by h 2, h f(x 0 + h) f(x 0 ) h 2 = 1 2 h D2 f(x 0 h ) h + R 2(h; x 0 ) h 2. The quadratic form q(h) = h D 2 f(x 0 )h is continuous on R n and therefore, from the Weierstrass theorem , attains a minimum value a on the unit circle {v R n : v = 1}, which is a closed and bounded set. On the other hand, since the hessian matrix D 2 f(x 0 ) (and therefore the quadratic form q(h) = h D 2 f(x 0 )h) is positive definite, such a minimum value a is greater than 0. Hence, we have that h 0 < a 2 < 1 2 h D2 f(x 0 h ) h. R 2 (h; x 0 ) Since lim h 0 h 2 = 0, there exists a real number r > 0 such that, if 0 < h < r, then a 4 < R 2(h; x 0 ) h 2 < a 4. Therefore, as soon as 0 < h < r, we have that h f(x 0 + h) f(x 0 ) h 2 = 1 2 h D2 f(x 0 h ) h + R 2(h; x 0 ) h 2 > a 2 a 4 > 0, which implies that x 0 is a point of strict relative minimum for f. Definition (Saddle point). Let f : (R n )D R be a real function of n real variables. Then a point x 0 D is said to be a saddle point of f provided that x 0 is a stationary point and the hessian matrix D 2 f(x 0 ) is indefinite of sign. In this case every open ball centered at x 0 contains points x D such that f(x) < f(x 0 ) and points x D such that f(x) > f(x 0 ). In the following remark we consider the sufficient conditions for the existence of relative maxima or minima in the case of two variables.

31 28 Chapter 2. Real functions of n real variables Figure 2.4 Graph of f(x, y) = x 2 y 2, saddle point x 0 = (0, 0). Remark (Sufficient conditions in the case of two variables). Let f be a real function of two real variables and assume that its domain D is open. From the previous theorem, if f is C 2 and x 0 D is a stationary point for f then x 0 is (i) a point of strict relative maximum if 2 f x 2 (x 0 ) < 0, 2 f x 2 (x 0 ) 2 f y 2 (x 0 ) (ii) a point of strict relative minimum if 2 f x 2 (x 0 ) > 0, 2 f x 2 (x 0 ) 2 f y 2 (x 0 ) [ 2 f x y (x0 )] 2 > 0; [ 2 f x y (x0 )] 2 > 0; (iii) a saddle point for f if 2 f x 2 (x 0 ) 2 f y 2 (x 0 ) [ 2 f x y (x0 )] 2 < 0. Nothing can be said in general if det(d 2 f(x 0 )) = 0. Example (Extremal points in case of 3 variables). Consider the real function of 3 real variables defined as We have that f x (x, y, z) = 4x(x2 +y 2 ) y, f(x, y, z) = (x 2 + y 2 ) 2 + z 2 xy. f y (x, y, z) = 4y(x2 +y 2 ) x, f (x, y, z) = 2z. z

32 2.3. Differentiability and optimization 29 In order to determine the stationary points of f, consider the system 4x(x 2 + y 2 ) y = 0 4y(x 2 + y 2 ) x = 0. z = 0 The previous system is equivalent to the following one: (4x 2 + 4y 2 + 1)(x y) = 0 4y(x 2 + y 2 ) x = 0, z = 0 that in turn is equivalent to the following pair of systems: 4x 2 + 4y = 0 4y(x 2 + y 2 ) x = 0 z = 0, x y = 0 4y(x 2 + y 2 ) x = 0 z = 0. The first one has no solutions in in R 3, while the second one, that can be written as follows: x(8x 2 1) = 0 y = x, z = 0 has solutions (0, 0, 0), ( 2 of the second order are 4, 2 2 f x 2 (x, y, z) = 4(3x2 + y 2 ), 2 f x y (x, y, z) = 2 f (x, y, z) = 8xy 1, y x We have that 2 f x (x, y, z) 2 (x, y, z) Since 2 f y x 4, 0) and ( 2 4, 2 4, 0). The partial derivatives 2 f y 2 (x, y, z) = 4(x2 + 3y 2 ), 2 f y z (x, y, z) = 2 f (x, y, z) = 0. z y 2 f x y (x, y, z) 2 f y 2 (x, y, z) D 2 (f(x, y, z)) = 2 f (x, y, z) = 2, z2 2 f x z (x, y, z) = 2 f (x, y, z) = 0, z x = 16(3x2 + y 2 )(x 2 + 3y 2 ) (8xy 1) 2. 4(3x 2 + y 2 ) 8xy 1 0 8xy 1 4(x 2 + 3y 2 ) we have that det(d 2 f(x, y, z) = 2[16(3x 2 + y 2 )(x 2 + 3y 2 ) (8xy 1) 2 ]). Hence, 2 f x 2 (0, 0, 0) = 0 = det(a 1 ), det(a 2 ) = 1, det(d 2 f(0, 0, 0)) = 2.,

The point (0, 0, 0) is a saddle point (therefore it is not an extremal point). If we consider the points (\sqrt{2}/4, \sqrt{2}/4, 0) and (-\sqrt{2}/4, -\sqrt{2}/4, 0), we obtain that D^2 f(x, y, z) is positive definite at those points, and therefore these are points of relative minimum.

2.4 Convex and concave functions

We present some arguments concerning convexity.

Definition (Convex set). A subset D of R^n is said to be convex if for all x, y ∈ D and for every real number t ∈ [0, 1] we have that tx + (1 - t)y ∈ D.

If t ∈ [0, 1], then tx + (1 - t)y is said to be a convex linear combination of the points x and y. In this case, tx + (1 - t)y is a point of the segment whose extremes are x and y.

Definition (Convex and concave functions). A function f : (R^n ⊇) D → R defined on a convex set D ⊆ R^n is said to be

(i) convex (concave) if f(tx + (1 - t)y) ≤ tf(x) + (1 - t)f(y) (respectively, f(tx + (1 - t)y) ≥ tf(x) + (1 - t)f(y)) for all x, y ∈ D and for every real number 0 < t < 1;

(ii) strictly convex (strictly concave) if f(tx + (1 - t)y) < tf(x) + (1 - t)f(y) (respectively, f(tx + (1 - t)y) > tf(x) + (1 - t)f(y)) for all x, y ∈ D with x ≠ y and for every real number 0 < t < 1.

The convexity/concavity of a quadratic form is simple to analyze by means of the following proposition, which is not proven here.

Proposition (Convex quadratic forms). A quadratic form q on R^n is convex (concave) if and only if q is positive semidefinite (negative semidefinite). Further, q is strictly convex (strictly concave) if q is positive definite (negative definite).
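The defining inequality of convexity can at least be probed numerically on sample points; a rough sketch with numpy (the function and the points are chosen here for illustration, and a finite check is of course not a proof):

import numpy as np

def f(v):
    # A convex function on R^2 chosen for illustration: a positive definite
    # quadratic form plus a convex exponential term.
    x, y = v
    return x**2 + 2*y**2 + np.exp(x)

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    x = rng.normal(size=2)
    y = rng.normal(size=2)
    t = rng.uniform()
    lhs = f(t * x + (1 - t) * y)
    rhs = t * f(x) + (1 - t) * f(y)
    ok = ok and lhs <= rhs + 1e-12
print(ok)   # True: no violation of f(tx + (1-t)y) <= t f(x) + (1-t) f(y) was found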

34 2.4. Convex and concave functions Figure 2.5 The function f(x, y) = sin(x + y) is neither convex nor concave. Example (Strictly convex quadratic form). Consider the following quadratic form on R 2 : q(x, y) = x 2 4xy + 8y 2. Then we have that q(x, y) = x Ax with x = (x, y) and A = ( Since D 2 f(x) A is positive definite, we have that q is strictly convex. We present the proof of the following nice result. Theorem (LocalRelative minimum and convexity). Let f : (R n )D R be a convex (concave) function. Then every point x 0 D that is a point of relative minimum (maximum) is also a point of minimum (maximum) for f on D. Proof. Let f be a convex function on a set D R n and let x 0 be a point of relative minimum for f. Then, from the definition of a point of relative minimum, there exists an open ball B r (x 0 ) such that f(x) f(x 0 ) for every x B r (x 0 ) D. Let x D be a generic point of the domain of f. Consider the convex linear combination z t = tx 0 + (1 t)x (0 < t < 1). Since f is ).

35 32 Chapter 2. Real functions of n real variables convex, we have that f(z t ) = f(tx 0 +(1 t)x) tf(x 0 )+(1 t)f(x) for every real number t such that 0 < t < 1. On the other hand, if t sufficiently near 1 then z t belongs to B r (x 0 ) and therefore f(x 0 ) f(z t ) tf(x 0 )+(1 t)f(x) implies that f(x 0 ) tf(x 0 )+(1 t)f(x). This last inequality can be written as (1 t)f(x 0 ) (1 t)f(x), which is equivalent to f(x 0 ) f(x). Hence, since such a condition is verified for every x D, we have that x 0 is a point of minimum for f. Theorem (Condition for convexity). Let f : (R n )D R be C 2 on an open and convex set D. Then f is convex (concave) if and only if the hessian matrix D 2 f(x) is positive (negative) semidefinite for all x D. If the hessian matrix D 2 f(x) is positive (negative) definite for all x D, then f is strictly convex (concave). Example (Convex function). Consider the function f(x, y) = e x y on R 2. We have that ( ) D 2 e x y e f(x) = x y. e x y e x y Since D 2 f(x) is positive semidefinite, we have that f is convex. Theorem (Convexity and extremal points). Let f : (R n )D R be C 1 on an open and convex set D. If f is convex (concave) and x 0 is a stationary point for f, then x 0 is a point of relative minimum (maximum). If in addition f is strictly convex (concave), then the stationary point x 0 is the unique point of strict minimum (maximum). 2.5 Constrained optimization with equality constraints We now present the some essential material concerning the constrained optimization with equality constraints. Consider m + 1 real functions f, g 1,..., g m : (R n )D R, with m < n. In the [constrained optimization problem with equality constraints we look for the points of minimum and maximum of the function f subject to the constraints g i (x 1,..., x n ) = 0 with i = 1,..., m. Since it is clear that min f(x) = max ( f(x)), we can limit ourselves to the consideration of the following problem:

max f(x_1, \dots, x_n)
sub g_1(x_1, \dots, x_n) = 0
    \vdots
    g_m(x_1, \dots, x_n) = 0          (2.5.1)

Since we want to reduce ourselves to the consideration of a problem of unconstrained optimization, let us introduce the Lagrangean function (or simply the Lagrangean) L : D × R^m → R,

L(x_1, \dots, x_n, \lambda_1, \dots, \lambda_m) = f(x_1, \dots, x_n) - \sum_{i=1}^{m} \lambda_i g_i(x_1, \dots, x_n),

where the m real variables \lambda_1, \dots, \lambda_m are said to be the Lagrange multipliers. Let us now define the set

D_0 = \bigcap_{i=1}^{m} \{(x_1, \dots, x_n) ∈ R^n : g_i(x_1, \dots, x_n) = 0\},

and let us assume that the functions f, g_1, \dots, g_m are C^1 on D. If a point (x^0, \lambda^0) ∈ D_0 × R^m is a stationary point of the Lagrangean L, then x^0 is a candidate to be a point of maximum for f constrained to g_i(x_1, \dots, x_n) = 0 (i = 1, \dots, m). A possible justification of such an assertion can be found in the following theorem, which immediately follows the definition of a saddle point of the Lagrangean. Let us first notice that, with the following definitions:

\lambda = \begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_m \end{pmatrix}, \qquad g(x) = \begin{pmatrix} g_1(x) \\ \vdots \\ g_m(x) \end{pmatrix},

the previous problem of constrained maximum with equality constraints can be formulated as follows:

max f(x), sub g(x) = 0.          (2.5.2)

Definition (Saddle point of the Lagrangean). If we consider the constrained maximization problem with equality constraints (2.5.2), a point (x^0, \lambda^0) ∈ D × R^m is said to be a saddle point of the Lagrangean L(x, \lambda) = f(x) - \lambda' g(x) if

L(x^0, \lambda^0) = \max_x \min_\lambda L(x, \lambda).

37 34 Chapter 2. Real functions of n real variables Theorem (saddle points and constrained maxima). Consider the constrained maximization problem with equality constraints If (x 0, λ 0 ) is a saddle point of the Lagrangean L(x, λ), then x 0 is a solution of the problem Proof. If (x 0, λ 0 ) is a saddle point of the Lagrangean L(x, λ), then for every λ R m and for every vector x D we have that L(x, λ 0 ) L(x 0, λ 0 ) L(x 0, λ). The last inequality is equivalent to the requirement according to which, for every λ R m, [λ λ 0 ] g(x 0 ) 0, that in turn implies that g(x 0 ) = 0 (that is, x 0 must satisfy the equality constraints). Then the first inequality becomes f(x) λ 0 g(x 0 ) f(x 0 ), which is equivalent to f(x) f(x 0 ) since x satisfies the equality constraints. Therefore x 0 is actually a point of (absolute) maximum as soon we require that the constraints g(x) = 0 are satisfied. Example (extremal points with equality constraints). Consider the following real function of two real variables: f(x, y) = x 2 + y2 2. We want to determine the maximum and the minimum of f conditional to the equality constraints x 2 + y 2 6y = 0. The Lagrangean is L(x, y, λ) = x 2 + y2 2 λ(x2 + y 2 6y). The system of the partial derivatives with respect to the variables x, y e λ equal to zero is x 2 + y 2 6y = 0 2x 2λx = 0 x(1 λ) = 0. y 2λy + 6λ = 0 From the second equation we get x = 0 or else λ = 1. If λ = 1, then we have that x = 0 and y = 6. If λ 1, then we must have x = 0 and y = 0 or y = 6. If y = 0, then we have that λ = 0 and therefore the solutions are x = 0, y = 0 e λ = 0. If y = 6, then it must be λ = 1 and we arrive at a contradiction. Finally, since f(0, 6) = 18 and f(0, 0) = 0, we have that the points (0, 6) e (0, 0) are the points of maximum and respectively minimum given the contraints.
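The stationary points of the Lagrangean in the example above can also be obtained symbolically; a sympy sketch that reproduces the two candidate points:

import sympy as sp

x, y, lam = sp.symbols('x y lambda_', real=True)

f = x**2 + y**2 / 2
g = x**2 + y**2 - 6*y                      # equality constraint g(x, y) = 0
L = f - lam * g                            # the Lagrangean

equations = [sp.diff(L, v) for v in (x, y, lam)]
solutions = sp.solve(equations, (x, y, lam), dict=True)
print(solutions)
# Candidates: (x, y) = (0, 0) with lambda = 0 and (x, y) = (0, 6) with lambda = 1;
# comparing f(0, 0) = 0 and f(0, 6) = 18 identifies the constrained minimum and maximum.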

38 2.6. Exercises Exercises Quadratic forms Study the definition of the following quadratic forms on R n : 1. q(x, y, z) = x 2 + y 2 + z 2 2xy + 2yz; 2. q(x, y, z) = x 2 + 2y 2 + z 2 + 2xy 2xz; 3. q(x, y, z) = x 2 + y 2 + z 2 2xz; 4. q(x, y, z) = x 2 + 4y 2 + 2z 2 4xy; 5. q(x, y, z) = x 2 + y 2 + z 2 4xy 4xz; 6. q(x, y, z) = x 2 + 4y 2 + 2z 2 4xy: 7. q(x, y, z) = x 2 + 3y 2 z 2 2xy + 2yz; 8. q(x, y, z) = x 2 + 5y 2 + z 2 2xy 4yz; 9. q(x, y, z) = x 2 + 2y 2 + z 2 2xy + 2xz; 10. q(x, y, z) = x 2 + y 2 + 2z 2 xy xz. Answer the following multiple choice questions concerning a quadratic form q(x) on R n with an associated symmetric matrix A: 11. (a) q(x) is positive definite if there exists a positive minor of A; (b) q(x) is negative definite if det(a 1 ) < 0 and det(a 2 ) < 0; (c) indefinite of sign if det(a 1 ) < 0 and det(a 2 ) < 0; (d) none of the previous assertions is true. 12. (a) q(x) is positive semidefinite if there exists a nonnegative minor of A; (b) q(x) is negative semidefinite if there exists a non positive minor of A; (c) q(x) is indefinite of sign if det(a 1 ) 0 and det(a 2 ) < 0; (d) none of the previous assertions is true.

39 36 Chapter 2. Real functions of n real variables 13. (a) q(x) is negative semidefinite if all the principal minors of A are nonnegative; (b) q(x) is positive definite if det(a 2 ) < 0; (c) indefinite of sign if det(a 1 ) < 0; (d) none of the previous assertions is true. 14. (a) q(x) is positive definite if all the principal minors of A are nonnegative; (b) q(x) is negative definite if all the principal minors of A are nonnegative; (c) indefinite of sign if det(a 2 ) < 0; (d) none of the previous assertions is true. 15. (a) q(x) is positive definite if all the principal minors of A are nonnegative; (b) q(x) is indefinite of sign if det(a 2 ) < 0; (c) q(x) is negative definite if det(a 1 ) < 0; (d) none of the previous assertions is true. 16. (a) q(x) is positive definite if all the principal minors of A are nonnegative; (b) q(x) is negative definite if det(a 1 ) < 0; (c) indefinite of sign if det(a 2 ) < 0; (d) none of the previous assertions is true. Maxima and minima Determine the nature of the stationary points of the following functions: 1. f(x, y) = x 2 x y + 2xy; 2. f(x, y) = x(x y 1); 3. f(x, y) = x 2 + y 2 (x + 1); 4. f(x, y) = x(x + y + 1); 5. f(x, y) = x 2 2xy + y 3 ; 6. f(x, y) = x 2 y(1 2x);

40 2.6. Exercises f(x, y) = x 3 xy + x + y; 8. f(x, y) = 1 x + 1 y + xy. Answer the following multiple choice questions concerning a C 2 real function f : (R n )D R of n real variables and a point x 0 D: 9. (a) a point of strict relative maximum if the gradient at x 0 is D 1 f(x 0 ) = 0; (b) a saddle point of f if the hessian matrix D 2 f(x 0 ) is indefinite of sign; (c) a point of strict relative minimum if the gradient at x 0 is D 1 f(x 0 ) = 0 and the hessian matrix D 2 f(x 0 ) is negative definite; (d) none of the previous assertions is true. 10. (a) if x 0 is an interior point of strict relative maximum then the gradient at x 0 is D 1 f(x 0 ) = 0; (b) (c) (d) if the hessian matrix D 2 f(x 0 ) is positive definite then x 0 is a point of strict relative maximum; if the hessian matrix D 2 f(x 0 ) is positive definite then x 0 is a point of strict relative minimum; none of the previous assertions is true. 11. (a) a point of strict relative minimum if the gradient at x 0 is D 1 f(x 0 ) = 0 and the hessian matrix D 2 f(x 0 ) is positive definite; (b) a saddle point of f if the hessian matrix D 2 f(x 0 ) is indefinite of sign; (c) a point of strict relative maximum if the gradient at x 0 is D 1 f(x 0 ) = 0 and the hessian matrix D 2 f(x 0 ) is positive definite; (d) none of the previous assertions is true. 12. (a) a point of strict relative minimum if the gradient at x 0 is D 1 f(x 0 ) = 0 and the hessian matrix D 2 f(x 0 ) is negative definite; (b) a saddle point of f if the gradient at x 0 is D 1 f(x 0 ) = 0; (c) (d) a saddle point of f if the hessian matrix D 2 f(x 0 ) is negative definite; none of the previous assertions is true. 13. (a) a point of strict relative minimum if the gradient at x 0 is D 1 f(x 0 ) = 0;

41 38 Chapter 2. Real functions of n real variables (b) a saddle point of f if the hessian matrix D 2 f(x 0 ) is positive definite; (c) a point of strict relative maximum if the gradient at x 0 is D 1 f(x 0 ) = 0 and the hessian matrix D 2 f(x 0 ) is positive definite; (d) none of the previous assertions is true.


More information

Practice with Proofs

Practice with Proofs Practice with Proofs October 6, 2014 Recall the following Definition 0.1. A function f is increasing if for every x, y in the domain of f, x < y = f(x) < f(y) 1. Prove that h(x) = x 3 is increasing, using

More information

Introduction to Algebraic Geometry. Bézout s Theorem and Inflection Points

Introduction to Algebraic Geometry. Bézout s Theorem and Inflection Points Introduction to Algebraic Geometry Bézout s Theorem and Inflection Points 1. The resultant. Let K be a field. Then the polynomial ring K[x] is a unique factorisation domain (UFD). Another example of a

More information

2013 MBA Jump Start Program

2013 MBA Jump Start Program 2013 MBA Jump Start Program Module 2: Mathematics Thomas Gilbert Mathematics Module Algebra Review Calculus Permutations and Combinations [Online Appendix: Basic Mathematical Concepts] 2 1 Equation of

More information

Linear Programming Notes V Problem Transformations

Linear Programming Notes V Problem Transformations Linear Programming Notes V Problem Transformations 1 Introduction Any linear programming problem can be rewritten in either of two standard forms. In the first form, the objective is to maximize, the material

More information

Mathematical Methods of Engineering Analysis

Mathematical Methods of Engineering Analysis Mathematical Methods of Engineering Analysis Erhan Çinlar Robert J. Vanderbei February 2, 2000 Contents Sets and Functions 1 1 Sets................................... 1 Subsets.............................

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8 Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e

More information

1 Sets and Set Notation.

1 Sets and Set Notation. LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

More information

Class Meeting # 1: Introduction to PDEs

Class Meeting # 1: Introduction to PDEs MATH 18.152 COURSE NOTES - CLASS MEETING # 1 18.152 Introduction to PDEs, Fall 2011 Professor: Jared Speck Class Meeting # 1: Introduction to PDEs 1. What is a PDE? We will be studying functions u = u(x

More information

Let H and J be as in the above lemma. The result of the lemma shows that the integral

Let H and J be as in the above lemma. The result of the lemma shows that the integral Let and be as in the above lemma. The result of the lemma shows that the integral ( f(x, y)dy) dx is well defined; we denote it by f(x, y)dydx. By symmetry, also the integral ( f(x, y)dx) dy is well defined;

More information

Nonlinear Programming Methods.S2 Quadratic Programming

Nonlinear Programming Methods.S2 Quadratic Programming Nonlinear Programming Methods.S2 Quadratic Programming Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard A linearly constrained optimization problem with a quadratic objective

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

(Quasi-)Newton methods

(Quasi-)Newton methods (Quasi-)Newton methods 1 Introduction 1.1 Newton method Newton method is a method to find the zeros of a differentiable non-linear function g, x such that g(x) = 0, where g : R n R n. Given a starting

More information

Notes on metric spaces

Notes on metric spaces Notes on metric spaces 1 Introduction The purpose of these notes is to quickly review some of the basic concepts from Real Analysis, Metric Spaces and some related results that will be used in this course.

More information

Big Data - Lecture 1 Optimization reminders

Big Data - Lecture 1 Optimization reminders Big Data - Lecture 1 Optimization reminders S. Gadat Toulouse, Octobre 2014 Big Data - Lecture 1 Optimization reminders S. Gadat Toulouse, Octobre 2014 Schedule Introduction Major issues Examples Mathematics

More information

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

More information

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear

More information

constraint. Let us penalize ourselves for making the constraint too big. We end up with a

constraint. Let us penalize ourselves for making the constraint too big. We end up with a Chapter 4 Constrained Optimization 4.1 Equality Constraints (Lagrangians) Suppose we have a problem: Maximize 5, (x 1, 2) 2, 2(x 2, 1) 2 subject to x 1 +4x 2 =3 If we ignore the constraint, we get the

More information

Understanding Basic Calculus

Understanding Basic Calculus Understanding Basic Calculus S.K. Chung Dedicated to all the people who have helped me in my life. i Preface This book is a revised and expanded version of the lecture notes for Basic Calculus and other

More information

DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II. Matrix Algorithms DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

More information

Solution to Homework 2

Solution to Homework 2 Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

More information

Geometric Transformations

Geometric Transformations Geometric Transformations Definitions Def: f is a mapping (function) of a set A into a set B if for every element a of A there exists a unique element b of B that is paired with a; this pairing is denoted

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

Mathematics for Computer Science/Software Engineering. Notes for the course MSM1F3 Dr. R. A. Wilson

Mathematics for Computer Science/Software Engineering. Notes for the course MSM1F3 Dr. R. A. Wilson Mathematics for Computer Science/Software Engineering Notes for the course MSM1F3 Dr. R. A. Wilson October 1996 Chapter 1 Logic Lecture no. 1. We introduce the concept of a proposition, which is a statement

More information

The Characteristic Polynomial

The Characteristic Polynomial Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

More information

MATH PROBLEMS, WITH SOLUTIONS

MATH PROBLEMS, WITH SOLUTIONS MATH PROBLEMS, WITH SOLUTIONS OVIDIU MUNTEANU These are free online notes that I wrote to assist students that wish to test their math skills with some problems that go beyond the usual curriculum. These

More information

DIFFERENTIABILITY OF COMPLEX FUNCTIONS. Contents

DIFFERENTIABILITY OF COMPLEX FUNCTIONS. Contents DIFFERENTIABILITY OF COMPLEX FUNCTIONS Contents 1. Limit definition of a derivative 1 2. Holomorphic functions, the Cauchy-Riemann equations 3 3. Differentiability of real functions 5 4. A sufficient condition

More information

THE BANACH CONTRACTION PRINCIPLE. Contents

THE BANACH CONTRACTION PRINCIPLE. Contents THE BANACH CONTRACTION PRINCIPLE ALEX PONIECKI Abstract. This paper will study contractions of metric spaces. To do this, we will mainly use tools from topology. We will give some examples of contractions,

More information

Limits and Continuity

Limits and Continuity Math 20C Multivariable Calculus Lecture Limits and Continuity Slide Review of Limit. Side limits and squeeze theorem. Continuous functions of 2,3 variables. Review: Limits Slide 2 Definition Given a function

More information

Solutions for Review Problems

Solutions for Review Problems olutions for Review Problems 1. Let be the triangle with vertices A (,, ), B (4,, 1) and C (,, 1). (a) Find the cosine of the angle BAC at vertex A. (b) Find the area of the triangle ABC. (c) Find a vector

More information

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year. This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

More information

Date: April 12, 2001. Contents

Date: April 12, 2001. Contents 2 Lagrange Multipliers Date: April 12, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 12 2.3. Informative Lagrange Multipliers...........

More information

LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA. September 23, 2010 LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

More information

Critical points of once continuously differentiable functions are important because they are the only points that can be local maxima or minima.

Critical points of once continuously differentiable functions are important because they are the only points that can be local maxima or minima. Lecture 0: Convexity and Optimization We say that if f is a once continuously differentiable function on an interval I, and x is a point in the interior of I that x is a critical point of f if f (x) =

More information

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

More information

Math 120 Final Exam Practice Problems, Form: A

Math 120 Final Exam Practice Problems, Form: A Math 120 Final Exam Practice Problems, Form: A Name: While every attempt was made to be complete in the types of problems given below, we make no guarantees about the completeness of the problems. Specifically,

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Duality of linear conic problems

Duality of linear conic problems Duality of linear conic problems Alexander Shapiro and Arkadi Nemirovski Abstract It is well known that the optimal values of a linear programming problem and its dual are equal to each other if at least

More information

t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d).

t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d). 1. Line Search Methods Let f : R n R be given and suppose that x c is our current best estimate of a solution to P min x R nf(x). A standard method for improving the estimate x c is to choose a direction

More information

Quotient Rings and Field Extensions

Quotient Rings and Field Extensions Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Basic Concepts of Point Set Topology Notes for OU course Math 4853 Spring 2011

Basic Concepts of Point Set Topology Notes for OU course Math 4853 Spring 2011 Basic Concepts of Point Set Topology Notes for OU course Math 4853 Spring 2011 A. Miller 1. Introduction. The definitions of metric space and topological space were developed in the early 1900 s, largely

More information

Follow links for Class Use and other Permissions. For more information send email to: permissions@pupress.princeton.edu

Follow links for Class Use and other Permissions. For more information send email to: permissions@pupress.princeton.edu COPYRIGHT NOTICE: Ariel Rubinstein: Lecture Notes in Microeconomic Theory is published by Princeton University Press and copyrighted, c 2006, by Princeton University Press. All rights reserved. No part

More information

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions. Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

More information

Chapter 3. Cartesian Products and Relations. 3.1 Cartesian Products

Chapter 3. Cartesian Products and Relations. 3.1 Cartesian Products Chapter 3 Cartesian Products and Relations The material in this chapter is the first real encounter with abstraction. Relations are very general thing they are a special type of subset. After introducing

More information

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

MATH BOOK OF PROBLEMS SERIES. New from Pearson Custom Publishing!

MATH BOOK OF PROBLEMS SERIES. New from Pearson Custom Publishing! MATH BOOK OF PROBLEMS SERIES New from Pearson Custom Publishing! The Math Book of Problems Series is a database of math problems for the following courses: Pre-algebra Algebra Pre-calculus Calculus Statistics

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

Constrained optimization.

Constrained optimization. ams/econ 11b supplementary notes ucsc Constrained optimization. c 2010, Yonatan Katznelson 1. Constraints In many of the optimization problems that arise in economics, there are restrictions on the values

More information

it is easy to see that α = a

it is easy to see that α = a 21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UF. Therefore

More information

Mathematics Review for MS Finance Students

Mathematics Review for MS Finance Students Mathematics Review for MS Finance Students Anthony M. Marino Department of Finance and Business Economics Marshall School of Business Lecture 1: Introductory Material Sets The Real Number System Functions,

More information

The Ideal Class Group

The Ideal Class Group Chapter 5 The Ideal Class Group We will use Minkowski theory, which belongs to the general area of geometry of numbers, to gain insight into the ideal class group of a number field. We have already mentioned

More information

Linear Programming. March 14, 2014

Linear Programming. March 14, 2014 Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

More information

How To Prove The Dirichlet Unit Theorem

How To Prove The Dirichlet Unit Theorem Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

More information

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

More information

Cost Minimization and the Cost Function

Cost Minimization and the Cost Function Cost Minimization and the Cost Function Juan Manuel Puerta October 5, 2009 So far we focused on profit maximization, we could look at a different problem, that is the cost minimization problem. This is

More information

24. The Branch and Bound Method

24. The Branch and Bound Method 24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

More information

Math 53 Worksheet Solutions- Minmax and Lagrange

Math 53 Worksheet Solutions- Minmax and Lagrange Math 5 Worksheet Solutions- Minmax and Lagrange. Find the local maximum and minimum values as well as the saddle point(s) of the function f(x, y) = e y (y x ). Solution. First we calculate the partial

More information