Chapter 1 MATHEMATICAL BACKGROUND

This chapter discusses the mathematics that is necessary for the development of the theory of linear programming. We are particularly interested in the solutions of a system of linear equations. In these notes, matrices are denoted by capital letters $A, B, C, \ldots$. Vectors are denoted by boldface lowercase letters $a, b, c, \ldots, x, y, z$ and are column vectors. The transpose of a matrix $A$ is $A^T$. $0$ is the zero vector and $1$ is a column vector of all $1$'s.

1.1 Linear Equations and Linear Programming

In linear programming (LP), we usually want to optimize the objective variable $z$, which is a linear function of the decision variables $x_i$ ($i = 1, 2, \ldots, n$), under some linear constraints on the $x_i$. Symbolically, we have

$$\text{optimize } z = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$$

$$\text{subject to } \begin{cases} a_{11} x_1 + \cdots + a_{1n} x_n \le b_1, \\ \qquad\qquad \vdots \\ a_{m1} x_1 + \cdots + a_{mn} x_n \le b_m, \\ x_i \ge 0, \quad i = 1, 2, \ldots, n. \end{cases}$$

In matrix form, we have

$$\text{optimize } z = c^T x \quad \text{subject to} \quad \begin{cases} Ax \le b, \\ x \ge 0. \end{cases}$$

This is called the canonical form of an LP problem. Since inequality constraints are difficult to handle, in actual computation we usually consider LP problems of the form

$$\max z = c^T x \quad \text{subject to} \quad \begin{cases} Ax = b, \\ x \ge 0. \end{cases}$$

This is called the standard form of the LP problem. Notice that the constraints in the canonical form are transformed into a system of linear equations. The equation $z = c^T x$ is called the objective function, and the set of all $x \in \mathbb{R}^n$ that satisfy this equation is a hyperplane in $\mathbb{R}^n$.
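To make the two forms concrete, here is a minimal numerical sketch (the data $c$, $A$, $b$ are made up for illustration, and SciPy's `linprog` is assumed to be available): it solves a small canonical-form problem and shows how appending slack variables turns $Ax \le b$ into the standard-form system $[A, I]\,[x; s] = b$.

```python
# A minimal sketch (hypothetical data): solve a small canonical-form LP
#   max z = c^T x  subject to  Ax <= b, x >= 0,
# and show how adding slack variables yields the standard form.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])          # objective coefficients
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])        # constraint matrix
b = np.array([4.0, 6.0])

# linprog minimizes, so maximize c^T x by minimizing -c^T x.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print("optimal x:", res.x, " optimal z:", -res.fun)   # x = [2, 2], z = 10

# Standard form: append slack variables s >= 0 so that [A, I][x; s] = b.
A_std = np.hstack([A, np.eye(2)])
print("standard-form matrix [A, I]:\n", A_std)
```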

The rest of this chapter is devoted to the study of the solutions of systems of linear equations and the properties of hyperplanes.

1.2 Systems of Linear Equations and Their Solutions

Consider a system of $m$ simultaneous linear equations in $n$ unknown variables $x_1, \ldots, x_n$:

$$\begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1, \\ &\ \,\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m. \end{aligned}$$

In matrix form, we have

$$Ax = b. \tag{1.1}$$

To solve this system of equations is to find the values of $x_1, x_2, \ldots, x_n$ that satisfy the equations. The corresponding vector $x = [x_1, x_2, \ldots, x_n]^T$ will be called a solution to (1.1). To find the solutions of (1.1), we construct the augmented matrix $A_b$ of $A$, defined by

$$A_b = [A, b] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{bmatrix} = [a_1, a_2, \ldots, a_n, b],$$

where $a_i$ is the $i$-th column of $A$. For the solution of (1.1), there are two cases to consider.

(a) $\operatorname{rank}(A) < \operatorname{rank}(A_b)$. Then $b$ is linearly independent of the columns of $A$, i.e. $b$ cannot be written as a linear combination of the columns of $A$. Hence there are no $x_i$ such that $\sum_{i=1}^n x_i a_i = b$. In particular, the system $Ax = b$ has no solutions. In that case, we call the system inconsistent. Notice that here $b \ne 0$, so $\operatorname{rank}(b) = 1$, and $\operatorname{rank}(A_b) = \operatorname{rank}(A) + 1$.

(b) $\operatorname{rank}(A) = \operatorname{rank}(A_b) = k$. Then every column of $A_b$, in particular the vector $b$, can be expressed as a linear combination of $k$ linearly independent columns of $A$, i.e. there exist scalars $x_{i_1}, x_{i_2}, \ldots, x_{i_k}$ such that $\sum_{j=1}^k x_{i_j} a_{i_j} = b$. Thus at least one solution exists in this case. We remark that if $m = n = \operatorname{rank}(A)$, then the solution is also unique and $x = A^{-1} b$. However, in LP we usually have $\operatorname{rank}(A) = m < n$, and $Ax = b$ usually has more than one solution.
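The two cases above translate directly into a rank test. A small sketch, assuming NumPy is available (the matrices are made up for illustration):

```python
# Compare rank(A) with rank([A, b]) to decide whether Ax = b is consistent.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # rank(A) = 1 (second row = 2 * first)
b_bad  = np.array([[1.0], [1.0]])     # case (a): inconsistent
b_good = np.array([[1.0], [2.0]])     # case (b): consistent

for b in (b_bad, b_good):
    rA  = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.hstack([A, b]))
    print("rank(A) =", rA, " rank(A_b) =", rAb,
          " consistent:", rA == rAb)
```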

1.3 Properties of Solutions of Systems of Linear Equations

Let us suppose that $Ax = b$ has more than one solution, say $x_1$ and $x_2$ with $x_1 \ne x_2$. Then for any $\lambda \in [0,1]$,

$$A[\lambda x_1 + (1-\lambda) x_2] = \lambda A x_1 + (1-\lambda) A x_2 = \lambda b + (1-\lambda) b = b.$$

Thus $\lambda x_1 + (1-\lambda) x_2$ is also a solution. Hence we have proved that if $Ax = b$ has more than one solution, then it has infinitely many solutions.

To characterize these solutions, let us first suppose that $A$ is an $m$-by-$n$ matrix with $\operatorname{rank}(A) = m < n$. Then $Ax = b$ can be written as

$$B x_B + R x_\beta = b,$$

where

$$B = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mm} \end{bmatrix} = [a_1, a_2, \ldots, a_m], \qquad R = \begin{bmatrix} a_{1,m+1} & a_{1,m+2} & \cdots & a_{1n} \\ a_{2,m+1} & a_{2,m+2} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m,m+1} & a_{m,m+2} & \cdots & a_{mn} \end{bmatrix} = [a_{m+1}, a_{m+2}, \ldots, a_n],$$

$$x_B = [x_1, x_2, \ldots, x_m]^T \quad \text{and} \quad x_\beta = [x_{m+1}, x_{m+2}, \ldots, x_n]^T.$$

Since the ordering of the variables $x_i$ is irrelevant, we can assume that the variables have been reordered so that $B$ is nonsingular. Equivalently, we can consider the nonsingular $B$ as being formed by suitably picking $m$ columns of $A$. Then we have

$$x_B = B^{-1}(b - R x_\beta). \tag{1.2}$$

Hence given any $x_\beta$ we can solve uniquely for $x_B$ in terms of $x_\beta$. Thus to find a solution to (1.1), we can assign arbitrary values to the $(n-m)$ variables in $x_\beta$ and determine from (1.2) the values of the remaining $m$ variables in $x_B$. Then $x = \begin{bmatrix} x_B \\ x_\beta \end{bmatrix}$ is a solution to $Ax = b$.

1.4 Homogeneous Systems of Linear Equations

A homogeneous system of linear equations is one of the form $Ax = 0$. Its solutions form an $(n-m)$-dimensional subspace of $\mathbb{R}^n$. In fact, the solution space is just the kernel of the linear mapping represented by the matrix $A$. Since $A$ is a mapping from $\mathbb{R}^n$ to $\mathbb{R}^m$, we see that

$$\dim(\ker A) = \dim \mathbb{R}^n - \dim(\operatorname{range} A) = n - \operatorname{rank}(A) = n - m.$$

Note that if $x_1$ is a solution of $Ax = b$ and $x_0 \ne 0$ is a solution of $Ax = 0$, then $x_1 + x_0$ is a solution of $Ax = b$. In fact, $A(x_1 + x_0) = A x_1 + A x_0 = b + 0 = b$. Clearly $x_1 \ne x_1 + x_0$. Hence we have found two distinct solutions to $Ax = b$, and by the results of Section 1.3, $Ax = b$ has infinitely many solutions. Thus we have proved that if $n > m$ and $Ax = b$ is consistent, then $Ax = b$ has infinitely many solutions. We remark that if $b \ne 0$, the solutions of $Ax = b$ do not form a subspace of $\mathbb{R}^n$, but a subspace translated away from the origin. Such a set is called an affine space.
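Equation (1.2) and the kernel description above can be checked numerically. In the following sketch (illustrative data; NumPy and SciPy assumed), we pick arbitrary values for the free variables $x_\beta$, recover $x_B = B^{-1}(b - R x_\beta)$, and compute a basis of $\ker A$:

```python
# With B the first m columns of A and R the rest, any choice of the free
# variables x_beta determines x_B via (1.2).  scipy.linalg.null_space
# gives an orthonormal basis of the homogeneous solutions.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])    # m = 2, n = 4, rank(A) = 2
b = np.array([1.0, 1.0])
m = 2

B, R = A[:, :m], A[:, m:]
x_beta = np.array([1.0, -1.0])           # arbitrary values for the free variables
x_B = np.linalg.solve(B, b - R @ x_beta)
x = np.concatenate([x_B, x_beta])
print("x =", x, " residual:", A @ x - b)  # residual ~ 0

N = null_space(A)                         # (n - m)-dimensional kernel of A
print("kernel basis shape:", N.shape)     # (4, 2)
```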

1.5 Basic Solutions

Definition 1.1. Consider $Ax = b$ with $\operatorname{rank}(A) = m < n$. Let $B$ be a matrix formed by choosing $m$ columns out of the $n$ columns of $A$. If $B$ is a nonsingular matrix and if all the $(n-m)$ variables not associated with these columns are set equal to zero, then the solution to the resulting system of equations is called a basic solution. We call the $m$ variables $x_i$ associated with these $m$ columns the basic variables; the other variables are called the non-basic variables.

Using (1.2), we see that $x_B$ are the basic variables and $x_\beta$ are the non-basic variables. To get a basic solution, we set $x_\beta = 0$. Then $x_B = B^{-1} b$ and $x = \begin{bmatrix} B^{-1} b \\ 0 \end{bmatrix}$ is a basic solution.

Definition 1.2. A basic solution to $Ax = b$ is degenerate if one or more of the $m$ basic variables vanish.

Example 1.1. Consider the system $Ax = b$ where

$$A = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 2 & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

If we take $B = [a_1\ a_2] = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$, then $x_B = [-1, 1]^T$ and $x = [-1, 1, 0, 0]^T$ is the corresponding basic solution.

We can also take $B = [a_1\ a_4] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. Then $x_B = [1, 1]^T$ and $x = [1, 0, 0, 1]^T$ is the corresponding basic solution.

For $B = [a_2\ a_3] = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$ we have $x_B = [1/3, 1/3]^T$. Hence $x = [0, 1/3, 1/3, 0]^T$ is the corresponding basic solution.

Note that all these basic solutions are non-degenerate: had any one of the elements of $x_B$ been zero, the basic solution would have been degenerate.

Although the number of solutions to $Ax = b$ is in general infinite, we will see later that the optimal solutions of LP problems are basic solutions. Therefore, we would like to know how many basic solutions there are. This is equivalent to asking how many such nonsingular matrices $B$ can possibly be formed from the columns of $A$. It is obvious that the number of such matrices is bounded by

$$C^n_m = \frac{n!}{m!\,(n-m)!}.$$
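The basic solutions of Example 1.1 can be enumerated mechanically: try every choice of $m$ columns, and solve $B x_B = b$ whenever $B$ is nonsingular. A small sketch (NumPy assumed) that reproduces the solutions above:

```python
# Enumerate all basic solutions of Example 1.1 by trying every choice of
# m columns of A and solving B x_B = b when B is nonsingular.
import itertools
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])
b = np.array([1.0, 1.0])
m, n = A.shape

for cols in itertools.combinations(range(n), m):
    B = A[:, list(cols)]
    if abs(np.linalg.det(B)) < 1e-12:      # singular: no basic solution here
        continue
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)
    tag = "degenerate" if np.any(np.isclose(x[list(cols)], 0)) else "non-degenerate"
    print("basis", cols, "-> x =", x, "(", tag, ")")
```

All $C^4_2 = 6$ column pairs turn out to be nonsingular here, so the example has six basic solutions, all non-degenerate.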

Theorem 1.1. A necessary and sufficient condition for the existence and non-degeneracy of all possible basic solutions of $Ax = b$ is the linear independence of every set of $m$ columns of the augmented matrix $A_b = [A, b]$.

Proof. ($\Rightarrow$) Suppose that all basic solutions exist and are non-degenerate. Then for any set of $m$ columns of $A$, say $a_{i_1}, a_{i_2}, \ldots, a_{i_m}$ with $1 \le i_j \le n$, $1 \le j \le m$, there exist $x_{B_j}$, $j = 1, 2, \ldots, m$, such that

$$\sum_{j=1}^m x_{B_j} a_{i_j} = b.$$

Since the solution is non-degenerate, all $x_{B_j} \ne 0$. We first claim that $\{a_{i_j}\}_{j=1}^m$ is linearly independent. For if not, then there exist $c_j$, $j = 1, \ldots, m$, not all zero, such that

$$\sum_{j=1}^m c_j a_{i_j} = 0.$$

Let us suppose that $c_r \ne 0$. Then

$$b = \sum_{j=1}^m x_{B_j} a_{i_j} = \sum_{j=1}^m \left( x_{B_j} - \frac{x_{B_r}}{c_r}\, c_j \right) a_{i_j}.$$

Since the new coefficient of $a_{i_r}$ is $x_{B_r} - \frac{x_{B_r}}{c_r} c_r = 0$, we have found a degenerate basic solution to $Ax = b$, a contradiction. Thus $\{a_{i_j}\}_{j=1}^m$ is linearly independent.

Using the fact that $x_{B_j} \ne 0$ for all $j$, we can, by the replacement method, show that any $m-1$ vectors of $\{a_{i_j}\}_{j=1}^m$ together with the vector $b$ are also linearly independent. In fact, since $x_{B_r} \ne 0$, we have

$$a_{i_r} = \frac{1}{x_{B_r}}\, b - \sum_{j \ne r} \frac{x_{B_j}}{x_{B_r}}\, a_{i_j}.$$

Using a similar argument as above, we conclude that the set $\{a_{i_1}, \ldots, a_{i_{r-1}}, a_{i_{r+1}}, \ldots, a_{i_m}, b\}$ is also linearly independent.

($\Leftarrow$) Let $\{a_{i_1}, a_{i_2}, \ldots, a_{i_m}\}$ be an arbitrary set of $m$ columns of $A$. By assumption, it is linearly independent. Since the $a_{i_j}$ are $m$-vectors, the set $\{a_{i_1}, a_{i_2}, \ldots, a_{i_m}, b\}$ is linearly dependent. Hence there exist $x_{B_j}$, $j = 1, \ldots, m$, such that

$$b = \sum_{j=1}^m x_{B_j} a_{i_j}.$$

Thus a basic solution exists for such a choice of $m$ columns of $A$. Next we claim that $x_{B_j} \ne 0$ for all $j = 1, 2, \ldots, m$. Suppose that $x_{B_r} = 0$. Then

$$b - \sum_{j \ne r} x_{B_j} a_{i_j} = 0.$$

That means that the set $\{a_{i_1}, \ldots, a_{i_{r-1}}, a_{i_{r+1}}, \ldots, a_{i_m}, b\}$ of $m$ columns of $A_b$ is linearly dependent, a contradiction to our assumption. Thus $x_{B_j} \ne 0$ for all $j = 1, \ldots, m$. $\square$

By the same arguments used in the proof of the above theorem, we have the following corollary.

Corollary 1.1. Given a basic solution to $Ax = b$ with basic variables $x_{i_1}, \ldots, x_{i_m}$, a necessary and sufficient condition for the solution to be non-degenerate is the linear independence of $b$ with every $m-1$ columns of $\{a_{i_1}, \ldots, a_{i_m}\}$.
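The condition of Theorem 1.1 is easy to test numerically: check that every set of $m$ columns of $[A, b]$ has rank $m$. A sketch using the data of Example 1.1 (NumPy assumed):

```python
# Check the condition of Theorem 1.1: every set of m columns of the
# augmented matrix A_b = [A, b] should be linearly independent.
import itertools
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])
b = np.array([[1.0], [1.0]])
Ab = np.hstack([A, b])
m = A.shape[0]

ok = all(np.linalg.matrix_rank(Ab[:, list(cols)]) == m
         for cols in itertools.combinations(range(Ab.shape[1]), m))
print("every", m, "columns of [A, b] independent:", ok)  # True here
```

This is consistent with the enumeration above, where all basic solutions existed and were non-degenerate.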

1.6 Hyperplanes

Definition 1.3. A hyperplane in $\mathbb{R}^n$ is defined to be the set of all points in $\{x \mid c^T x = z\}$, where $c$ is a fixed nonzero vector in $\mathbb{R}^n$ and $z \in \mathbb{R}$.

In $\mathbb{R}^2$, a hyperplane is given by the equation $c_1 x_1 + c_2 x_2 = z$, which is a straight line. In $\mathbb{R}^3$, hyperplanes are just planes.

[Figure: the hyperplane $c^T x = z$ with its normal vector $c$.]

Given a hyperplane $c^T x = z$, it is clear that it passes through the origin if and only if $z = 0$. In that case, $c$ is orthogonal to every vector $x$ of the hyperplane, and the hyperplane forms a vector subspace of $\mathbb{R}^n$ of dimension $n-1$. If $z \ne 0$, then for any two vectors $x_1 \ne x_2$ in the hyperplane, we have

$$c^T (x_1 - x_2) = c^T x_1 - c^T x_2 = z - z = 0.$$

Thus $c$ is orthogonal to $x_1 - x_2$, which is a vector lying on the hyperplane, i.e. $c$ is again perpendicular to the hyperplane. We note that in this case the hyperplane is an affine space.

Definition 1.4. Given the hyperplane $c^T x = z$, the vector $c$ is called the normal of the hyperplane. Two hyperplanes are said to be parallel if their normals are parallel vectors.

Example 1.1. Let $x_0$ be arbitrarily chosen from a hyperplane $X_0 = \{x \mid c^T x = z_0\}$, and let $\lambda > 0$ be fixed. Then the point $x_1 = x_0 + \lambda c$ satisfies

$$c^T x_1 = c^T x_0 + \lambda \|c\|^2 = z_0 + \lambda \|c\|^2 > z_0.$$

Here $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^n$. Let $z_1 = z_0 + \lambda \|c\|^2$ and define the hyperplane $X_1 = \{x \mid c^T x = z_1\}$. We see that the hyperplanes $X_0$ and $X_1$ are parallel and $X_1$ lies in the direction of $c$ from $X_0$. The distance between the hyperplanes is $\|x_1 - x_0\| = \lambda \|c\|$.

[Figure: parallel hyperplanes $X_0$ and $X_1$, with $x_1 = x_0 + \lambda c$.]
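A quick numerical check of this example (illustrative data; NumPy assumed): the gap in right-hand sides is $z_1 - z_0 = \lambda \|c\|^2$, and dividing by $\|c\|$ recovers the distance $\lambda \|c\|$ between the two hyperplanes.

```python
# Verify that x1 = x0 + lambda*c lies on a parallel hyperplane whose
# distance from X0 is |z1 - z0| / ||c|| = lambda * ||c||.
import numpy as np

c = np.array([1.0, 2.0, 2.0])        # normal vector, ||c|| = 3
x0 = np.array([1.0, 1.0, 0.0])       # a point on X0
z0 = c @ x0
lam = 0.5
x1 = x0 + lam * c
z1 = c @ x1

print("z1 - z0 =", z1 - z0)                            # lambda * ||c||^2 = 4.5
print("distance =", abs(z1 - z0) / np.linalg.norm(c))  # lambda * ||c|| = 1.5
```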

1.7 Convex Sets

Definition 1.5. A set $C$ is said to be convex if for all $x_1$ and $x_2$ in $C$ and $\lambda \in [0,1]$, we have $\lambda x_1 + (1-\lambda) x_2 \in C$. Geometrically, this means that given any two points in $C$, all points on the line segment joining the two given points are also in $C$.

Definition 1.6. A point $x$ is an extreme point of a convex set $C$ if there exist no two distinct points $x_1$ and $x_2$ in $C$ such that $x = \lambda x_1 + (1-\lambda) x_2$ for some $\lambda \in (0,1)$. Geometrically, extreme points are just the corner points of $C$.

Example 1.2. $\mathbb{R}^n$ is convex. More generally, let $W$ be a subspace of $\mathbb{R}^n$ and $x_1, x_2 \in W$. Any linear combination of $x_1$ and $x_2$ is also in $W$, in particular the combination $\lambda x_1 + (1-\lambda) x_2 \in W$ for $\lambda \in [0,1]$. This shows that $W$ is convex.

Example 1.3. The $n$-dimensional open ball centered at $x_0$ with radius $r$ is defined as $B_r(x_0) = \{x \mid \|x - x_0\| < r\}$. The $n$-dimensional closed ball centered at $x_0$ with radius $r$ is defined as $\bar{B}_r(x_0) = \{x \mid \|x - x_0\| \le r\}$. Both the open ball and the closed ball are convex. We prove it for the open ball. Let $x_1, x_2 \in B_r(x_0)$ and $\lambda \in [0,1]$. Then

$$\|(\lambda x_1 + (1-\lambda) x_2) - x_0\| = \|\lambda (x_1 - x_0) + (1-\lambda)(x_2 - x_0)\| \le \lambda \|x_1 - x_0\| + (1-\lambda) \|x_2 - x_0\| < \lambda r + (1-\lambda) r = r.$$

Let $S$ be a subset of $\mathbb{R}^n$. A point $x$ is a boundary point of $S$ if every open ball centered at $x$ contains both a point in $S$ and a point in $\mathbb{R}^n \setminus S$. Note that a boundary point can either be in $S$ or not in $S$. The set of all boundary points of $S$, denoted by $\partial S$, is the boundary of $S$. A set $S$ is closed if $\partial S \subseteq S$. A set $S$ is open if its complement $\mathbb{R}^n \setminus S$ is closed. Note that a set that is not closed is not necessarily open, and a set that is not open is not necessarily closed; there are sets that are neither open nor closed. The closure of a set $S$ is the set $\bar{S} = S \cup \partial S$. The interior of a set $S$ is the set $S^\circ = S \setminus \partial S$. A set $S$ is closed if and only if $S = \bar{S}$. A set $S$ is open if and only if $S = S^\circ$.

Example 1.4. $\mathbb{R}^n$ is both open and closed. The empty set is both open and closed.

Example 1.5. The hyperplane $P = \{x \mid c^T x = z\}$ is closed in $\mathbb{R}^n$; in fact $\partial P = P$. Without loss of generality we may assume $\|c\| = 1$. We show here that $P \subseteq \partial P$; the reverse inclusion $\partial P \subseteq P$, which gives closedness, follows exactly as in the second half of Example 1.6 below. Let $x \in P$ and let $B_r(x)$ be an open ball centered at $x$ with radius $r$. Since $x \in P \cap B_r(x)$, it remains to show that $B_r(x)$ contains a point not in $P$. Let $y = x + \frac{r}{2} c$; then

$$c^T y = c^T x + \frac{r}{2}\, c^T c = z + \frac{r}{2} > z.$$

Hence $y \notin P$. But $\|y - x\| = \frac{r}{2} < r$, therefore $y \in B_r(x)$. Thus every $x \in P$ is a boundary point of $P$.

Example 1.6. The half-spaces $X_1 = \{x \mid c^T x \le z\}$ and $X_2 = \{x \mid c^T x \ge z\}$ are closed in $\mathbb{R}^n$. In fact we have $\partial X_1 = \partial X_2 = P$, where $P = \{x \mid c^T x = z\}$ is the corresponding hyperplane. We will show that $\partial X_1 = P$; the proof for $\partial X_2 = P$ is similar.

($P \subseteq \partial X_1$) Let $x \in P$. For any $r > 0$, let

$$y_1 = x + \frac{r}{2} \frac{c}{\|c\|}, \qquad y_2 = x - \frac{r}{2} \frac{c}{\|c\|}.$$

We see that $\|x - y_1\| = \frac{r}{2} = \|x - y_2\|$, so both $y_1, y_2 \in B_r(x)$. Moreover,

$$c^T y_1 = c^T x + \frac{r}{2} \frac{c^T c}{\|c\|} = z + \frac{r}{2} \|c\| > z,$$

and therefore $y_1 \notin X_1$. On the other hand,

$$c^T y_2 = c^T x - \frac{r}{2} \frac{c^T c}{\|c\|} = z - \frac{r}{2} \|c\| < z,$$

so $y_2 \in X_1$. This shows $x \in \partial X_1$.

($\partial X_1 \subseteq P$) Suppose $x \notin P$. If $c^T x = z_1 < z$, then let $r = \frac{z - z_1}{2\|c\|} > 0$. The open ball $B_r(x)$ lies entirely in $X_1$, so $B_r(x)$ contains no point outside of $X_1$; hence $x \notin \partial X_1$. If $c^T x = z_1 > z$, then let $r = \frac{z_1 - z}{2\|c\|} > 0$. The open ball $B_r(x)$ lies entirely outside of $X_1$, so $B_r(x)$ contains no point of $X_1$; hence $x \notin \partial X_1$. In either case $x \notin \partial X_1$.

Lemma 1.1.
(a) All hyperplanes are convex.
(b) The closed half-spaces $\{x \mid c^T x \le z\}$ and $\{x \mid c^T x \ge z\}$ are convex.
(c) The open half-spaces $\{x \mid c^T x < z\}$ and $\{x \mid c^T x > z\}$ are convex.
(d) Any intersection of convex sets is still convex.
(e) The set of all feasible solutions to a linear programming problem is a convex set.

Proof. (a) Let $X = \{x \mid c^T x = z\}$ be our hyperplane. For all $x_1, x_2 \in X$ and $\lambda \in [0,1]$, we have

$$c^T [\lambda x_1 + (1-\lambda) x_2] = \lambda c^T x_1 + (1-\lambda) c^T x_2 = \lambda z + (1-\lambda) z = z.$$

Thus $\lambda x_1 + (1-\lambda) x_2 \in X$. Hence $X$ is convex.

(b) and (c) can be proved similarly by replacing the equality signs in (a) by the corresponding inequality signs.

(d) Let $C = \bigcap_{\alpha \in I} C_\alpha$, where the $C_\alpha$ are convex for all $\alpha$ in the index set $I$. Then for all $x_1, x_2 \in C$, we have $x_1, x_2 \in C_\alpha$ for all $\alpha \in I$. Hence for all $\lambda \in [0,1]$, $\lambda x_1 + (1-\lambda) x_2 \in C_\alpha$ for all $\alpha \in I$. Thus $\lambda x_1 + (1-\lambda) x_2 \in C$, and $C$ is convex.

(e) For any LP problem, the constraints can be written as $a_i^T x \le b_i$ or $a_i^T x = b_i$, etc. The set of points that satisfy any one of these constraints is thus a half-space or a hyperplane. By (a), (b) and (c), these sets are convex. By (d), the intersection of all these sets, which is by definition the set of feasible solutions, is a convex set (illustrated numerically in the sketch below). $\square$
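As noted in the proof of (e), the convexity of the feasible region can be observed numerically: convex combinations of two feasible points stay feasible. A sketch with made-up constraint data (NumPy assumed):

```python
# Lemma 1.1(e): convex combinations of feasible points remain feasible.
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])

def feasible(x, tol=1e-9):
    """Check Ax <= b and x >= 0 up to a small tolerance."""
    return np.all(A @ x <= b + tol) and np.all(x >= -tol)

x1 = np.array([0.0, 4.0])
x2 = np.array([3.0, 0.0])
assert feasible(x1) and feasible(x2)

for lam in np.linspace(0.0, 1.0, 5):
    x = lam * x1 + (1 - lam) * x2
    print("lambda =", lam, " x =", x, " feasible:", feasible(x))
```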

Definition 1.7. Let $\{x_1, \ldots, x_k\}$ be a set of given points. Let $x = \sum_{i=1}^k M_i x_i$, where $M_i \ge 0$ for all $i$ and $\sum_{i=1}^k M_i = 1$. Then $x$ is called a convex combination of the points $x_1, x_2, \ldots, x_k$.

Example 1.7. Consider the triangle in the plane with vertices $x_1, x_2, x_3$. Then any point $x$ in the triangle is a convex combination of $\{x_1, x_2, x_3\}$. In fact, let $y$ be the point where the extension of the line segment $x_3 x$ meets the line segment $x_1 x_2$.

[Figure: triangle with vertices $x_1, x_2, x_3$, interior point $x$, and $y$ on the segment $x_1 x_2$.]

Writing $|uv|$ for the length of the segment joining $u$ and $v$, we have

$$y = p x_1 + q x_2, \quad \text{where} \quad p = \frac{|x_2 y|}{|x_2 x_1|} \quad \text{and} \quad q = \frac{|x_1 y|}{|x_2 x_1|}.$$

Since

$$x = l x_3 + k y, \quad \text{where} \quad k = \frac{|x x_3|}{|y x_3|} \quad \text{and} \quad l = \frac{|x y|}{|y x_3|},$$

we have

$$x = kp\, x_1 + kq\, x_2 + l\, x_3.$$

Clearly $kp, kq, l \ge 0$ and $kp + kq + l = k(p + q) + l = k + l = 1$.
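The weights $kp$, $kq$, $l$ of Example 1.7 are just the solution of a small linear system: the two coordinate equations $M_1 x_1 + M_2 x_2 + M_3 x_3 = x$ together with $M_1 + M_2 + M_3 = 1$. A sketch with made-up vertices (NumPy assumed):

```python
# Recover the convex-combination weights (M1, M2, M3) of a point x in the
# triangle x1 x2 x3 by solving a 3x3 linear system.
import numpy as np

x1 = np.array([0.0, 0.0])
x2 = np.array([4.0, 0.0])
x3 = np.array([0.0, 4.0])
x  = np.array([1.0, 2.0])                       # a point inside the triangle

V = np.vstack([np.column_stack([x1, x2, x3]),   # first two rows: coordinates
               np.ones(3)])                     # last row: weights sum to 1
M = np.linalg.solve(V, np.append(x, 1.0))
print("weights:", M, " nonnegative:", np.all(M >= 0))  # [0.25, 0.25, 0.5]
```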

Definition 1.8. The convex hull $\operatorname{conv}(A)$ of a set $A$ is defined to be the intersection of all convex sets that contain $A$.

Lemma 1.2.
(a) If $A \ne \emptyset$, then $\operatorname{conv}(A) \ne \emptyset$.
(b) If $A \subseteq B$, then $\operatorname{conv}(A) \subseteq \operatorname{conv}(B)$.
(c) $\operatorname{conv}(A)$ is the smallest convex set that contains $A$.
(d) If $A$ is convex, then $\operatorname{conv}(A) = A$.

Proof. Part (a) follows from the facts that $A \subseteq \mathbb{R}^n$ and $\mathbb{R}^n$ is convex, so the defining intersection is over a nonempty family of sets, each containing $A$. For (b), we note that any convex set that contains $B$ also contains $A$. Finally, (c) and (d) are trivial. $\square$

Theorem 1.2. The convex hull of a finite number of points $x_1, x_2, \ldots, x_k$ is the set of all convex combinations of $x_1, x_2, \ldots, x_k$.

Proof. Let

$$A = \left\{ x \;\middle|\; x = \sum_{i=1}^k M_i x_i, \; M_i \ge 0, \; \sum_{i=1}^k M_i = 1 \right\}$$

be the set of all convex combinations of $x_1, x_2, \ldots, x_k$, and let $B = \operatorname{conv}(\{x_1, x_2, \ldots, x_k\})$ be the convex hull of $\{x_1, x_2, \ldots, x_k\}$. We claim that $A = B$.

We first prove that $A$ is convex. For all $y_1, y_2 \in A$, we have

$$y_1 = \sum_{i=1}^k M_i x_i \quad \text{and} \quad y_2 = \sum_{i=1}^k N_i x_i,$$

where $\sum_{i=1}^k M_i = \sum_{i=1}^k N_i = 1$ and $M_i, N_i \ge 0$ for all $i$. Hence for all $\lambda \in [0,1]$,

$$y = \lambda y_1 + (1-\lambda) y_2 = \lambda \sum_{i=1}^k M_i x_i + (1-\lambda) \sum_{i=1}^k N_i x_i = \sum_{i=1}^k \big( \lambda M_i + (1-\lambda) N_i \big) x_i = \sum_{i=1}^k P_i x_i,$$

where $P_i = \lambda M_i + (1-\lambda) N_i \ge 0$ and

$$\sum_{i=1}^k P_i = \lambda \sum_{i=1}^k M_i + (1-\lambda) \sum_{i=1}^k N_i = \lambda \cdot 1 + (1-\lambda) \cdot 1 = 1.$$

Thus $y$ is a convex combination of $x_1, \ldots, x_k$, hence is in $A$. Therefore $A$ is convex.

Next we claim that $B \subseteq A$. For all $i = 1, \ldots, k$, since $x_i = \sum_{j=1}^k \delta_{ij} x_j$, where $\delta_{ij}$ is the Kronecker delta (defined by $\delta_{ii} = 1$ and $\delta_{ij} = 0$ for $i \ne j$), we see that $x_i \in A$. Thus $A$ is a convex set containing $\{x_1, \ldots, x_k\}$. By Lemma 1.2 (b) and (d), $B \subseteq A$.

Finally, we claim that $A \subseteq B$. We prove this by mathematical induction on $k$. If $k = 1$, then $B_1 = \operatorname{conv}(\{x_1\}) = \{x_1\}$, and every $x \in A_1$ satisfies $x = 1 \cdot x_1 = x_1$. Thus $A_1 = \{x_1\} \subseteq B_1$.

Now assume that the claim holds for $k-1$, i.e. $A_{k-1} \subseteq B_{k-1}$, where $B_{k-1} = \operatorname{conv}(\{x_1, x_2, \ldots, x_{k-1}\})$ and

$$A_{k-1} = \left\{ x \;\middle|\; x = \sum_{i=1}^{k-1} M_i x_i, \; M_i \ge 0, \; i = 1, 2, \ldots, k-1, \; \sum_{i=1}^{k-1} M_i = 1 \right\}.$$

We claim that $A_k \subseteq B_k = \operatorname{conv}(\{x_1, x_2, \ldots, x_k\})$. Let $x \in A_k$; then $x = \sum_{i=1}^k M_i x_i$, where $M_i \ge 0$ for all $i$ and $\sum_{i=1}^k M_i = 1$. If $x = x_k$, then $x \in B_k$. Hence we may assume that $x \ne x_k$, or equivalently, $M_k \ne 1$. Since $\sum_{i=1}^k M_i = 1$, we have

$$\sum_{i=1}^{k-1} \frac{M_i}{1 - M_k} = \frac{1 - M_k}{1 - M_k} = 1.$$

Consider the vector $y \in A_{k-1}$ given by

$$y = \sum_{i=1}^{k-1} \frac{M_i}{1 - M_k}\, x_i;$$

we have $\frac{M_i}{1 - M_k} \ge 0$ for all $i = 1, 2, \ldots, k-1$ and $\sum_{i=1}^{k-1} \frac{M_i}{1 - M_k} = 1$. You can think of $y$ as the projection of $x$ onto the convex set $A_{k-1}$.

[Figure: the point $x$, its "projection" $y$, and the set $A_{k-1}$.]

Thus by the induction hypothesis, $y \in B_{k-1}$. Since $\{x_1, x_2, \ldots, x_{k-1}\} \subseteq \{x_1, x_2, \ldots, x_k\}$, we have $B_{k-1} \subseteq B_k$. Therefore $y \in B_k$. Since $B_k$ is convex by definition and $y, x_k \in B_k$, we see that

$$x = \sum_{i=1}^k M_i x_i = (1 - M_k)\, y + M_k x_k \in B_k.$$

Thus $A_k \subseteq B_k$ for all $k$. $\square$

Definition 1.9. Let $A = \{x_1, x_2, \ldots, x_n\}$ be a set of points. The convex hull of $A$ is called the convex polyhedron spanned by $A$. The convex hull of any set of $n+1$ points in $\mathbb{R}^n$ which do not lie on a hyperplane is called a simplex. Thus in $\mathbb{R}^2$, the triangle formed by any three non-collinear points is a simplex, while in $\mathbb{R}^3$, the tetrahedron formed by any four points that are not on a common plane is a simplex.

[Figure: a simplex in $\mathbb{R}^2$ (a triangle) and a simplex in $\mathbb{R}^3$ (a tetrahedron).]
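Theorem 1.2 also gives a practical membership test: $x$ lies in the convex polyhedron spanned by $x_1, \ldots, x_k$ exactly when the system $\sum_i M_i x_i = x$, $\sum_i M_i = 1$, $M_i \ge 0$ has a solution, which is a linear feasibility problem. A sketch using SciPy's `linprog` with a zero objective (illustrative points):

```python
# Convex hull membership as a linear feasibility problem.
import numpy as np
from scipy.optimize import linprog

pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # x1, x2, x3 as rows

def in_hull(x):
    k = pts.shape[0]
    A_eq = np.vstack([pts.T, np.ones(k)])   # coordinate rows plus sum-to-1 row
    b_eq = np.append(x, 1.0)
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.success                      # feasible iff x is in the hull

print(in_hull(np.array([1.0, 1.0])))   # True  (inside the triangle)
print(in_hull(np.array([5.0, 5.0])))   # False (outside)
```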

1.8 Supporting Hyperplanes

Theorem 1.3. Let $C$ be a closed convex set and $y$ a point not in $C$. Then there is a hyperplane $X = \{x \mid a^T x = z\}$ that contains $y$ and such that either $C \subseteq \{x \mid a^T x < z\}$ or $C \subseteq \{x \mid a^T x > z\}$.

Proof. Let

$$\delta = \inf_{x \in C} \|x - y\| > 0.$$

Then there is an $x_0$ on the boundary of $C$ such that $\|x_0 - y\| = \delta$. This follows because the continuous function $f(x) = \|x - y\|$ achieves its minimum over any closed and bounded set, and it clearly suffices to consider $x$ in the intersection of $C$ with the closed ball of radius $2\delta$ centered at $y$. Define $a = x_0 - y$ and $z = a^T y$, and consider the hyperplane $X = \{x \mid a^T x = z\}$.

[Figure: the set $C$, the nearest point $x_0$, the normal $a = x_0 - y$, and the hyperplane $X$ through $y$.]

Clearly $y \in X$. We claim that $C \subseteq \{x \mid a^T x > z\}$. Let $x \in C$. For any $\lambda \in (0,1)$, since $x_0 \in C$ and $C$ is convex,

$$x_0 + \lambda (x - x_0) = \lambda x + (1-\lambda) x_0 \in C.$$

Thus by the definition of $\delta$,

$$\|x_0 + \lambda (x - x_0) - y\|^2 \ge \delta^2 = \|x_0 - y\|^2,$$

i.e. $\|\lambda (x - x_0) + a\|^2 \ge \|a\|^2$. Expanding, we have

$$2 \lambda\, a^T (x - x_0) + \lambda^2 \|x - x_0\|^2 \ge 0.$$

Dividing by $\lambda$ and letting $\lambda \to 0^+$, we obtain $a^T (x - x_0) \ge 0$. Hence

$$a^T x \ge a^T x_0 = a^T (y + a) = a^T y + \|a\|^2 = a^T y + \delta^2 = z + \delta^2 > z.$$

Thus $C \subseteq \{x \mid a^T x > z = a^T y\}$. $\square$
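The construction in the proof is easy to work out when the nearest point $x_0$ can be computed in closed form, e.g. when $C$ is a closed ball. A sketch (illustrative data; NumPy assumed) that builds $a = x_0 - y$ and $z = a^T y$ and then checks $a^T x > z$ on sample points of $C$:

```python
# Theorem 1.3 for C a closed ball: the nearest point x0 to y is explicit.
import numpy as np

center, radius = np.array([0.0, 0.0]), 1.0
y = np.array([3.0, 0.0])                       # a point outside C

d = center - y
x0 = center - radius * d / np.linalg.norm(d)   # nearest point of C to y
a = x0 - y
z = a @ y

# Sample points on the boundary of C and verify a^T x > z.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))
pts = center + radius * pts / np.linalg.norm(pts, axis=1, keepdims=True)
print("min a^T x over samples:", (pts @ a).min(), " z =", z)  # -2.0 > -6.0
```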

Definition 1.10. Let $y$ be a boundary point of a convex set $C$. Then $X = \{x \mid a^T x = z\}$ is called a supporting hyperplane of $C$ at $y$ if $y \in X$ and either $C \subseteq \{x \mid a^T x \le z\}$ or $C \subseteq \{x \mid a^T x \ge z\}$.

[Figure: a convex set $C$ with a supporting hyperplane $X$ at the boundary point $y$.]

Theorem 1.4. If $y$ is a boundary point of a closed convex set $C$, then there is at least one supporting hyperplane at $y$.

Proof. Since $y$ is a boundary point of $C$, for each positive integer $k$ we may pick $y_k \in B_{1/k}(y)$ with $y_k \notin C$. Then $\{y_k\}$ is a sequence of vectors not in $C$ that converges to $y$. Let $\{a_k\}$ be the sequence of corresponding normal vectors constructed according to Theorem 1.3, normalized so that $\|a_k\| = 1$ and such that $C \subseteq \{x \mid a_k^T x > a_k^T y_k\}$. Since $\{a_k\}$ is a bounded sequence, it has a convergent subsequence $\{a_{k_j}\}$ with limit $a$. Let $z = a^T y$. We claim that $X = \{x \mid a^T x = z\}$ is a supporting hyperplane at $y$. Clearly $y \in X$. For any $x \in C$, we have

$$a^T x = \lim_{j \to \infty} a_{k_j}^T x \ge \lim_{j \to \infty} a_{k_j}^T y_{k_j} = a^T y = z.$$

Thus $C \subseteq \{x \mid a^T x \ge z\}$. $\square$

Note that in general a supporting hyperplane is not unique.

Definition 1.11. A set $C$ is said to be bounded from below if

$$\inf \{ x_j \mid x_j \text{ is the } j\text{-th component of some } x \in C \} > -\infty \quad \text{for all } j.$$

Thus

$$\mathbb{R}^n_+ = \{ x \in \mathbb{R}^n \mid x^T = [x_1, \ldots, x_n], \; x_j \ge 0, \; j = 1, \ldots, n \} \tag{1.3}$$

is a set bounded from below. However, the upper half-plane $\{(x_1, x_2) \mid x_2 \ge 0\}$ is not bounded from below. Clearly, any subset of $\mathbb{R}^n_+$ is also bounded from below.

Theorem 1.5. Let $C$ be a closed convex set which is bounded from below. Then every supporting hyperplane of $C$ contains an extreme point of $C$.

Proof. Let $x_0$ be a boundary point of $C$ and let $X = \{x \mid c^T x = z\}$ be a supporting hyperplane at $x_0$. Define $T = X \cap C$. We will show that $T$ contains an extreme point of $C$. Since $x_0 \in X$ and $x_0 \in C$, we have $x_0 \in T$, hence $T$ is nonempty.

We first claim that any extreme point of $T$ is also an extreme point of $C$, or equivalently, that if $t$ is not an extreme point of $C$, then $t$ cannot be an extreme point of $T$. If $t \notin T$, then we are done, so let us assume that $t \in T$; then $t \in C$. Since $X$ is a supporting hyperplane of $C$, we may assume (replacing $c$ by $-c$ and $z$ by $-z$ if necessary) that $C \subseteq \{x \mid c^T x \ge z\}$. Since $t$ is not an extreme point of $C$, there exist two distinct points $x_1, x_2 \in C$ such that $t = \lambda x_1 + (1-\lambda) x_2$ for some $\lambda \in (0,1)$. It then suffices to show that $x_1, x_2 \in T$. Since

$$c^T t = z = \lambda c^T x_1 + (1-\lambda) c^T x_2$$

and $C \subseteq \{x \mid c^T x \ge z\}$, we see that $c^T x_1 = z = c^T x_2$. Hence $x_1, x_2 \in T$, and $t$ is not an extreme point of $T$.

We next claim that there is an extreme point in $T$. We first find a point $t^1 = [t^1_1, t^1_2, \ldots, t^1_n]^T$ in $T$ such that

$$t^1_1 = \inf \{ t_1 \mid t = [t_1, \ldots, t_n]^T \in T \}.$$

Since $T \subseteq C$ and $C$ is bounded from below, the infimum $t^1_1$ is well defined. If $t^1$ is unique, then we prove below that it is our extreme point. If $t^1$ is not unique, we continue to find a point $t^2 = [t^2_1, t^2_2, \ldots, t^2_n]^T \in T$ such that

$$t^2_1 = \inf \{ t_1 \mid t = [t_1, t_2, \ldots, t_n]^T \in T \},$$

$$t^2_2 = \inf \{ t_2 \mid t = [t_1, t_2, \ldots, t_n]^T \in T \text{ with } t_1 = t^2_1 \}.$$

Since $C \subseteq \mathbb{R}^n$, this process must stop after at most $n$ steps. We claim that when the process stops, the point found is an extreme point. Let $t^j$ be that point. Then for all $k = 1, \ldots, j$,

$$t^j_k = \inf \{ t_k \mid t = [t_1, t_2, \ldots, t_n]^T \in T \text{ with } t_i = t^j_i \text{ for } i = 1, \ldots, k-1 \}.$$

Suppose that $t^j$ is not an extreme point of $T$. Then there exist two distinct points $y^1, y^2 \in T$ such that $t^j = \lambda y^1 + (1-\lambda) y^2$ for some $\lambda \in (0,1)$. Thus

$$t^j_i = \lambda y^1_i + (1-\lambda) y^2_i \quad \text{for all } i = 1, 2, \ldots, n.$$

For all $i = 1, \ldots, j$, since $t^j_i$ is the infimum, $t^j_i \le y^1_i$ and $t^j_i \le y^2_i$. It follows that $t^j_i = y^1_i = y^2_i$ for $i = 1, \ldots, j$, a contradiction to the uniqueness of $t^j$. $\square$
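The coordinate-by-coordinate infimum process in this proof is essentially a lexicographic minimization. For a finite set $T$ it can be carried out directly, as in the following sketch (made-up points; NumPy assumed); the surviving point is the lexicographically smallest element of $T$, and such a point cannot be written as a proper convex combination of other points of $T$:

```python
# The selection process in the proof of Theorem 1.5, on a finite set T:
# repeatedly minimize the next coordinate over the points still tied.
import numpy as np

T = np.array([[1.0, 2.0, 0.0],
              [1.0, 0.0, 3.0],
              [1.0, 0.0, 1.0],
              [2.0, -1.0, 0.0]])

candidates = T
for k in range(T.shape[1]):
    tk = candidates[:, k].min()                  # infimum of coordinate k
    candidates = candidates[np.isclose(candidates[:, k], tk)]
    if len(candidates) == 1:                     # unique: the process stops
        break

print("extreme point found:", candidates[0])    # [1, 0, 1]
```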
