MATHEMATICAL BACKGROUND


Chapter 1

MATHEMATICAL BACKGROUND

This chapter discusses the mathematics that is necessary for the development of the theory of linear programming. We are particularly interested in the solutions of systems of linear equations. In these notes, matrices are denoted by capital letters $A, B, C, \ldots$; vectors are denoted by boldface lower-case letters $\mathbf{a}, \mathbf{b}, \mathbf{c}, \ldots, \mathbf{x}, \mathbf{y}, \mathbf{z}$, and all vectors are column vectors. The transpose of a matrix $A$ is $A^T$; $\mathbf{0}$ is the zero vector and $\mathbf{1}$ is a column vector of all 1's.

1.1 Linear Equations and Linear Programming

In linear programming (LP), we usually want to optimize an objective variable $z$ that is a linear function of the decision variables $x_i$ ($i = 1, 2, \ldots, n$) under some linear constraints on the $x_i$. Symbolically, we have

optimize $z = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$
subject to
  $a_{11} x_1 + \cdots + a_{1n} x_n \le b_1$,
  $\vdots$
  $a_{m1} x_1 + \cdots + a_{mn} x_n \le b_m$,
where $x_i \ge 0$, $i = 1, 2, \ldots, n$. In matrix form, we have

optimize $z = \mathbf{c}^T \mathbf{x}$
subject to $A\mathbf{x} \le \mathbf{b}$, $\mathbf{x} \ge \mathbf{0}$.

This is called the canonical form of an LP problem. Since inequality constraints are difficult to handle, in actual computation we usually consider LP problems of the form

max $z = \mathbf{c}^T \mathbf{x}$
subject to $A\mathbf{x} = \mathbf{b}$, $\mathbf{x} \ge \mathbf{0}$.

This is called the standard form of the LP problem. Notice that the inequality constraints of the canonical form are transformed into a system of linear equations. The equation $z = \mathbf{c}^T \mathbf{x}$ is called the objective
function, and the set of all $\mathbf{x} \in \mathbb{R}^n$ that satisfy this equation is a hyperplane in $\mathbb{R}^n$. The rest of this chapter is devoted to the study of the solutions of systems of linear equations and the properties of hyperplanes.

1.2 Systems of Linear Equations and Their Solutions

Consider a system of $m$ simultaneous linear equations in $n$ unknown variables $x_1, \ldots, x_n$:

$a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1$,
$\vdots$
$a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m$.

In matrix form, we have

$A\mathbf{x} = \mathbf{b}$.   (1.1)

To solve this system of equations is to find values of $x_1, x_2, \ldots, x_n$ that satisfy the equation. The corresponding vector $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$ will be called a solution to (1.1). To find the solutions of (1.1), we construct the augmented matrix $A_b$ of $A$ defined by

$A_b = [A, \mathbf{b}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{bmatrix} = [\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n, \mathbf{b}],$

where $\mathbf{a}_i$ is the $i$th column of $A$. For the solution of (1.1), there are two cases to consider.

(a) $\operatorname{rank}(A) < \operatorname{rank}(A_b)$. Then $\mathbf{b}$ is linearly independent of the columns of $A$, i.e. $\mathbf{b}$ cannot be written as a linear combination of them. Hence there are no $x_i$ such that $\sum_{i=1}^n x_i \mathbf{a}_i = \mathbf{b}$. In particular, the system $A\mathbf{x} = \mathbf{b}$ has no solutions; in that case, we call the system inconsistent. Notice that here we have $\operatorname{rank}(A_b) = \operatorname{rank}(A) + 1$.

(b) $\operatorname{rank}(A) = \operatorname{rank}(A_b) = k$. Then every column of $A_b$, in particular the vector $\mathbf{b}$, can be expressed as a linear combination of $k$ linearly independent columns of $A$, i.e. there exist $x_{i_1}, x_{i_2}, \ldots, x_{i_k}$, not all zero, such that $\sum_{j=1}^k x_{i_j}\mathbf{a}_{i_j} = \mathbf{b}$. Thus at least one solution exists in this case.

We remark that if $m = n = \operatorname{rank}(A)$, then the solution is also unique and $\mathbf{x} = A^{-1}\mathbf{b}$. However, in LP we usually have $\operatorname{rank}(A) = m < n$, and $A\mathbf{x} = \mathbf{b}$ usually has more than one solution.

1.3 Properties of Solutions of Systems of Linear Equations

Let us suppose that $A\mathbf{x} = \mathbf{b}$ has more than one solution, say $\mathbf{x}_1$ and $\mathbf{x}_2$ with $\mathbf{x}_1 \ne \mathbf{x}_2$. Then for any $\lambda \in [0, 1]$,

$A[\lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2] = \lambda A\mathbf{x}_1 + (1-\lambda)A\mathbf{x}_2 = \lambda\mathbf{b} + (1-\lambda)\mathbf{b} = \mathbf{b}.$
Thus $\lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2$ is also a solution. Hence we have proved that if $A\mathbf{x} = \mathbf{b}$ has more than one solution, then it has infinitely many solutions.

To characterize these solutions, let us first suppose that $A$ is an $m$-by-$n$ matrix with $\operatorname{rank}(A) = m < n$. Then $A\mathbf{x} = \mathbf{b}$ can be written as

$B\mathbf{x}_B + R\mathbf{x}_\beta = \mathbf{b},$

where

$B = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mm} \end{bmatrix} = [\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_m], \qquad R = \begin{bmatrix} a_{1,m+1} & a_{1,m+2} & \cdots & a_{1n} \\ a_{2,m+1} & a_{2,m+2} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m,m+1} & a_{m,m+2} & \cdots & a_{mn} \end{bmatrix} = [\mathbf{a}_{m+1}, \mathbf{a}_{m+2}, \ldots, \mathbf{a}_n],$

$\mathbf{x}_B = [x_1, x_2, \ldots, x_m]^T$ and $\mathbf{x}_\beta = [x_{m+1}, x_{m+2}, \ldots, x_n]^T$.

Since the ordering of the variables $x_i$ is irrelevant, we can assume that the variables have been reordered so that $B$ is nonsingular. Equivalently, we can consider the nonsingular $B$ as being formed by suitably picking $m$ columns of $A$. Then we have

$\mathbf{x}_B = B^{-1}(\mathbf{b} - R\mathbf{x}_\beta).$   (1.2)

Hence given any $\mathbf{x}_\beta$ we can solve uniquely for $\mathbf{x}_B$ in terms of $\mathbf{x}_\beta$. Thus to find a solution to (1.1), we can assign arbitrary values to the $(n-m)$ variables in $\mathbf{x}_\beta$ and determine from (1.2) the values of the remaining $m$ variables in $\mathbf{x}_B$. Then $\mathbf{x} = \begin{bmatrix} \mathbf{x}_B \\ \mathbf{x}_\beta \end{bmatrix}$ is a solution to $A\mathbf{x} = \mathbf{b}$.

1.4 Homogeneous Systems of Linear Equations

A homogeneous system of linear equations is one of the form $A\mathbf{x} = \mathbf{0}$. Its solutions form an $(n-m)$-dimensional subspace of $\mathbb{R}^n$. In fact, the solution space is just the kernel of the linear mapping represented by the matrix $A$. Since $A$ maps $\mathbb{R}^n$ to $\mathbb{R}^m$, we see that

dimension of kernel of $A$ = dimension of $\mathbb{R}^n$ − dimension of range of $A$ = $n - \operatorname{rank}(A) = n - m$.

Note that if $\mathbf{x}_1$ is a solution of $A\mathbf{x} = \mathbf{b}$ and $\mathbf{x}_0 \ne \mathbf{0}$ is a solution of $A\mathbf{x} = \mathbf{0}$, then $\mathbf{x}_1 + \mathbf{x}_0$ is a solution of $A\mathbf{x} = \mathbf{b}$. In fact,

$A(\mathbf{x}_1 + \mathbf{x}_0) = A\mathbf{x}_1 + A\mathbf{x}_0 = \mathbf{b} + \mathbf{0} = \mathbf{b}.$

Clearly $\mathbf{x}_1 \ne \mathbf{x}_1 + \mathbf{x}_0$. Hence we have found two distinct solutions to $A\mathbf{x} = \mathbf{b}$, and by the results in Section 1.3, $A\mathbf{x} = \mathbf{b}$ has infinitely many solutions. Thus we have proved that if $n > m$ and $A\mathbf{x} = \mathbf{b}$ is consistent, then $A\mathbf{x} = \mathbf{b}$ has infinitely many solutions.
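The dimension count above can be checked numerically. The following is an illustrative sketch (the particular matrix $A$ and vector $\mathbf{b}$ below are arbitrary choices, not from the text) using NumPy:

```python
import numpy as np

# A small illustrative system with rank(A) = m = 2 < n = 4.
A = np.array([[1., 2., 1., 0.],
              [0., 1., 2., 1.]])
b = np.array([1., 1.])
m, n = A.shape

# dim ker A = n - rank(A) = n - m when A has full row rank.
nullity = n - np.linalg.matrix_rank(A)
print(nullity)  # 2

# A particular solution x1 of Ax = b (lstsq returns an exact solution
# here since the system is consistent), and a nonzero x0 in ker A.
x1 = np.linalg.lstsq(A, b, rcond=None)[0]
null_rows = np.linalg.svd(A)[2][m:]   # rows spanning the null space
x0 = null_rows[0]

# x1 + x0 also solves Ax = b, so the solution is not unique.
print(np.allclose(A @ (x1 + x0), b))  # True
```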
We remark that if $\mathbf{b} \ne \mathbf{0}$, the solutions of $A\mathbf{x} = \mathbf{b}$ do not form a subspace of $\mathbb{R}^n$, but rather a subspace translated away from the origin. Such a set is called an affine space.
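The parametrization (1.2) can also be illustrated numerically. A sketch with an arbitrary small system (the matrix and the chosen $\mathbf{x}_\beta$ are illustrations only):

```python
import numpy as np

# Illustrative system with rank(A) = m = 2 < n = 4.
A = np.array([[1., 2., 1., 0.],
              [0., 1., 2., 1.]])
b = np.array([1., 1.])
m, n = A.shape

B = A[:, :m]   # first m columns, nonsingular for this A
R = A[:, m:]   # remaining n - m columns

# Assign arbitrary values to x_beta, then (1.2): x_B = B^{-1}(b - R x_beta).
x_beta = np.array([3., -2.])
x_B = np.linalg.solve(B, b - R @ x_beta)

# Assemble x = [x_B; x_beta] and verify it solves Ax = b.
x = np.concatenate([x_B, x_beta])
print(np.allclose(A @ x, b))  # True
```

Any other choice of $\mathbf{x}_\beta$ yields another solution in the same way, which matches the discussion above: the free variables parametrize an affine family of solutions.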
1.5 Basic Solutions

Definition 1.1. Consider $A\mathbf{x} = \mathbf{b}$ with $\operatorname{rank}(A) = m < n$. Let $B$ be a matrix formed by choosing $m$ columns out of the $n$ columns of $A$. If $B$ is a nonsingular matrix and if all the $(n-m)$ variables not associated with these columns are set equal to zero, then the solution to the resulting system of equations is called a basic solution. We call the $m$ variables $x_i$ associated with these $m$ columns the basic variables; the other variables are called the nonbasic variables.

Using (1.2), we see that $\mathbf{x}_B$ are the basic variables and $\mathbf{x}_\beta$ are the nonbasic variables. To get a basic solution, we set $\mathbf{x}_\beta = \mathbf{0}$. Then $\mathbf{x}_B = B^{-1}\mathbf{b}$ and

$\mathbf{x} = \begin{bmatrix} B^{-1}\mathbf{b} \\ \mathbf{0} \end{bmatrix}$

is a basic solution.

Definition 1.2. A basic solution to $A\mathbf{x} = \mathbf{b}$ is degenerate if one or more of the $m$ basic variables vanish. Thus if any one of the entries of $\mathbf{x}_B$ is zero, the basic solution is degenerate.

Example 1.1. Consider the system $A\mathbf{x} = \mathbf{b}$, where

$A = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 2 & 1 \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$

If we take $B = [\mathbf{a}_1\ \mathbf{a}_2] = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$, then $\mathbf{x}_B = [-1, 1]^T$ and $\mathbf{x} = [-1, 1, 0, 0]^T$ is the corresponding basic solution. We can also take $B = [\mathbf{a}_1\ \mathbf{a}_4] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. Then $\mathbf{x}_B = [1, 1]^T$ and $\mathbf{x} = [1, 0, 0, 1]^T$ is the corresponding basic solution. For $B = [\mathbf{a}_2\ \mathbf{a}_3] = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$ we have $\mathbf{x}_B = [1/3, 1/3]^T$; hence $\mathbf{x} = [0, 1/3, 1/3, 0]^T$ is the corresponding basic solution. Note that all these basic solutions are nondegenerate.

Although the number of solutions to $A\mathbf{x} = \mathbf{b}$ is in general infinite, we will see later that the optimal solutions of LP problems are basic solutions. Therefore, we would like to know how many basic solutions there are. This is equivalent to asking how many such nonsingular matrices $B$ can possibly be formed from the columns of $A$. It is obvious that the number of such matrices is bounded by

$C^n_m = \frac{n!}{m!\,(n-m)!}.$

Theorem 1.1.
A necessary and sufficient condition for the existence and nondegeneracy of all possible basic solutions of $A\mathbf{x} = \mathbf{b}$ is the linear independence of every set of $m$ columns of the augmented matrix $A_b = [A, \mathbf{b}]$.

Proof. ($\Rightarrow$) Suppose that all basic solutions exist and none is degenerate. Then for any set of $m$ columns of $A$, say $\mathbf{a}_{i_1}, \mathbf{a}_{i_2}, \ldots, \mathbf{a}_{i_m}$, $1 \le i_j \le n$, $1 \le j \le m$, there exist $x_{B_j}$, $j = 1, 2, \ldots, m$, such that

$\sum_{j=1}^m x_{B_j}\mathbf{a}_{i_j} = \mathbf{b}.$

Since the solution is nondegenerate, all $x_{B_j} \ne 0$. We first claim that $\{\mathbf{a}_{i_j}\}_{j=1}^m$ is linearly independent. For if not, then there exist $c_j$, $j = 1, \ldots, m$, not all zero, such that

$\sum_{j=1}^m c_j\mathbf{a}_{i_j} = \mathbf{0}.$
Let us suppose that $c_r \ne 0$. Then

$\mathbf{b} = \sum_{j=1}^m x_{B_j}\mathbf{a}_{i_j} = \sum_{j \ne r}\left(x_{B_j} - \frac{x_{B_r}}{c_r}\,c_j\right)\mathbf{a}_{i_j}.$

Since the coefficient of $\mathbf{a}_{i_r}$ is now $x_{B_r} - \frac{x_{B_r}}{c_r}\,c_r = 0$, we have found a degenerate basic solution to $A\mathbf{x} = \mathbf{b}$, a contradiction. Thus $\{\mathbf{a}_{i_j}\}_{j=1}^m$ is linearly independent.

Using the fact that $x_{B_j} \ne 0$ for all $j$, we can, by the replacement method, show that any $m-1$ vectors of $\{\mathbf{a}_{i_j}\}_{j=1}^m$ together with the vector $\mathbf{b}$ are also linearly independent. In fact, for any $x_{B_r} \ne 0$, we have

$\mathbf{a}_{i_r} = \frac{1}{x_{B_r}}\mathbf{b} - \sum_{j \ne r}\frac{x_{B_j}}{x_{B_r}}\mathbf{a}_{i_j}.$

Using a similar argument as above, we conclude that the set $\{\mathbf{a}_{i_1}, \ldots, \mathbf{a}_{i_{r-1}}, \mathbf{a}_{i_{r+1}}, \ldots, \mathbf{a}_{i_m}, \mathbf{b}\}$ is also linearly independent.

($\Leftarrow$) Let $\{\mathbf{a}_{i_1}, \mathbf{a}_{i_2}, \ldots, \mathbf{a}_{i_m}\}$ be an arbitrary set of $m$ columns of $A$. By assumption, it is linearly independent. Since the $\mathbf{a}_{i_j}$ are $m$-vectors, the set $\{\mathbf{a}_{i_1}, \ldots, \mathbf{a}_{i_m}, \mathbf{b}\}$ is linearly dependent. Hence there exist $x_{B_j}$, $j = 1, \ldots, m$, such that $\mathbf{b} = \sum_{j=1}^m x_{B_j}\mathbf{a}_{i_j}$. Thus a basic solution exists for this choice of $m$ columns of $A$. Next we claim that $x_{B_j} \ne 0$ for all $j = 1, 2, \ldots, m$. Suppose that $x_{B_r} = 0$. Then

$\mathbf{b} - \sum_{j \ne r} x_{B_j}\mathbf{a}_{i_j} = \mathbf{0}.$

That means the set $\{\mathbf{a}_{i_1}, \ldots, \mathbf{a}_{i_{r-1}}, \mathbf{a}_{i_{r+1}}, \ldots, \mathbf{a}_{i_m}, \mathbf{b}\}$ is linearly dependent, a contradiction to our assumption. Thus $x_{B_j} \ne 0$ for all $j = 1, \ldots, m$.

By the same arguments used in the proof of the above theorem, we have the following corollary.

Corollary 1.1. Given a basic solution to $A\mathbf{x} = \mathbf{b}$ with basic variables $x_{i_1}, \ldots, x_{i_m}$, a necessary and sufficient condition for the solution to be nondegenerate is the linear independence of $\mathbf{b}$ with every $m-1$ columns of $\{\mathbf{a}_{i_1}, \ldots, \mathbf{a}_{i_m}\}$.

1.6 Hyperplanes

Definition 1.3. A hyperplane in $\mathbb{R}^n$ is defined to be the set of all points in $\{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} = z\}$, where $\mathbf{c}$ is a fixed nonzero vector in $\mathbb{R}^n$ and $z \in \mathbb{R}$.
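Definition 1.3 can be illustrated numerically by constructing points on a hyperplane and checking membership. A sketch (the normal $\mathbf{c}$ and level $z$ below are arbitrary illustrative values):

```python
import numpy as np

c = np.array([1., -2., 3.])   # arbitrary nonzero normal vector
z = 5.0

# Produce a point on {x : c^T x = z} by fixing the first two
# coordinates and solving for the third (possible since c[2] != 0).
def point_on_hyperplane(u, v):
    x3 = (z - c[0] * u - c[1] * v) / c[2]
    return np.array([u, v, x3])

x1 = point_on_hyperplane(0.0, 1.0)
x2 = point_on_hyperplane(4.0, -2.0)

print(np.isclose(c @ x1, z))           # True: x1 lies on the hyperplane
print(np.isclose(c @ (x1 - x2), 0.0))  # True: c is orthogonal to x1 - x2
```

The second check anticipates the geometric fact that the normal $\mathbf{c}$ is perpendicular to any difference of two points of the hyperplane.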
In $\mathbb{R}^2$, a hyperplane is given by the equation $c_1 x_1 + c_2 x_2 = z$, which is a straight line. In $\mathbb{R}^3$, hyperplanes are just planes.

[Figure: the hyperplane $\mathbf{c}^T\mathbf{x} = z$ with its normal vector $\mathbf{c}$.]

Given a hyperplane $\mathbf{c}^T\mathbf{x} = z$, it is clear that it passes through the origin if and only if $z = 0$. In that case, $\mathbf{c}$ is orthogonal to every vector $\mathbf{x}$ of the hyperplane, and the hyperplane forms a vector subspace of $\mathbb{R}^n$ of dimension $n-1$. If $z \ne 0$, then for any two vectors $\mathbf{x}_1 \ne \mathbf{x}_2$ in the hyperplane, we have

$\mathbf{c}^T(\mathbf{x}_1 - \mathbf{x}_2) = \mathbf{c}^T\mathbf{x}_1 - \mathbf{c}^T\mathbf{x}_2 = z - z = 0.$

Thus $\mathbf{c}$ is orthogonal to $\mathbf{x}_1 - \mathbf{x}_2$, which is a vector lying on the hyperplane, i.e. $\mathbf{c}$ is also perpendicular to the hyperplane. We note that in this case the hyperplane is an affine space.

Definition 1.4. Given the hyperplane $\mathbf{c}^T\mathbf{x} = z$, the vector $\mathbf{c}$ is called the normal of the hyperplane. Two hyperplanes are said to be parallel if their normals are parallel vectors.

Example 1.1. Let $\mathbf{x}_0$ be arbitrarily chosen from a hyperplane $X_0 = \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} = z_0\}$. Let $\lambda > 0$ be fixed. Then the point $\mathbf{x}_1 = \mathbf{x}_0 + \lambda\mathbf{c}$ satisfies

$\mathbf{c}^T\mathbf{x}_1 = \mathbf{c}^T\mathbf{x}_0 + \lambda\|\mathbf{c}\|^2 = z_0 + \lambda\|\mathbf{c}\|^2 > z_0.$

Here $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^n$. Let $z_1 = z_0 + \lambda\|\mathbf{c}\|^2$ and define the hyperplane $X_1 = \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} = z_1\}$. We see that the hyperplanes $X_0$ and $X_1$ are parallel and $X_1$ lies in the direction of $\mathbf{c}$ from $X_0$. The distance between the hyperplanes is $\lambda\|\mathbf{c}\|$.

[Figure: parallel hyperplanes $X_0$ and $X_1$, with $\mathbf{x}_1 = \mathbf{x}_0 + \lambda\mathbf{c}$.]

1.7 Convex Sets

Definition 1.5. A set $C$ is said to be convex if for all $\mathbf{x}_1$ and $\mathbf{x}_2$ in $C$ and $\lambda \in [0, 1]$, we have $\lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2 \in C$. Geometrically, this means that given any two points in $C$, all points on the line segment joining the two given points are also in $C$.
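The defining condition of convexity is easy to probe numerically by sampling. A sketch, using the closed ball of Example 1.3 below as the test set (the centre and radius are arbitrary illustrative values; random sampling checks many convex combinations but of course does not constitute a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = np.array([1.0, -1.0, 2.0])   # arbitrary centre
r = 2.0                           # arbitrary radius

def in_closed_ball(x):
    return np.linalg.norm(x - x0) <= r + 1e-12

# Sample pairs of points of the ball and check that every convex
# combination lambda*x1 + (1 - lambda)*x2 stays inside the ball.
ok = True
for _ in range(1000):
    pts = []
    while len(pts) < 2:           # rejection-sample points in the ball
        cand = x0 + rng.uniform(-r, r, size=3)
        if in_closed_ball(cand):
            pts.append(cand)
    lam = rng.uniform(0.0, 1.0)
    ok = ok and in_closed_ball(lam * pts[0] + (1 - lam) * pts[1])
print(bool(ok))  # True
```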
Definition 1.6. A point $\mathbf{x}$ is an extreme point of a convex set $C$ if there exist no two distinct points $\mathbf{x}_1, \mathbf{x}_2 \in C$ such that $\mathbf{x} = \lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2$ for some $\lambda \in (0, 1)$. Geometrically, extreme points are just the corner points of $C$.

Example 1.2. $\mathbb{R}^n$ is convex. Let $W$ be a subspace of $\mathbb{R}^n$ and $\mathbf{x}_1, \mathbf{x}_2 \in W$. Then any linear combination of $\mathbf{x}_1$ and $\mathbf{x}_2$ is also in $W$, in particular the linear combination $\lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2 \in W$ for $\lambda \in [0, 1]$. This shows that $W$ is convex.

Example 1.3. The $n$-dimensional open ball centered at $\mathbf{x}_0$ with radius $r$ is defined as $B_r(\mathbf{x}_0) = \{\mathbf{x} \mid \|\mathbf{x} - \mathbf{x}_0\| < r\}$. The $n$-dimensional closed ball centered at $\mathbf{x}_0$ with radius $r$ is defined as $\bar{B}_r(\mathbf{x}_0) = \{\mathbf{x} \mid \|\mathbf{x} - \mathbf{x}_0\| \le r\}$. Both the open ball and the closed ball are convex. We prove it for the open ball. Let $\mathbf{x}_1, \mathbf{x}_2 \in B_r(\mathbf{x}_0)$ and $\lambda \in [0, 1]$. Then

$\|(\lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2) - \mathbf{x}_0\| = \|\lambda(\mathbf{x}_1 - \mathbf{x}_0) + (1-\lambda)(\mathbf{x}_2 - \mathbf{x}_0)\| \le \lambda\|\mathbf{x}_1 - \mathbf{x}_0\| + (1-\lambda)\|\mathbf{x}_2 - \mathbf{x}_0\| < \lambda r + (1-\lambda)r = r.$

Let $S$ be a subset of $\mathbb{R}^n$. A point $\mathbf{x}$ is a boundary point of $S$ if every open ball centered at $\mathbf{x}$ contains both a point in $S$ and a point in $\mathbb{R}^n \setminus S$. Note that a boundary point may or may not be in $S$. The set of all boundary points of $S$, denoted by $\partial S$, is the boundary of $S$. A set $S$ is closed if $\partial S \subseteq S$. A set $S$ is open if its complement $\mathbb{R}^n \setminus S$ is closed. Note that a set that is not closed is not necessarily open, and a set that is not open is not necessarily closed; there are sets that are neither open nor closed. The closure of a set $S$ is the set $\bar{S} = S \cup \partial S$. The interior of a set $S$ is the set $S^\circ = S \setminus \partial S$. A set $S$ is closed if and only if $S = \bar{S}$. A set $S$ is open if and only if $S = S^\circ$.

Example 1.4. $\mathbb{R}^n$ is both open and closed. The empty set is both open and closed.

Example 1.5. The hyperplane $P = \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} = z\}$ is closed in $\mathbb{R}^n$. In fact we will show that $\partial P = P$. Without loss of generality we may assume $\|\mathbf{c}\| = 1$. First let $\mathbf{x} \in P$ and let $B_r(\mathbf{x})$ be an open ball centered at $\mathbf{x}$ with radius $r$. Since $\mathbf{x} \in B_r(\mathbf{x})$, it remains to show that $B_r(\mathbf{x})$ contains a point not in $P$.
Let $\mathbf{y} = \mathbf{x} + \frac{r}{2}\mathbf{c}$. Then

$\mathbf{c}^T\mathbf{y} = \mathbf{c}^T\mathbf{x} + \frac{r}{2}\mathbf{c}^T\mathbf{c} = z + \frac{r}{2} > z.$

Hence $\mathbf{y} \notin P$. But $\|\mathbf{y} - \mathbf{x}\| = \frac{r}{2}$, therefore $\mathbf{y} \in B_r(\mathbf{x})$. Thus $\mathbf{x} \in \partial P$, i.e. $P \subseteq \partial P$. Conversely, if $\mathbf{x} \notin P$, then the open ball of radius $\frac{1}{2}|\mathbf{c}^T\mathbf{x} - z|$ centered at $\mathbf{x}$ contains no point of $P$, so $\mathbf{x} \notin \partial P$. Hence $\partial P = P$, and in particular $P$ is closed.

Example 1.6. The half spaces $X_1 = \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} \le z\}$ and $X_2 = \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} \ge z\}$ are closed in $\mathbb{R}^n$. In fact we have $\partial X_1 = \partial X_2 = $ the hyperplane $P = \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} = z\}$. We will show that $\partial X_1 = P$; the proof for $\partial X_2 = P$ is similar.

($P \subseteq \partial X_1$) Let $\mathbf{x} \in P$. For any $r > 0$, let

$\mathbf{y}_1 = \mathbf{x} + \frac{r}{2\|\mathbf{c}\|}\mathbf{c}, \qquad \mathbf{y}_2 = \mathbf{x} - \frac{r}{2\|\mathbf{c}\|}\mathbf{c}.$

We see that $\|\mathbf{x} - \mathbf{y}_1\| = \frac{r}{2} = \|\mathbf{x} - \mathbf{y}_2\|$, so both $\mathbf{y}_1, \mathbf{y}_2 \in B_r(\mathbf{x})$. Moreover

$\mathbf{c}^T\mathbf{y}_1 = \mathbf{c}^T\mathbf{x} + \frac{r}{2\|\mathbf{c}\|}\mathbf{c}^T\mathbf{c} = z + \frac{r\|\mathbf{c}\|}{2} > z$
and therefore $\mathbf{y}_1 \notin X_1$. On the other hand,

$\mathbf{c}^T\mathbf{y}_2 = \mathbf{c}^T\mathbf{x} - \frac{r}{2\|\mathbf{c}\|}\mathbf{c}^T\mathbf{c} = z - \frac{r\|\mathbf{c}\|}{2} < z,$

so $\mathbf{y}_2 \in X_1$. This shows $\mathbf{x} \in \partial X_1$.

($\partial X_1 \subseteq P$) Suppose $\mathbf{x} \notin P$. If $\mathbf{c}^T\mathbf{x} = z_1 < z$, then let $r = \frac{z - z_1}{2\|\mathbf{c}\|} > 0$. The open ball $B_r(\mathbf{x})$ lies entirely in $X_1$, so $B_r(\mathbf{x})$ contains no point outside of $X_1$; hence $\mathbf{x} \notin \partial X_1$. If $\mathbf{c}^T\mathbf{x} = z_1 > z$, then let $r = \frac{z_1 - z}{2\|\mathbf{c}\|} > 0$. The open ball $B_r(\mathbf{x})$ lies entirely outside of $X_1$, so $B_r(\mathbf{x})$ contains no point of $X_1$; hence $\mathbf{x} \notin \partial X_1$. In either case $\mathbf{x} \notin \partial X_1$.

Lemma 1.1. (a) All hyperplanes are convex. (b) The closed half spaces $\{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} \le z\}$ and $\{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} \ge z\}$ are convex. (c) The open half spaces $\{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} < z\}$ and $\{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} > z\}$ are convex. (d) Any intersection of convex sets is still convex. (e) The set of all feasible solutions to a linear programming problem is a convex set.

Proof. (a) Let $X = \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} = z\}$ be our hyperplane. For all $\mathbf{x}_1, \mathbf{x}_2 \in X$ and $\lambda \in [0, 1]$, we have

$\mathbf{c}^T[\lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2] = \lambda\mathbf{c}^T\mathbf{x}_1 + (1-\lambda)\mathbf{c}^T\mathbf{x}_2 = \lambda z + (1-\lambda)z = z.$

Thus $\lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2 \in X$. Hence $X$ is convex.

(b) and (c) can be proved similarly by replacing the equality signs in (a) by the corresponding inequality signs.

(d) Let $C = \bigcap_{\alpha \in I} C_\alpha$, where the $C_\alpha$ are convex for all $\alpha$ in the index set $I$. Then for all $\mathbf{x}_1, \mathbf{x}_2 \in C$, we have $\mathbf{x}_1, \mathbf{x}_2 \in C_\alpha$ for all $\alpha \in I$. Hence for all $\lambda \in [0, 1]$, $\lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2 \in C_\alpha$ for all $\alpha \in I$. Thus $\lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2 \in C$, and $C$ is convex.

(e) For any LP problem, the constraints can be written as $\mathbf{a}_i^T\mathbf{x} \le b_i$ or $\mathbf{a}_i^T\mathbf{x} = b_i$, etc. The set of points that satisfy any one of these constraints is thus a half space or a hyperplane. By (a), (b) and (c), these sets are convex. By (d), the intersection of all these sets, which is by definition the set of feasible solutions, is a convex set.

Definition 1.7. Let $\{\mathbf{x}_1, \ldots, \mathbf{x}_k\}$ be a set of given points. Let $\mathbf{x} = \sum_{i=1}^k M_i\mathbf{x}_i$, where $M_i \ge 0$ for all $i$ and $\sum_{i=1}^k M_i = 1$. Then $\mathbf{x}$ is called a convex combination of the points $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_k$.

Example 1.7. Consider the triangle on the plane with vertices $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$.
Then any point $\mathbf{x}$ in the triangle is a convex combination of $\{\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3\}$. In fact, let $\mathbf{y}$ be on the extension of the line segment
$\mathbf{x}\mathbf{x}_3$, chosen so that $\mathbf{y}$ lies on the line segment $\mathbf{x}_1\mathbf{x}_2$.

[Figure: triangle with vertices $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$, the point $\mathbf{x}$ inside it, and $\mathbf{y}$ on segment $\mathbf{x}_1\mathbf{x}_2$.]

Then $\mathbf{y} = p\mathbf{x}_1 + q\mathbf{x}_2$, where

$p = \frac{|\mathbf{x}_2\mathbf{y}|}{|\mathbf{x}_1\mathbf{x}_2|} \quad\text{and}\quad q = \frac{|\mathbf{x}_1\mathbf{y}|}{|\mathbf{x}_1\mathbf{x}_2|}.$

Since $\mathbf{x} = l\mathbf{x}_3 + k\mathbf{y}$, where

$l = \frac{|\mathbf{x}\mathbf{y}|}{|\mathbf{y}\mathbf{x}_3|} \quad\text{and}\quad k = \frac{|\mathbf{x}\mathbf{x}_3|}{|\mathbf{y}\mathbf{x}_3|},$

we have

$\mathbf{x} = kp\mathbf{x}_1 + kq\mathbf{x}_2 + l\mathbf{x}_3.$

Clearly $kp + kq + l = k(p + q) + l = k + l = 1$.

Definition 1.8. The convex hull $\hat{A}$ of a set $A$ is defined to be the intersection of all convex sets that contain $A$.

Lemma 1.2. (a) If $A \ne \emptyset$, then $\hat{A} \ne \emptyset$. (b) If $A \subseteq B$, then $\hat{A} \subseteq \hat{B}$. (c) $\hat{A}$ is the smallest convex set that contains $A$. (d) If $A$ is convex, then $\hat{A} = A$.

Proof. Part (a) follows from the facts that $\mathbb{R}^n$ is a convex set containing $A$ and that $A \subseteq \hat{A}$. For (b), we note that any convex set that contains $B$ also contains $A$. Finally, (c) and (d) are trivial.

Theorem 1.2. The convex hull of a finite number of points $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_k$ is the set of all convex combinations of $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_k$.

Proof. Let

$A = \left\{\mathbf{x} \,\middle|\, \mathbf{x} = \sum_{i=1}^k M_i\mathbf{x}_i,\; M_i \ge 0,\; \sum_{i=1}^k M_i = 1\right\}$

be the set of all convex combinations of $\mathbf{x}_1, \ldots, \mathbf{x}_k$, and let $B = \widehat{\{\mathbf{x}_1, \ldots, \mathbf{x}_k\}}$ be the convex hull of $\{\mathbf{x}_1, \ldots, \mathbf{x}_k\}$. We claim that $A = B$.

We first prove that $A$ is convex. For all $\mathbf{y}_1, \mathbf{y}_2 \in A$, we have

$\mathbf{y}_1 = \sum_{i=1}^k M_i\mathbf{x}_i \quad\text{and}\quad \mathbf{y}_2 = \sum_{i=1}^k N_i\mathbf{x}_i,$
where $\sum_{i=1}^k M_i = \sum_{i=1}^k N_i = 1$ and $M_i, N_i \ge 0$ for all $i$. Hence for all $\lambda \in [0, 1]$,

$\mathbf{y} = \lambda\mathbf{y}_1 + (1-\lambda)\mathbf{y}_2 = \lambda\sum_{i=1}^k M_i\mathbf{x}_i + (1-\lambda)\sum_{i=1}^k N_i\mathbf{x}_i = \sum_{i=1}^k \big(\lambda M_i + (1-\lambda)N_i\big)\mathbf{x}_i = \sum_{i=1}^k P_i\mathbf{x}_i,$

where $P_i = \lambda M_i + (1-\lambda)N_i \ge 0$ and

$\sum_{i=1}^k P_i = \lambda\sum_{i=1}^k M_i + (1-\lambda)\sum_{i=1}^k N_i = \lambda\cdot 1 + (1-\lambda)\cdot 1 = 1.$

Thus $\mathbf{y}$ is a convex combination of $\mathbf{x}_1, \ldots, \mathbf{x}_k$, hence is in $A$. Therefore $A$ is convex.

Next we claim that $B \subseteq A$. For all $i = 1, \ldots, k$, since $\mathbf{x}_i = \sum_{j=1}^k \delta_{ij}\mathbf{x}_j$, where $\delta_{ij}$ is the Kronecker delta(1), we see that $\mathbf{x}_i \in A$. Thus $A$ is a convex set containing $\{\mathbf{x}_1, \ldots, \mathbf{x}_k\}$. By Lemma 1.2 (b) and (d), $B \subseteq A$.

Finally, we claim that $A \subseteq B$. We prove this by mathematical induction on $k$. If $k = 1$, then $B_1 = \widehat{\{\mathbf{x}_1\}} = \{\mathbf{x}_1\}$. For all $\mathbf{x} \in A_1$, $\mathbf{x} = 1\cdot\mathbf{x}_1 = \mathbf{x}_1$. Thus $A_1 = \{\mathbf{x}_1\} \subseteq B_1$. Now assume that the claim holds for $k-1$, i.e. $A_{k-1} \subseteq B_{k-1}$, where

$B_{k-1} = \widehat{\{\mathbf{x}_1, \ldots, \mathbf{x}_{k-1}\}}$

and

$A_{k-1} = \left\{\mathbf{x} \,\middle|\, \mathbf{x} = \sum_{i=1}^{k-1} M_i\mathbf{x}_i,\; M_i \ge 0,\; i = 1, \ldots, k-1,\; \sum_{i=1}^{k-1} M_i = 1\right\}.$

We claim that $A_k \subseteq B_k = \widehat{\{\mathbf{x}_1, \ldots, \mathbf{x}_k\}}$. Let $\mathbf{x} \in A_k$; then $\mathbf{x} = \sum_{i=1}^k M_i\mathbf{x}_i$, where $M_i \ge 0$ for all $i$ and $\sum_{i=1}^k M_i = 1$. If $\mathbf{x} = \mathbf{x}_k$, then $\mathbf{x} \in B_k$. Hence we may assume that $\mathbf{x} \ne \mathbf{x}_k$, or equivalently, $M_k \ne 1$. Since $\sum_{i=1}^k M_i = 1$, we have

$\sum_{i=1}^{k-1} \frac{M_i}{1 - M_k} = \frac{1}{1 - M_k}\sum_{i=1}^{k-1} M_i = \frac{1 - M_k}{1 - M_k} = 1.$

(1) The Kronecker delta is defined by $\delta_{ii} = 1$ and $\delta_{ij} = 0$ for $i \ne j$.
Consider the vector $\mathbf{y} \in A_{k-1}$ given by

$\mathbf{y} = \sum_{i=1}^{k-1} \frac{M_i}{1 - M_k}\mathbf{x}_i;$

we have $\frac{M_i}{1 - M_k} \ge 0$ for all $i = 1, 2, \ldots, k-1$ and $\sum_{i=1}^{k-1} \frac{M_i}{1 - M_k} = 1$. You can think of $\mathbf{y}$ as the projection of $\mathbf{x}$ onto the convex set $A_{k-1}$:

[Figure: the point $\mathbf{x}$ and its projection $\mathbf{y}$ onto $A_{k-1}$.]

Thus by the induction hypothesis $\mathbf{y} \in B_{k-1}$. Since $\{\mathbf{x}_1, \ldots, \mathbf{x}_{k-1}\} \subseteq \{\mathbf{x}_1, \ldots, \mathbf{x}_k\}$, $B_{k-1} \subseteq B_k$. Therefore $\mathbf{y} \in B_k$. Since $B_k$ is convex by definition and $\mathbf{y}, \mathbf{x}_k \in B_k$, we see that

$\mathbf{x} = \sum_{i=1}^k M_i\mathbf{x}_i = (1 - M_k)\mathbf{y} + M_k\mathbf{x}_k \in B_k.$

Thus $A_k \subseteq B_k$ for all $k$.

Definition 1.9. Let $A = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ be a set of points. The convex hull of $A$ is called the convex polyhedron spanned by $A$. The convex hull of any set of $n+1$ points in $\mathbb{R}^n$ which do not lie on a hyperplane is called a simplex. Thus in $\mathbb{R}^2$, the triangle formed by any three noncollinear points is a simplex, while in $\mathbb{R}^3$, the tetrahedron formed by any four points that are not on a plane is a simplex.

[Figure: a simplex in $\mathbb{R}^2$ (a triangle) and a simplex in $\mathbb{R}^3$ (a tetrahedron).]

1.8 Supporting Hyperplanes

Theorem 1.3. Let $C$ be a closed convex set and $\mathbf{y}$ be a point not in $C$. Then there is a hyperplane $X = \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} = z\}$ that contains $\mathbf{y}$ and such that either $C \subseteq \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} < z\}$ or $C \subseteq \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} > z\}$.
Proof. Let

$\delta = \inf_{\mathbf{x} \in C}\|\mathbf{x} - \mathbf{y}\| > 0.$

Then there is an $\mathbf{x}_0$ on the boundary of $C$ such that $\|\mathbf{x}_0 - \mathbf{y}\| = \delta$. This follows because the continuous function $f(\mathbf{x}) = \|\mathbf{x} - \mathbf{y}\|$ achieves its minimum over any closed and bounded set, and it is clear that it suffices to consider $\mathbf{x}$ in the intersection of $C$ and the closed ball of radius $2\delta$ centered at $\mathbf{y}$. Define $\mathbf{a} = \mathbf{x}_0 - \mathbf{y}$ and $z = \mathbf{a}^T\mathbf{y}$. Consider the hyperplane $X = \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} = z\}$, which is illustrated by the following diagram.

[Figure: the set $C$, the hyperplane $X$ through $\mathbf{y}$, the nearest boundary point $\mathbf{x}_0$, and the normal $\mathbf{a}$.]

Clearly $\mathbf{y} \in X$. We claim that $C \subseteq \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} > z\}$. Let $\mathbf{x} \in C$. For any $\lambda \in (0, 1)$, since $\mathbf{x}_0 \in C$,

$\mathbf{x}_0 + \lambda(\mathbf{x} - \mathbf{x}_0) = \lambda\mathbf{x} + (1-\lambda)\mathbf{x}_0 \in C.$

Thus by the definition of $\delta$,

$\|\mathbf{x}_0 + \lambda(\mathbf{x} - \mathbf{x}_0) - \mathbf{y}\|^2 \ge \delta^2 = \|\mathbf{x}_0 - \mathbf{y}\|^2,$

i.e. $\|\lambda(\mathbf{x} - \mathbf{x}_0) + \mathbf{a}\|^2 \ge \|\mathbf{a}\|^2$. Expanding it, we have

$2\lambda\mathbf{a}^T(\mathbf{x} - \mathbf{x}_0) + \lambda^2\|\mathbf{x} - \mathbf{x}_0\|^2 \ge 0.$

Letting $\lambda \to 0^+$, we obtain $\mathbf{a}^T(\mathbf{x} - \mathbf{x}_0) \ge 0$. Hence

$\mathbf{a}^T\mathbf{x} \ge \mathbf{a}^T\mathbf{x}_0 = \mathbf{a}^T(\mathbf{y} + \mathbf{a}) = \mathbf{a}^T\mathbf{y} + \|\mathbf{a}\|^2 = \mathbf{a}^T\mathbf{y} + \delta^2 = z + \delta^2 > z.$

Thus $C \subseteq \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} > z = \mathbf{a}^T\mathbf{y}\}$.

Definition 1.10. Let $\mathbf{y}$ be a boundary point of a convex set $C$. Then $X = \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} = z\}$ is called a supporting hyperplane of $C$ at $\mathbf{y}$ if $\mathbf{y} \in X$ and either $C \subseteq \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} \ge z\}$ or $C \subseteq \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} \le z\}$.
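The construction in the proof of Theorem 1.3 can be sketched numerically when the nearest point $\mathbf{x}_0$ is easy to compute. Below, $C$ is taken to be the closed unit ball (an arbitrary illustrative choice, since there the nearest point to an outside $\mathbf{y}$ is just the radial projection), and the random sampling only spot-checks the claimed inclusion:

```python
import numpy as np

# C = closed unit ball centred at the origin; y is a point outside C.
y = np.array([3.0, 4.0])            # ||y|| = 5 > 1

# Nearest point of C to y: radial projection onto the unit sphere.
x0 = y / np.linalg.norm(y)

# As in the proof: a = x0 - y and z = a^T y define a hyperplane
# through y with C contained in {x : a^T x > z}.
a = x0 - y
z = a @ y

# Spot-check a^T x > z on random points of C.
rng = np.random.default_rng(2)
samples = rng.uniform(-1.0, 1.0, size=(1000, 2))
samples = samples[np.linalg.norm(samples, axis=1) <= 1.0]
print(bool(np.all(samples @ a > z)))  # True
```

The proof even guarantees the stronger bound $\mathbf{a}^T\mathbf{x} \ge z + \delta^2$ for all $\mathbf{x} \in C$, with $\delta = 4$ in this example.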
[Figure: a supporting hyperplane $X$ of the set $C$ at a boundary point $\mathbf{y}$.]

Theorem 1.4. If $\mathbf{y}$ is a boundary point of a closed convex set $C$, then there is at least one supporting hyperplane at $\mathbf{y}$.

Proof. Since $\mathbf{y}$ is a boundary point of $C$, for each positive integer $k$ we can pick $\mathbf{y}_k \in B_{1/k}(\mathbf{y})$ with $\mathbf{y}_k \notin C$. Then $\{\mathbf{y}_k\}$ is a sequence of vectors not in $C$ that converges to $\mathbf{y}$. Let $\{\mathbf{a}_k\}$ be the sequence of corresponding normal vectors constructed according to Theorem 1.3, normalized so that $\|\mathbf{a}_k\| = 1$ and such that $C \subseteq \{\mathbf{x} \mid \mathbf{a}_k^T\mathbf{x} > \mathbf{a}_k^T\mathbf{y}_k\}$. Since $\{\mathbf{a}_k\}$ is a bounded sequence, it has a convergent subsequence $\{\mathbf{a}_{k_j}\}$ with limit $\mathbf{a}$. Let $z = \mathbf{a}^T\mathbf{y}$. We claim that $X = \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} = z\}$ is a supporting hyperplane at $\mathbf{y}$. Clearly $\mathbf{y} \in X$. For any $\mathbf{x} \in C$, we have

$\mathbf{a}^T\mathbf{x} = \lim_{j\to\infty}\mathbf{a}_{k_j}^T\mathbf{x} \ge \lim_{j\to\infty}\mathbf{a}_{k_j}^T\mathbf{y}_{k_j} = \mathbf{a}^T\mathbf{y} = z.$

Thus $C \subseteq \{\mathbf{x} \mid \mathbf{a}^T\mathbf{x} \ge z\}$.

Note that in general a supporting hyperplane is not unique.

Definition 1.11. A set $C$ is said to be bounded from below if

$\inf\{x_j \mid x_j \text{ is the } j\text{th component of some } \mathbf{x} \in C\} > -\infty \quad\text{for all } j.$

Thus

$\mathbb{R}^n_+ = \{\mathbf{x} \in \mathbb{R}^n \mid \mathbf{x}^T = [x_1, \ldots, x_n],\; x_j \ge 0,\; j = 1, \ldots, n\} \qquad (1.3)$

is a set bounded from below. However, the upper half plane $\{(x_1, x_2) \mid x_2 \ge 0\}$ is not bounded from below. Clearly, any subset of $\mathbb{R}^n_+$ is also bounded from below.

Theorem 1.5. Let $C$ be a closed convex set which is bounded from below. Then every supporting hyperplane of $C$ contains an extreme point of $C$.

Proof. Let $\mathbf{x}_0$ be a boundary point of $C$ and let $X = \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} = z\}$ be a supporting hyperplane at $\mathbf{x}_0$. Define $T = X \cap C$. We will show that $T$ contains an extreme point of $C$. Since $\mathbf{x}_0 \in X$ and $\mathbf{x}_0 \in C$, $\mathbf{x}_0 \in T$ and hence $T$ is nonempty.

We first claim that any extreme point of $T$ is also an extreme point of $C$, or equivalently, if $\mathbf{t}$ is not an extreme point of $C$, then $\mathbf{t}$ cannot be an extreme point of $T$. If $\mathbf{t} \notin T$, then we are done, so let us assume that $\mathbf{t} \in T$; then $\mathbf{t} \in C$. Since $X$ is a supporting hyperplane, we may assume without loss of generality that $C \subseteq \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} \ge z\}$. Since $\mathbf{t}$ is not an extreme point of $C$, there exist two distinct points $\mathbf{x}_1, \mathbf{x}_2 \in C$ such that $\mathbf{t} = \lambda\mathbf{x}_1 + (1-\lambda)\mathbf{x}_2$ for some $\lambda \in (0, 1)$. It then suffices to show that $\mathbf{x}_1, \mathbf{x}_2 \in T$.
Since

$\mathbf{c}^T\mathbf{t} = z = \lambda\mathbf{c}^T\mathbf{x}_1 + (1-\lambda)\mathbf{c}^T\mathbf{x}_2$
and $C \subseteq \{\mathbf{x} \mid \mathbf{c}^T\mathbf{x} \ge z\}$, we see that $\mathbf{c}^T\mathbf{x}_1 = z = \mathbf{c}^T\mathbf{x}_2$. Hence $\mathbf{x}_1, \mathbf{x}_2 \in T$, and thus $\mathbf{t}$ is not an extreme point of $T$.

We next claim that there is an extreme point in $T$. We first find a $\mathbf{t}^1 = [t^1_1, t^1_2, \ldots, t^1_n]^T$ in $T$ such that

$t^1_1 = \inf\{t_1 \mid \mathbf{t} = [t_1, \ldots, t_n]^T \in T\}.$

Since $T \subseteq C$ and $C$ is bounded from below, the infimum $t^1_1$ is well defined. If $\mathbf{t}^1$ is unique, then we prove below that it is our extreme point. If $\mathbf{t}^1$ is not unique, we continue to find a $\mathbf{t}^2 = [t^2_1, t^2_2, \ldots, t^2_n]^T \in T$ such that

$t^2_1 = \inf\{t_1 \mid \mathbf{t} = [t_1, t_2, \ldots, t_n]^T \in T\},$
$t^2_2 = \inf\{t_2 \mid \mathbf{t} = [t_1, t_2, \ldots, t_n]^T \in T \text{ with } t_1 = t^2_1\}.$

Since $C \subseteq \mathbb{R}^n$, this process must stop after at most $n$ steps. We claim that when the process stops, the point found is an extreme point. Let $\mathbf{t}^j$ be that point. Then for all $k = 1, \ldots, j$,

$t^j_k = \inf\{t_k \mid \mathbf{t} = [t_1, t_2, \ldots, t_n]^T \in T \text{ with } t_i = t^j_i \text{ for } i = 1, \ldots, k-1\}.$

Suppose that $\mathbf{t}^j$ is not an extreme point of $T$. Then there exist two distinct points $\mathbf{y}^1, \mathbf{y}^2 \in T$ such that $\mathbf{t}^j = \lambda\mathbf{y}^1 + (1-\lambda)\mathbf{y}^2$ for some $\lambda \in (0, 1)$. Thus

$t^j_i = \lambda y^1_i + (1-\lambda)y^2_i \quad\text{for all } i = 1, 2, \ldots, n.$

For all $i = 1, \ldots, j$, since $t^j_i$ is the infimum, $t^j_i \le y^1_i$ and $t^j_i \le y^2_i$. It follows that $t^j_i = y^1_i = y^2_i$ for $i = 1, \ldots, j$, a contradiction to the uniqueness of $\mathbf{t}^j$.
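The successive-minimization process in the proof above is, in effect, a lexicographic minimum: minimize the first coordinate, break ties with the second, and so on. For a convex polyhedron given by finitely many vertices, the lexicographically smallest vertex is always an extreme point. A minimal sketch (the point set is an arbitrary illustration):

```python
# Vertices of a planar convex polygon (arbitrary illustrative data).
points = [(1.0, 3.0), (0.0, 2.0), (0.0, 5.0), (2.0, 0.0), (4.0, 1.0)]

# Step 1: minimise the first coordinate; step 2: among ties, minimise
# the second; etc. Tuple comparison in Python is lexicographic, so a
# single min() call performs the whole process on a finite set.
extreme = min(points)
print(extreme)  # (0.0, 2.0)
```

Note both (0.0, 2.0) and (0.0, 5.0) attain the minimum first coordinate; the second coordinate then breaks the tie, exactly as in the proof.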
Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms
More informationMath 215 HW #6 Solutions
Math 5 HW #6 Solutions Problem 34 Show that x y is orthogonal to x + y if and only if x = y Proof First, suppose x y is orthogonal to x + y Then since x, y = y, x In other words, = x y, x + y = (x y) T
More informationMA106 Linear Algebra lecture notes
MA106 Linear Algebra lecture notes Lecturers: Martin Bright and Daan Krammer Warwick, January 2011 Contents 1 Number systems and fields 3 1.1 Axioms for number systems......................... 3 2 Vector
More informationMATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.
MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α
More informationT ( a i x i ) = a i T (x i ).
Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)
More informationSection 6.1  Inner Products and Norms
Section 6.1  Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,
More informationThe Dirichlet Unit Theorem
Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if
More informationSolving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
More informationBANACH AND HILBERT SPACE REVIEW
BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but
More informationOpen and Closed Sets
Open and Closed Sets Definition: A subset S of a metric space (X, d) is open if it contains an open ball about each of its points i.e., if x S : ɛ > 0 : B(x, ɛ) S. (1) Theorem: (O1) and X are open sets.
More informationNUMERICALLY EFFICIENT METHODS FOR SOLVING LEAST SQUARES PROBLEMS
NUMERICALLY EFFICIENT METHODS FOR SOLVING LEAST SQUARES PROBLEMS DO Q LEE Abstract. Computing the solution to Least Squares Problems is of great importance in a wide range of fields ranging from numerical
More informationMetric Spaces. Chapter 7. 7.1. Metrics
Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some
More informationPractical Guide to the Simplex Method of Linear Programming
Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear
More informationOrthogonal Diagonalization of Symmetric Matrices
MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding
More informationFactorization Theorems
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
More informationThe determinant of a skewsymmetric matrix is a square. This can be seen in small cases by direct calculation: 0 a. 12 a. a 13 a 24 a 14 a 23 a 14
4 Symplectic groups In this and the next two sections, we begin the study of the groups preserving reflexive sesquilinear forms or quadratic forms. We begin with the symplectic groups, associated with
More informationMATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3
MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................
More informationTHREE DIMENSIONAL GEOMETRY
Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,
More information1 Eigenvalues and Eigenvectors
Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x
More informationMath212a1010 Lebesgue measure.
Math212a1010 Lebesgue measure. October 19, 2010 Today s lecture will be devoted to Lebesgue measure, a creation of Henri Lebesgue, in his thesis, one of the most famous theses in the history of mathematics.
More informationImages and Kernels in Linear Algebra By Kristi Hoshibata Mathematics 232
Images and Kernels in Linear Algebra By Kristi Hoshibata Mathematics 232 In mathematics, there are many different fields of study, including calculus, geometry, algebra and others. Mathematics has been
More informationMathematical Methods of Engineering Analysis
Mathematical Methods of Engineering Analysis Erhan Çinlar Robert J. Vanderbei February 2, 2000 Contents Sets and Functions 1 1 Sets................................... 1 Subsets.............................
More informationRi and. i=1. S i N. and. R R i
The subset R of R n is a closed rectangle if there are n nonempty closed intervals {[a 1, b 1 ], [a 2, b 2 ],..., [a n, b n ]} so that R = [a 1, b 1 ] [a 2, b 2 ] [a n, b n ]. The subset R of R n is an
More informationInner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
More informationMonotone maps of R n are quasiconformal
Monotone maps of R n are quasiconformal K. Astala, T. Iwaniec and G. Martin For Neil Trudinger Abstract We give a new and completely elementary proof showing that a δ monotone mapping of R n, n is K quasiconformal
More information1 Norms and Vector Spaces
008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)
More informationSimilarity and Diagonalization. Similar Matrices
MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that
More informationMetric Spaces. Chapter 1
Chapter 1 Metric Spaces Many of the arguments you have seen in several variable calculus are almost identical to the corresponding arguments in one variable calculus, especially arguments concerning convergence
More informationLinear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus ndimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
More informationLargest FixedAspect, AxisAligned Rectangle
Largest FixedAspect, AxisAligned Rectangle David Eberly Geometric Tools, LLC http://www.geometrictools.com/ Copyright c 19982016. All Rights Reserved. Created: February 21, 2004 Last Modified: February
More informationLectures notes on orthogonal matrices (with exercises) 92.222  Linear Algebra II  Spring 2004 by D. Klain
Lectures notes on orthogonal matrices (with exercises) 92.222  Linear Algebra II  Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n realvalued matrix A is said to be an orthogonal
More informationTHE BANACH CONTRACTION PRINCIPLE. Contents
THE BANACH CONTRACTION PRINCIPLE ALEX PONIECKI Abstract. This paper will study contractions of metric spaces. To do this, we will mainly use tools from topology. We will give some examples of contractions,
More informationThis exposition of linear programming
Linear Programming and the Simplex Method David Gale This exposition of linear programming and the simplex method is intended as a companion piece to the article in this issue on the life and work of George
More informationα = u v. In other words, Orthogonal Projection
Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v
More informationLinear Programming I
Linear Programming I November 30, 2003 1 Introduction In the VCR/guns/nuclear bombs/napkins/star wars/professors/butter/mice problem, the benevolent dictator, Bigus Piguinus, of south Antarctica penguins
More information1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
More informationLINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
More informationLinear Algebra Notes
Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note
More informationChapter 6. Cuboids. and. vol(conv(p ))
Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both
More informationSOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 3. Spaces with special properties
SOLUTIONS TO EXERCISES FOR MATHEMATICS 205A Part 3 Fall 2008 III. Spaces with special properties III.1 : Compact spaces I Problems from Munkres, 26, pp. 170 172 3. Show that a finite union of compact subspaces
More informationSection 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
More informationDiagonal, Symmetric and Triangular Matrices
Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by
More informationSec 4.1 Vector Spaces and Subspaces
Sec 4. Vector Spaces and Subspaces Motivation Let S be the set of all solutions to the differential equation y + y =. Let T be the set of all 2 3 matrices with real entries. These two sets share many common
More informationArrangements And Duality
Arrangements And Duality 3.1 Introduction 3 Point configurations are tbe most basic structure we study in computational geometry. But what about configurations of more complicated shapes? For example,
More informationISOMETRIES OF R n KEITH CONRAD
ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x
More informationSolutions to Review Problems
Chapter 1 Solutions to Review Problems Chapter 1 Exercise 42 Which of the following equations are not linear and why: (a x 2 1 + 3x 2 2x 3 = 5. (b x 1 + x 1 x 2 + 2x 3 = 1. (c x 1 + 2 x 2 + x 3 = 5. (a
More information160 CHAPTER 4. VECTOR SPACES
160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results
More information2.1 Functions. 2.1 J.A.Beachy 1. from A Study Guide for Beginner s by J.A.Beachy, a supplement to Abstract Algebra by Beachy / Blair
2.1 J.A.Beachy 1 2.1 Functions from A Study Guide for Beginner s by J.A.Beachy, a supplement to Abstract Algebra by Beachy / Blair 21. The Vertical Line Test from calculus says that a curve in the xyplane
More information1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationLecture 3: Linear Programming Relaxations and Rounding
Lecture 3: Linear Programming Relaxations and Rounding 1 Approximation Algorithms and Linear Relaxations For the time being, suppose we have a minimization problem. Many times, the problem at hand can
More information6.254 : Game Theory with Engineering Applications Lecture 5: Existence of a Nash Equilibrium
6.254 : Game Theory with Engineering Applications Lecture 5: Existence of a Nash Equilibrium Asu Ozdaglar MIT February 18, 2010 1 Introduction Outline PricingCongestion Game Example Existence of a Mixed
More informationMinimally Infeasible Set Partitioning Problems with Balanced Constraints
Minimally Infeasible Set Partitioning Problems with alanced Constraints Michele Conforti, Marco Di Summa, Giacomo Zambelli January, 2005 Revised February, 2006 Abstract We study properties of systems of
More informationLinear Programming: Theory and Applications
Linear Programming: Theory and Applications Catherine Lewis May 11, 2008 1 Contents 1 Introduction to Linear Programming 3 1.1 What is a linear program?...................... 3 1.2 Assumptions.............................
More informationRecall that two vectors in are perpendicular or orthogonal provided that their dot
Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal
More informationINTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL
SOLUTIONS OF THEORETICAL EXERCISES selected from INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL Eighth Edition, Prentice Hall, 2005. Dr. Grigore CĂLUGĂREANU Department of Mathematics
More information1 Solving LPs: The Simplex Algorithm of George Dantzig
Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.
More informationLecture Note on Linear Algebra 15. Dimension and Rank
Lecture Note on Linear Algebra 15. Dimension and Rank WeiShi Zheng, wszheng@ieee.org, 211 November 1, 211 1 What Do You Learn from This Note We still observe the unit vectors we have introduced in Chapter
More informationIntroduction to Algebraic Geometry. Bézout s Theorem and Inflection Points
Introduction to Algebraic Geometry Bézout s Theorem and Inflection Points 1. The resultant. Let K be a field. Then the polynomial ring K[x] is a unique factorisation domain (UFD). Another example of a
More informationINCIDENCEBETWEENNESS GEOMETRY
INCIDENCEBETWEENNESS GEOMETRY MATH 410, CSUSM. SPRING 2008. PROFESSOR AITKEN This document covers the geometry that can be developed with just the axioms related to incidence and betweenness. The full
More informationMATH1231 Algebra, 2015 Chapter 7: Linear maps
MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter
More informationSYSTEMS OF EQUATIONS
SYSTEMS OF EQUATIONS 1. Examples of systems of equations Here are some examples of systems of equations. Each system has a number of equations and a number (not necessarily the same) of variables for which
More information