Applied Optimization: Formulation and Algorithms for Engineering Systems Slides
Applied Optimization: Formulation and Algorithms for Engineering Systems — Slides
Ross Baldick
Department of Electrical and Computer Engineering
The University of Texas at Austin, Austin, TX
Copyright © 2016 Ross Baldick
Three introductory chapters
1 Introduction
1.1 Road map
In this course, we are going to:
formulate various types of numerical problems, and
develop techniques for solving them.
Road map, continued
We will use a number of case studies to:
(i) illustrate the process of formulating a problem, that is, translating an intuitive idea of the problem into mathematics by writing it down,
(ii) motivate and develop algorithms to solve problems, that is, descriptions of operations that can be implemented in software to take a problem specification and return a solution, and
(iii) illustrate how to match the formulation of the problem to the capabilities of available algorithms, involving, in some cases, transformation of the problem from its initial formulation.
Road map, continued
We will consider five general problem classes:
(i) linear systems of equations,
(ii) non-linear systems of equations,
(iii) unconstrained optimization,
(iv) equality-constrained optimization, and
(v) inequality-constrained optimization.
Road map, continued
We will mostly consider problems that are defined in terms of smooth functions of continuous variables. We will emphasize issues that have proved pivotal in algorithm development and problem formulation:
(i) monotonicity,
(ii) convexity,
(iii) problem transformations,
(iv) symmetry, and
(v) sparsity.
1.2 Goals
At the end of the course, you should be able to:
(i) take a description of your problem,
(ii) translate it into a mathematical formulation,
(iii) evaluate whether optimization techniques will be successful,
(iv) if the problem is tractable, solve small- to medium-scale versions of it using commercial code, and
(v) use the solution of the problem to calculate sensitivities to changes in the problem specification.
1.3 Pre-requisites
The course assumes familiarity with MATLAB and MATLAB M-files, and that you have access to the MATLAB Optimization Toolbox. Downloadable Appendix A collects notational conventions and a number of results that we will use in the course.
2 Problems, algorithms, and solutions
In this chapter, we define the various types of problems that we will treat in the rest of the course.
Outline
In Section 2.1, we define the decision vector. In Section 2.2 we define two problems involving solution of simultaneous equations. In Section 2.3 we describe three optimization problems. We define an algorithm in Section 2.4 in reference to two general schemata:
direct algorithms, which, in principle, obtain the exact solution to the problem in a finite number of operations, and
iterative algorithms, which generate a sequence of approximate solutions, or iterates, that, in principle, approach the exact solution to the problem.
2.1 Decision vector
The problems will involve choices of a value of a decision vector from n-dimensional Euclidean space R^n, or from some subset S ⊆ R^n, where:
R is the set of real numbers, and
R^n is the set of n-tuples of real numbers.
We will usually denote the decision vector by x.
2.2 Simultaneous equations
2.2.1 Definition
Consider a vector function g that takes values of a decision vector in a domain R^n and returns values of the function that lie in a range R^m. We write g : R^n → R^m to denote the domain and range of the function. Suppose we want to find a value x* of the argument x that satisfies:
g(x*) = 0. (2.1)
A value x* that satisfies (2.1) is called a solution of the simultaneous equations g(x) = 0.
Example
Figure 2.1 shows the case of a function g : R^2 → R^2. There are two sets illustrated by the solid curves, {x ∈ R^2 | g_1(x) = 0} and {x ∈ R^2 | g_2(x) = 0}. These two sets intersect at two points, x* and x**, illustrated as bullets. The points x* and x** are the two solutions of the simultaneous equations g(x) = 0, so that {x ∈ R^n | g(x) = 0} = {x*, x**}.
Fig. 2.1. Example of simultaneous equations and their solution.
Inconsistent and redundant equations
If there are no solutions to the equations then {x ∈ R^n | g(x) = 0} = ∅, where ∅ is the empty set, and we say that the equations are inconsistent. If some linear combination of the entries of g (with coefficients not all zero) yields a function that is identically zero then we say that the equations are redundant. For example, if two entries of g are the same then the equations are redundant.
Fig. 2.2. Example of inconsistent simultaneous equations.
2.2.2 Types of problems
Linear simultaneous equations
Suppose that g : R^n → R^m in (2.1) is affine, that is, of the form:
∀x ∈ R^n, g(x) = Ax − b.
Then we have a set of linear simultaneous equations:
Ax − b = 0.
Examples
For example, if:
A = [1 2; 3 4], b = [1; 1], (2.2)
then:
x* = [−1; 1]
is a solution.
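As a sketch (not from the slides), the small square system in (2.2), reading A = [1 2; 3 4] and b = [1; 1], can be checked numerically with NumPy:

```python
import numpy as np

# A and b as read from equation (2.2): a square, non-singular system.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([1.0, 1.0])

# Direct solution of A x = b (LU factorization under the hood).
x_star = np.linalg.solve(A, b)
print(x_star)  # close to [-1.  1.]
```

Substituting back, A x* reproduces b, confirming that x* solves the equations.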
Examples, continued
Fig. 2.3. Solution of the linear simultaneous equations g(x) = Ax − b = 0 with A and b defined as in (2.2).
Examples, continued
As another example, for the A and b defined in (2.3), the indicated x* is a solution.
Number of solutions
There may be several values that satisfy the equations.
Case studies
Nodal analysis of a direct current linear circuit (in Section 4.1), and
Control of a discrete-time linear system (in Section 4.2).
Non-linear simultaneous equations
Examples
For example, suppose that the function g : R → R is defined by:
∀x ∈ R, g(x) = (x)^2 + 2x − 3. (2.4)
The quadratic formula shows that the two solutions are:
x* = −3, x** = 1.
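The two roots of (2.4) can be reproduced mechanically with the quadratic formula; a minimal sketch (not from the slides):

```python
import math

# Coefficients of g(x) = x^2 + 2x - 3 from equation (2.4).
a, b, c = 1.0, 2.0, -3.0

# Quadratic formula: x = (-b ± sqrt(b^2 - 4ac)) / (2a).
disc = b * b - 4.0 * a * c
roots = sorted([(-b - math.sqrt(disc)) / (2.0 * a),
                (-b + math.sqrt(disc)) / (2.0 * a)])
print(roots)  # [-3.0, 1.0]
```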
Examples, continued
As another example, let g : R^2 → R^2 be defined by:
∀x ∈ R^2, g(x) = [ (x_1)^2 + (x_2)^2 + 2x_2 − 3 ; x_1 − x_2 ]. (2.5)
Fig. 2.4. Solution of non-linear simultaneous equations g(x) = 0 with g defined as in (2.5). The two solutions x* and x** lie at the intersections of the sets {x ∈ R^2 | g_1(x) = 0} and {x ∈ R^2 | g_2(x) = 0}.
Examples, continued
As a third example, let g : R → R be defined by:
∀x ∈ R, g(x) = (x − 2)^3 + 1. (2.6)
By inspection, x* = 1 is the unique solution to g(x) = 0.
Algorithms and number of solutions
Larger problems may also possess multiple solutions or no solutions under some circumstances.
Case studies
Nodal analysis of a non-linear direct current electric circuit (in Section 6.1), and
Analysis of an electric power system (in Section 6.2).
Eigenvalue problems
Let K be the set of complex numbers. The n (not necessarily distinct) eigenvalues of a matrix A ∈ R^{n×n} are given by the (possibly complex) solutions of the characteristic equation for A:
g(λ) = 0,
where g : K → K is the characteristic polynomial, defined by:
∀λ ∈ K, g(λ) = det(A − λI).
The eigenvectors associated with an eigenvalue λ are the solutions of:
(A − λI)x = 0.
Example
A = [2 1; −5 −4],
∀λ ∈ K, g(λ) = det(A − λI)
= det [2 − λ, 1; −5, −4 − λ]
= (2 − λ)(−4 − λ) − (1)(−5)
= (λ)^2 + 2λ − 3.
From the previous example, we already know that the two solutions to g(λ) = 0 are:
λ* = −3, λ** = 1,
so these are the eigenvalues of A.
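A numerical cross-check of this example (a sketch, not from the slides), using NumPy's eigenvalue routine rather than the characteristic polynomial:

```python
import numpy as np

# The matrix from the eigenvalue example above.
A = np.array([[ 2.0,  1.0],
              [-5.0, -4.0]])

# Roots of det(A - lambda*I), computed by LAPACK.
eigenvalues = np.linalg.eigvals(A)
print(sorted(eigenvalues.real))  # [-3.0, 1.0] up to rounding
```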
Example, continued
The eigenvectors associated with λ* = −3 are the vectors in the set:
{x ∈ R^2 | (A + 3I)x = 0}.
The eigenvectors associated with λ** = 1 are the vectors in the set:
{x ∈ R^2 | (A − I)x = 0}.
Discussion
There are special iterative algorithms for eigenvalue problems that are somewhat different in flavor from the algorithms we will describe for solving general linear and non-linear equations. We will not discuss general algorithms for eigenvalue calculation.
2.3 Optimization
2.3.1 Definitions
Objective
Consider a function f : R^n → R that quantifies the cost or lack of desirability of solutions for a particular model or system. That is, f(x) is the cost of using x as the solution. The function f is called an objective function.
Example
An example of a quadratic function f : R^2 → R is given by:
∀x ∈ R^2, f(x) = (x_1)^2 + (x_2)^2 + 2x_2 − 3. (2.7)
Fig. 2.5. Graph of the example objective function defined in (2.7).
Discussion
We can categorize objectives according to the highest power of any entry of the argument appearing in the objective. We will categorize objectives in a different way, once we have discussed optimization in more detail.
Feasible set
Our problem might involve restrictions on the choices of values of x. We can imagine a feasible set S ⊆ R^n from which we must select a solution.
Problem
A minimization problem means finding the minimum value of f(x) over choices of x that lie in the feasible set S.
Definition 2.1 Let S ⊆ R^n, f : S → R, and f* ∈ R. Then by:
f* = min_{x ∈ S} f(x), (2.8)
we mean that:
∃x* ∈ S such that ( f* = f(x*) ) and ( ∀x ∈ S, f(x*) ≤ f(x) ). (2.9)
Set of minimizers
The set of all minimizers of min_{x ∈ S} f(x) is denoted by:
argmin_{x ∈ S} f(x).
If the problem has no minimum (and, therefore, no minimizers) then we define:
argmin_{x ∈ S} f(x) = ∅.
To emphasize the role of S, we also use the notations:
min_{x ∈ R^n} { f(x) | x ∈ S } and argmin_{x ∈ R^n} { f(x) | x ∈ S }.
We will often use a more explicit notation if S is defined as the set of points satisfying a criterion. For example, if f : R^n → R, g : R^n → R^m, h : R^n → R^r, and S = {x ∈ R^n | g(x) = 0, h(x) ≤ 0}, then we will write min_{x ∈ R^n} { f(x) | g(x) = 0, h(x) ≤ 0 } for min_{x ∈ S} f(x).
Multiple minimizers
For example, consider the function f : R → R defined by:
∀x ∈ R, f(x) = (x)^2 (x − 1)^2.
Fig. 2.6. Function having multiple unconstrained minimizers, indicated by the bullets.
Lower bound
Definition 2.2 Let S ⊆ R^n, f : S → R, and f̲ ∈ R. If f̲ satisfies:
∀x ∈ S, f̲ ≤ f(x),
then we say that f̲ is a lower bound for the problem min_{x ∈ S} f(x), or that the problem min_{x ∈ S} f(x) is bounded below by f̲. If S ≠ ∅ but no such f̲ exists, then we say that the problem min_{x ∈ S} f(x) is unbounded below (or unbounded, if "below" is clear from context).
Consider f : R^2 → R defined in (2.7), which we repeat here:
∀x ∈ R^2, f(x) = (x_1)^2 + (x_2)^2 + 2x_2 − 3.
This function is illustrated in Figure 2.5. For the feasible set S = R^2, the value f̲ = −10 is a lower bound for the problem min_{x ∈ S} f(x), as shown in Figure 2.5.
Level and contour sets
Definition 2.3 Let S ⊆ R^n, f : S → R, and f̃ ∈ R. Then the level set at value f̃ of the function f is the set:
L_f(f̃) = {x ∈ S | f(x) ≤ f̃}.
The contour set at value f̃ of the function f is the set:
C_f(f̃) = {x ∈ S | f(x) = f̃}.
For each possible function f, we can think of L_f and C_f themselves as set-valued functions from R to 2^(R^n), where 2^(R^n) denotes the set of all subsets of R^n, sometimes called the power set of R^n.
Example
Consider the function f : R^2 → R defined by:
∀x ∈ R^2, f(x) = (x_1 − 1)^2 + (x_2 − 3)^2. (2.10)
Fig. 2.7. Graph of the function defined in (2.10).
Contour sets for example
The contour sets C_f(f̃) can be shown in a two-dimensional representation.
Fig. 2.8. Contour sets C_f(f̃) of the function defined in (2.10) for values f̃ = 0, 2, 4, 6, .... The heights of the contours decrease towards the point [1; 3], which is illustrated with a • and is the contour of height 0.
2.3.2 Types of problems
The three general forms of S that we will consider are:
unconstrained optimization,
equality-constrained optimization, and
inequality-constrained optimization.
Unconstrained optimization
If S = R^n then the problem is said to be unconstrained.
Example
For example, consider the objective f : R^2 → R defined in (2.10):
∀x ∈ R^2, f(x) = (x_1 − 1)^2 + (x_2 − 3)^2.
From Figure 2.8, which shows the contour sets of f, we can see that:
min_{x ∈ R^2} f(x) = f* = 0,
argmin_{x ∈ R^2} f(x) = { [1; 3] },
so that there is a minimum f* = 0 and a unique minimizer x* = [1; 3] of this problem.
Another example
Consider a linear system Ax − b = 0 that does not have a solution. We may instead seek a value of the decision vector that most nearly satisfies Ax = b in the sense of minimizing a criterion. A natural criterion is to consider a norm ‖·‖ and then seek x that minimizes ‖Ax − b‖:
min_{x ∈ R^n} ‖Ax − b‖. (2.11)
Case studies
Multi-variate linear regression (in Section 9.1), and
Power system state estimation (in Section 9.2).
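For the Euclidean norm, problem (2.11) is a linear least-squares problem. A minimal sketch (the 3×2 matrix and right-hand side here are illustrative, not from the slides):

```python
import numpy as np

# An inconsistent system: three equations, two unknowns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])

# Minimize ||Ax - b||_2; lstsq computes the least-squares solution stably.
x_ls, residual, rank, sing = np.linalg.lstsq(A, b, rcond=None)
print(x_ls)  # approximately [0.6667, 0.6667]
```

Here no x satisfies all three equations exactly, but x_ls minimizes the residual norm; it solves the normal equations AᵀA x = Aᵀb.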
Equality-constrained optimization
If g : R^n → R^m and S = {x ∈ R^n | g(x) = 0} then the problem is said to be equality-constrained.
Sub-types of equality-constrained optimization problems
Linearly constrained
If g is affine then the problem is called linearly constrained.
Example
∀x ∈ R^2, f(x) = (x_1 − 1)^2 + (x_2 − 3)^2,
∀x ∈ R^2, g(x) = x_1 − x_2, (2.12)
min_{x ∈ R^2} { f(x) | g(x) = 0 } = min_{x ∈ R^2} { f(x) | x_1 − x_2 = 0 }. (2.13)
Example, continued
The unique minimizer of Problem (2.13) is x* = [2; 2].
Fig. 2.9. Contour sets C_f(f̃) of the objective, repeated from Figure 2.8, with the feasible set from Problem (2.13) superimposed. The heights of the contours decrease towards the point [1; 3]. The minimizer x* = [2; 2] is illustrated with a •.
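On the feasible set of (2.13) we can substitute x_1 = x_2 = t, so the minimizer can be checked by one-variable calculus; a sketch (not from the slides):

```python
# On the feasible set of (2.13), x1 = x2 = t, so
# f = (t - 1)^2 + (t - 3)^2, with derivative 2(t - 1) + 2(t - 3) = 4t - 8.
# Setting the derivative to zero gives t = 2, i.e. x* = [2; 2].
t_star = 8.0 / 4.0

# Coarse numerical confirmation on a grid.
f = lambda t: (t - 1.0) ** 2 + (t - 3.0) ** 2
grid = [i / 100.0 for i in range(-500, 501)]
t_grid = min(grid, key=f)
print(t_star, t_grid)  # 2.0 2.0
```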
Non-linearly constrained
If there is no restriction on g then the problem is called non-linearly constrained.
Example
For example, consider the same objective as previously, f : R^2 → R defined in (2.10):
∀x ∈ R^2, f(x) = (x_1 − 1)^2 + (x_2 − 3)^2.
However, let g : R^2 → R be defined by:
∀x ∈ R^2, g(x) = (x_1)^2 + (x_2)^2 + 2x_2 − 3.
Consider the equality-constrained problem:
min_{x ∈ R^2} { f(x) | g(x) = 0 }. (2.14)
Example, continued
The unique minimizer x* of Problem (2.14) is illustrated in Figure 2.10.
Fig. 2.10. Contour sets C_f(f̃) of the objective, repeated from Figure 2.8, with the feasible set from Problem (2.14) superimposed. The heights of the contours decrease towards the point [1; 3]. The minimizer x* is illustrated as a •.
Case studies
Least-cost production of a group of manufacturing facilities that must collectively meet a demand constraint (in Section 12.1), and
Power system state estimation with zero injection buses (in Section 12.2).
Inequality-constrained optimization
If g : R^n → R^m, h : R^n → R^r, and S = {x ∈ R^n | g(x) = 0, h(x) ≤ 0} then the problem is said to be inequality-constrained.
Sub-types of inequality-constrained optimization problems
Non-negatively constrained
If h is of the form:
∀x, h(x) = −x,
so that the constraints are of the form x ≥ 0, then the problem is non-negatively constrained.
Linear inequality constraints
If h is affine then the problem is linear inequality-constrained.
Linear program
If the objective is linear and g and h are affine then the problem is called a linear program or a linear optimization problem.
Example
∀x ∈ R^2, f(x) = x_1 − x_2,
∀x ∈ R^2, g(x) = x_1 + x_2 − 1,
∀x ∈ R^2, h(x) = [ −x_1 ; −x_2 ],
min_{x ∈ R^2} { f(x) | g(x) = 0, h(x) ≤ 0 } = min_{x ∈ R^2} { x_1 − x_2 | x_1 + x_2 − 1 = 0, x_1 ≥ 0, x_2 ≥ 0 }. (2.15)
Example, continued
Fig. 2.11. Contour sets C_f(f̃) of the objective function and the feasible set for Problem (2.15). The contour sets are the parallel lines. The feasible set is the line segment joining the two points [1; 0] and [0; 1]. The heights of the contours decrease to the left and up. The minimizer x* = [0; 1] is illustrated as a •.
Linear program, continued
We often emphasize the linear and affine functions by writing:
min_{x ∈ R^n} { c^T x | Ax = b, Cx ≤ d },
where c ∈ R^n, A ∈ R^{m×n}, b ∈ R^m, C ∈ R^{r×n}, and d ∈ R^r. For Problem (2.15), the appropriate vectors and matrices are:
c = [1; −1], A = [1 1], b = [1], C = [−1 0; 0 −1], d = [0; 0].
We can write this non-negatively constrained problem even more concisely as:
min_{x ∈ R^n} { c^T x | Ax = b, x ≥ 0 }. (2.16)
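A linear program attains its minimum at an extreme point of the feasible set, so Problem (2.15) can be checked by evaluating the objective at the two endpoints of the feasible segment; a small sketch (not from the slides):

```python
# Feasible set of (2.15): the segment joining [1, 0] and [0, 1].
vertices = [(1.0, 0.0), (0.0, 1.0)]

# Objective f(x) = x1 - x2.
f = lambda x: x[0] - x[1]

x_star = min(vertices, key=f)
print(x_star, f(x_star))  # (0.0, 1.0) -1.0
```

For larger LPs this vertex enumeration is impractical; that is exactly what the simplex and interior point algorithms mentioned below avoid.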
Linear program, continued
There is a rich body of literature on linear programming, and there are special-purpose algorithms to solve linear programming problems. The best known are:
the simplex algorithm (and variants), and
interior point algorithms.
Standard format
If g is affine and the inequality constraints are non-negativity constraints then the problem is said to be in the standard format. Problem (2.16) is a linear program in standard format.
Quadratic program
If f is quadratic and g and h are affine then the problem is called a quadratic program or a quadratic optimization problem.
Example
∀x ∈ R^2, f(x) = (x_1 − 1)^2 + (x_2 − 3)^2,
∀x ∈ R^2, g(x) = x_1 − x_2,
∀x ∈ R^2, h(x) = 3 − x_2. (2.17)
Example, continued
Fig. 2.12. Contour sets C_f(f̃) of the objective function and the feasible set for Problem (2.18). The heights of the contours decrease towards the point [1; 3]. The feasible set is the half-line starting at the point [3; 3]. The minimizer x* = [3; 3] is illustrated with a •.
Example, continued
min_{x ∈ R^2} { f(x) | g(x) = 0, h(x) ≤ 0 } = 4, (2.18)
argmin_{x ∈ R^2} { f(x) | g(x) = 0, h(x) ≤ 0 } = { [3; 3] } = {x*}.
Quadratic program, continued
We can emphasize the quadratic and linear functions by writing:
min_{x ∈ R^2} { (1/2) x^T Q x + c^T x | Ax = b, Cx ≤ d },
where we have omitted the constant term. For Problem (2.18), the appropriate vectors and matrices are:
Q = [2 0; 0 2], c = [−2; −6], A = [1 −1], b = [0], C = [0 −1], d = [−3].
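The Q and c data can be checked against the original objective by restoring the omitted constant term, here 10, since (x_1 − 1)^2 + (x_2 − 3)^2 = (1/2)xᵀ(2I)x − 2x_1 − 6x_2 + 10; a sketch, not from the slides:

```python
import numpy as np

Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -6.0])

def f_original(x):
    # (x1 - 1)^2 + (x2 - 3)^2
    return (x[0] - 1.0) ** 2 + (x[1] - 3.0) ** 2

def f_qp(x):
    # (1/2) x^T Q x + c^T x + 10, with the constant term restored.
    return 0.5 * x @ Q @ x + c @ x + 10.0

for x in (np.array([0.0, 0.0]), np.array([3.0, 3.0]), np.array([-1.5, 2.0])):
    assert abs(f_original(x) - f_qp(x)) < 1e-9
print(f_qp(np.array([3.0, 3.0])))  # 4.0, the minimum in (2.18)
```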
Non-linear program
If there are no restrictions on f, g, and h, then the problem is called a non-linear program or a non-linear optimization problem.
Example
min_{x ∈ R^3} { f(x) | g(x) = 0, h(x) ≤ 0 }, (2.19)
where f : R^3 → R, g : R^3 → R^2, and h : R^3 → R are defined by:
∀x ∈ R^3, f(x) = (x_1)^2 + 2(x_2)^2,
∀x ∈ R^3, g(x) = [ 2 − x_2 − sin(x_3) ; −x_1 + sin(x_3) ],
∀x ∈ R^3, h(x) = sin(x_3) − 0.5.
Convexity
We will see later that we can also classify problems on the basis of the notion of convexity.
Satisfaction of constraints
Definition 2.4 Let h : R^n → R^r. An inequality constraint h_l(x) ≤ 0 is called a binding constraint or an active constraint at x* if h_l(x*) = 0. It is called non-binding or inactive at x* if h_l(x*) < 0. The set:
A(x*) = { l ∈ {1, ..., r} | h_l(x*) = 0 }
is called the set of active constraints or the active set for h(x) ≤ 0 at x*.
Definition 2.5 Let h : R^n → R^r. The point x* is called strictly feasible for the inequality constraint h_l(x) ≤ 0 if h_l(x*) < 0. The point x* is called strictly feasible for the inequality constraints h(x) ≤ 0 if h(x*) < 0.
If h : R^n → R^r is continuous and satisfies certain other conditions then:
the boundary of S = {x ∈ R^n | h(x) ≤ 0} is the set {x ∈ R^n | h(x) ≤ 0 and, for at least one l, h_l(x) = 0}, and
its interior is the set {x ∈ R^n | h(x) < 0}.
That is, the set of strictly feasible points for the inequality constraints is the interior of S.
Example
∀x ∈ R^2, h(x) = [ x_2 − 3 ; x_1 + x_2 − 10 ].
Fig. 2.13. Points x*, x**, and x*** that are feasible with respect to the inequality constraints. The feasible set is the shaded triangular region for which x_2 ≤ 3 and x_1 + x_2 ≤ 10.
Example, continued
x* = [5; −4]: The constraints h_1(x) ≤ 0 and h_2(x) ≤ 0 are non-binding, so the active set is A(x*) = ∅. This point is in the interior of the set {x ∈ R^2 | h(x) ≤ 0}.
x** = [5; 3]: The constraint h_2(x) ≤ 0 is non-binding while the constraint h_1(x) ≤ 0 is binding, so the active set is A(x**) = {1}. This point is on the boundary of the set {x ∈ R^2 | h(x) ≤ 0}.
x*** = [7; 3]: The constraints h_1(x) ≤ 0 and h_2(x) ≤ 0 are both binding, so the active set is A(x***) = {1, 2}. This point is on the boundary of the set {x ∈ R^2 | h(x) ≤ 0}.
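The active sets in this example can be computed mechanically; a sketch (not from the slides), assuming the reconstruction h(x) = [x_2 − 3; x_1 + x_2 − 10] and using 1-based constraint indices as in the text:

```python
def h(x):
    # h(x) = [x2 - 3, x1 + x2 - 10], constraints h_l(x) <= 0.
    return [x[1] - 3.0, x[0] + x[1] - 10.0]

def active_set(x, tol=1e-9):
    # 1-based indices l with h_l(x) = 0 (to within tol).
    return {l + 1 for l, v in enumerate(h(x)) if abs(v) <= tol}

print(active_set((5.0, -4.0)))  # set(): interior point
print(active_set((5.0, 3.0)))   # {1}: on the boundary, one constraint binding
print(active_set((7.0, 3.0)))   # {1, 2}: both constraints binding
```

The tolerance parameter matters in practice: with floating-point iterates, "binding" is always judged up to a small threshold.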
Example in higher dimension
Consider Figure 2.14, which shows a dodecahedron, a twelve-sided solid, in R^3. The dodecahedron is an example of a set that can be described in the form S = {x ∈ R^3 | h(x) ≤ 0} with h : R^3 → R^12 affine.
Fig. 2.14. Dodecahedron in R^3.
Various cases for a point in S
If x* is in the interior of the dodecahedron, then h(x*) < 0 and A(x*) = ∅.
If x* is on a face of the dodecahedron but not on an edge or vertex, then exactly one constraint l is binding, A(x*) = {l}, and x* is on the boundary.
If x* is on an edge but not a vertex of the dodecahedron, then exactly two constraints l, l′ are binding, A(x*) = {l, l′}, and x* is on the boundary.
If x* is a vertex of the dodecahedron, then exactly three constraints l, l′, l″ are binding, A(x*) = {l, l′, l″}, and x* is on the boundary.
Discussion
The importance of the notion of binding constraints is that it is typical for some, but not all, of the inequality constraints to be binding at the optimum.
Representation of inequality constraints
Most optimization software can deal directly with:
double-sided functional inequalities such as h̲ ≤ h(x) ≤ h̄, and
double-sided inequalities on variables such as x̲ ≤ x ≤ x̄.
For notational simplicity, we will usually restrict ourselves to inequalities of the form h(x) ≤ 0, but recognize that problems may be easier to express in terms of the more comprehensive form x̲ ≤ x ≤ x̄, h̲ ≤ h(x) ≤ h̄. It is almost always worthwhile to take advantage of the more comprehensive form when the software has the capability.
Case studies
Least-cost production with capacity constraints (in Section 15.1),
Optimal routing in a data communications network (in Section 15.2),
Least absolute value data fitting (in Section 15.3),
Optimal margin pattern classification (in Section 15.4),
Sizing of gate interconnects in integrated circuits (in Section 15.5), and
Optimal power flow (in Section 15.6).
Summary
For small example problems, inspection of a carefully drawn diagram can yield the minimum and minimizer. For larger problems where the dimension of x increases significantly past two, or the dimension of g or h increases, the geometry becomes more difficult to visualize and intuition becomes less reliable in predicting the solution.
2.3.3 Problems without minimum and the infimum
Analysis
To discuss problems that do not have a minimum, we need a more general definition.
Definition 2.6 Let S ⊆ R^n and f : S → R. Then inf_{x ∈ S} f(x), the infimum of the corresponding minimization problem min_{x ∈ S} f(x), is defined by:
inf_{x ∈ S} f(x) =
 the greatest lower bound for min_{x ∈ S} f(x), if min_{x ∈ S} f(x) is bounded below,
 −∞, if min_{x ∈ S} f(x) is unbounded below,
 +∞, if min_{x ∈ S} f(x) is infeasible.
By definition, the infimum is equal to the minimum of the corresponding minimization problem min_{x ∈ S} f(x) if the minimum exists, but the infimum exists even if the problem has no minimum. To emphasize the role of S, we also use the notation inf_{x ∈ R^n} { f(x) | x ∈ S } and analogous notations for the infimum.
Examples
Unconstrained problem with unbounded objective
Consider the objective f : R → R defined by:
∀x ∈ R, f(x) = x. (2.20)
There is no f̲ ∈ R such that ∀x ∈ R, f̲ ≤ f(x). The problem min_{x ∈ R} f(x) is unbounded below. The infimum is inf_{x ∈ R} f(x) = −∞.
Unconstrained problem with objective that is bounded below
f̲ = 0 is a lower bound for min_{x ∈ R} f(x), where:
∀x ∈ R, f(x) = exp(x).
The problem has no minimum, but the infimum is inf_{x ∈ R} f(x) = 0.
Fig. 2.15. The function exp is bounded below on the feasible set R but has no minimum.
Strict inequalities
Again consider the objective f : R → R defined in (2.20):
∀x ∈ R, f(x) = x,
but let the feasible set be:
S = {x ∈ R | x > 0}.
Figure 2.16 shows the objective on the feasible set. Note that ∀x ∈ S, f(x) ≥ 0, so the problem is bounded below by 0. However, there is no x* ∈ S such that f(x*) = 0. In this example, the problem is bounded below, but has neither a minimum nor a minimizer. For this problem, the infimum is inf_{x ∈ R} { f(x) | x > 0 } = 0.
Strict inequalities, continued
Fig. 2.16. Function that is bounded below on the feasible set but where the problem has no minimum because the feasible set is defined by a strict inequality. The function is illustrated only on the feasible set. The circle at x = 0, f(x) = 0 indicates that this point is not included in the graph, but points to the right of x = 0 and arbitrarily close to x = 0 are included in the graph.
Inconsistent constraints
Consider any objective f : R → R and let:
S = {x ∈ R | g(x) = 0},
where g : R → R^2 is defined by:
∀x ∈ R, g(x) = [ x + 1 ; x − 1 ].
Then there are no feasible solutions, since the equality constraints are inconsistent, and so S = ∅. In this example, there are no feasible values of x and therefore no minimum. The infimum is inf_{x ∈ R} { f(x) | g(x) = 0 } = +∞.
Discontinuous objective
Finally, let:
S = {x ∈ R | x ≥ 0},
and define f : S → R by:
∀x ∈ S, f(x) = { 1, if x = 0; x, if x ≠ 0. (2.21)
Discontinuous objective, continued
The problem min_{x ∈ S} f(x) is bounded below by zero, but there is again neither a minimum nor a minimizer. The infimum is inf_{x ∈ R} { f(x) | x ≥ 0 } = 0.
Fig. 2.17. Function (2.21) that is bounded below on the feasible set but where the problem has no minimum because the function is discontinuous. The function is illustrated only on the feasible set. The bullet at x = 0, f(x) = 1 indicates that this is the value of the function at x = 0.
Summary
In all five cases, argmin_{x ∈ S} f(x) is the empty set ∅. Careful formulation of a problem can avoid these issues.
2.3.4 Conditions for problems to possess a minimum and minimizer
Theorem 2.1 Let S ⊆ R^n be non-empty, closed, and bounded, and let f : S → R be continuous. Then the problem min_{x ∈ S} f(x) possesses a minimum and a minimizer.
2.3.5 Maximization problems and the supremum
max_{x ∈ S} f(x) = −min_{x ∈ S} (−f(x)). (2.22)
Definition 2.7 Let S ⊆ R^n and f : S → R. Then sup_{x ∈ S} f(x), the supremum of the corresponding maximization problem max_{x ∈ S} f(x), is defined by:
sup_{x ∈ S} f(x) =
 the least upper bound for max_{x ∈ S} f(x), if max_{x ∈ S} f(x) is bounded above,
 +∞, if max_{x ∈ S} f(x) is unbounded above,
 −∞, if max_{x ∈ S} f(x) is infeasible.
The supremum is equal to the maximum of the corresponding maximization problem max_{x ∈ S} f(x) if the maximum exists.
2.3.6 Extended real functions
Definition 2.8 Let S ⊆ R^n, f : S → R ∪ {−∞, ∞}, and f* ∈ R. Then by:
f* = min_{x ∈ S} f(x),
we mean that:
∃x* ∈ S such that ( f* = f(x*) ∈ R ) and ( ∀x ∈ S, f(x*) ≤ f(x) ).
2.4 Algorithms
Two basic types of algorithms:
Direct, to be described in Section 2.4.1, and
Iterative, to be described in Section 2.4.2.
2.4.1 Direct
A finite list of operations that calculates the solution of the problem.
Discussion
Under the (usually unrealistic) assumptions that:
all numbers in the problem specification are represented to infinite precision,
all arithmetic operations are carried out to infinite precision, and
the answers to each arithmetic operation are represented to infinite precision,
the answer obtained from a direct algorithm would be exact.
Applicability
Some problems cannot be solved by direct algorithms. Consider g : R → R such that g is a polynomial. For non-linear equations involving arbitrary fifth or higher degree polynomials, there is provably no direct algorithm available to find the solution.
2.4.2 Iterative
Recursion to define iterates:
x^(ν+1) = x^(ν) + α^(ν) Δx^(ν), ν = 0, 1, 2, ...,
where:
ν is the iteration counter,
x^(0) is the initial guess of the solution,
x^(ν) is the value of the iterate at the ν-th iteration,
α^(ν) ∈ R_+ is the step-size, with usually 0 < α^(ν) ≤ 1,
Δx^(ν) ∈ R^n is the step direction, and
the product α^(ν) Δx^(ν) is the update to add to the current iterate x^(ν).
Fig. 2.18. Update of iterate in R^2. The bullets indicate the locations of the points x^(ν) and x^(ν+1), while the arrows indicate the magnitudes and directions of the vectors Δx^(ν) and α^(ν) Δx^(ν).
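As a concrete sketch (not from the slides), the recursion can be instantiated with the Newton–Raphson step direction Δx^(ν) = −g(x^(ν))/g′(x^(ν)) and step-size α^(ν) = 1, applied to the scalar equation g(x) = x² + 2x − 3 from (2.4):

```python
def g(x):
    return x * x + 2.0 * x - 3.0

def g_prime(x):
    return 2.0 * x + 2.0

# Iterates x(nu+1) = x(nu) + alpha * dx(nu) with the Newton step direction.
x = 2.0       # initial guess x(0)
alpha = 1.0   # step-size
for nu in range(20):
    dx = -g(x) / g_prime(x)   # step direction
    x = x + alpha * dx        # update
print(x)  # converges to the solution x* = 1
```

From this initial guess the iterates approach x* = 1; starting to the left of x = −1 they would instead approach the other solution, x* = −3.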
Sequence of iterates and closeness to a solution
Definition 2.9 Let ‖·‖ be a norm on R^n and let {x^(ν)}_{ν=0}^∞ be a sequence of vectors in R^n. Then the sequence {x^(ν)}_{ν=0}^∞ converges to a limit x* if:
∀ε > 0, ∃N_ε ∈ Z_+ such that (ν ∈ Z_+ and ν ≥ N_ε) ⇒ ‖x^(ν) − x*‖ ≤ ε.
The set Z_+ is the set of non-negative integers. If the sequence {x^(ν)}_{ν=0}^∞ converges to x* then we write lim_{ν→∞} x^(ν) = x* and call x* the limit of the sequence {x^(ν)}_{ν=0}^∞.
Analysis
Rate of convergence
Definition 2.10 Let ‖·‖ be a norm. A sequence {x^(ν)}_{ν=0}^∞ that converges to x* ∈ R^n is said to converge at rate R ∈ R_{++} (where R_{++} is the set of strictly positive real numbers) and with rate constant C ∈ R_{++} if:
lim_{ν→∞} ‖x^(ν+1) − x*‖ / ‖x^(ν) − x*‖^R = C. (2.23)
If (2.23) is satisfied for R = 1 and some value of C in the range 0 < C < 1, then the rate is called linear. If (2.23) is satisfied for R = 2 and some C in the range 0 < C < ∞, then the rate is called quadratic. If (2.23) is satisfied for some R in the range 1 < R < 2 and some C in the range 0 < C < ∞, then the rate is called super-linear.
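The rate constant in (2.23) can be estimated numerically from successive errors. For the Newton–Raphson iteration on g(x) = x² + 2x − 3 (a sketch, not from the slides), algebra gives e^(ν+1) = (e^(ν))² / (2x^(ν) + 2) for the error e^(ν) = |x^(ν) − 1|, so the quadratic (R = 2) rate constant approaches C = 1/4 at x* = 1:

```python
def newton_step(x):
    return x - (x * x + 2.0 * x - 3.0) / (2.0 * x + 2.0)

x_star = 1.0
x = 2.0
errors = [abs(x - x_star)]
for _ in range(3):
    x = newton_step(x)
    errors.append(abs(x - x_star))

# Estimate C in (2.23) with R = 2: ||e(nu+1)|| / ||e(nu)||^2.
ratios = [errors[nu + 1] / errors[nu] ** 2 for nu in range(3)]
print(ratios)  # the ratios approach 0.25
```

Only a few iterations are usable for such estimates: once the error reaches machine precision, the ratios become meaningless.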
Discussion
Qualitatively, the larger the value of R, the faster the iterates converge, at least asymptotically.
Fig. 2.19. Rates of convergence ‖x^(ν) − x*‖ versus ν for several sequences, with: R = 1 and C = 0.9; R = 1 and C = 0.2; R = 2 and C = 0.9; and R = 1.5 and C = 0.9.
2.5 Solutions of simultaneous equations
2.5.1 Number of solutions
Consider a linear equation in one variable, Ax = b, where A, b ∈ R. The possible cases are:
0x = 0: infinitely many solutions,
0x = b, b ≠ 0: no solutions,
Ax = b, A ≠ 0: one solution.
2.5.2 Uniqueness of solution for linear equations
A necessary and sufficient condition for there to be a unique solution to a square system of equations is that the coefficient matrix A be non-singular.
Number of solutions, continued
Consider a quadratic equation in one variable, Q(x)^2 + Ax = b, where A, b, Q ∈ R. The possible cases are:
0(x)^2 + 0x = 0: infinitely many solutions,
0(x)^2 + 0x = b, b ≠ 0: no solutions,
0(x)^2 + Ax = b, A ≠ 0: one solution,
Q(x)^2 + Ax = b, Q ≠ 0, A^2 + 4Qb < 0: no (real) solutions,
Q(x)^2 + Ax = b, Q ≠ 0, A^2 + 4Qb = 0: one solution,
Q(x)^2 + Ax = b, Q ≠ 0, A^2 + 4Qb > 0: two solutions.
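The case analysis above can be encoded directly; a sketch (not from the slides) counting the real solutions of Q x² + A x = b:

```python
def count_real_solutions(Q, A, b):
    # Case analysis for Q x^2 + A x = b, with Q, A, b real.
    if Q == 0 and A == 0:
        return float("inf") if b == 0 else 0
    if Q == 0:
        return 1                      # linear equation, A != 0
    disc = A * A + 4.0 * Q * b        # discriminant of Q x^2 + A x - b = 0
    if disc < 0:
        return 0
    return 1 if disc == 0 else 2

# x^2 + 2x = 3, i.e. g(x) = x^2 + 2x - 3 from (2.4): two solutions.
print(count_real_solutions(1, 2, 3))  # 2
```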
78 2.5.3 Uniqueness of solution for non-linear equations

To study uniqueness, we will consider simultaneous equations where the number of equations equals the number of variables.

Monotone functions

Definition 2.11 Let $S \subseteq \mathbb{R}^n$ and let $g: S \to \mathbb{R}^n$. We say that $g$ is monotone on $S$ if:
$$\forall x, x' \in S, \ (g(x') - g(x))^\top (x' - x) \geq 0. \quad (2.24)$$
We say that $g$ is strictly monotone on $S$ if:
$$\forall x, x' \in S, \ (x \neq x') \Rightarrow (g(x') - g(x))^\top (x' - x) > 0.$$
If $g$ is monotone on $\mathbb{R}^n$ then we say that $g$ is monotone. If $g$ is strictly monotone on $\mathbb{R}^n$ then we say that $g$ is strictly monotone.
79 Monotone functions, continued

Geometrically, $g$ is monotone on $S$ if, for all pairs of vectors $x$ and $x'$ in $S$, the vectors $(x' - x)$ and $(g(x') - g(x))$ point in directions that are at an angle of no more than $90°$ to each other.

Fig. Illustration of the definition of monotone. For all $x$ and $x'$ in $S$, the vectors $(x' - x)$ and $(g(x') - g(x))$ point in directions that are at an angle of no more than $90°$ to each other.
80 Example

Even if a function $\hat{g}: \mathbb{R}^n \to \mathbb{R}^n$ is not strictly monotone, by permuting the entries of $\hat{g}$ it may be possible to create a strictly monotone function. Consider the function:
$$\forall x \in \mathbb{R}^2, \ \hat{g}(x) = \begin{bmatrix} x_2 \\ x_1 \end{bmatrix}.$$
This function is not strictly monotone since:
$$(\hat{g}(x') - \hat{g}(x))^\top (x' - x) = 2(x_2' - x_2)(x_1' - x_1) < 0, \text{ if } x_2' > x_2 \text{ and } x_1' < x_1.$$
However, the function $g: \mathbb{R}^2 \to \mathbb{R}^2$ obtained by swapping the entries of $\hat{g}$ is strictly monotone, since:
$$(g(x') - g(x))^\top (x' - x) = \|x' - x\|_2^2 > 0, \text{ for } x' \neq x.$$
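The two claims in the example can be spot-checked numerically (a sketch, with an arbitrarily chosen pair of points satisfying $x_1' > x_1$ and $x_2' < x_2$):

```python
# Verify that g_hat(x) = (x2, x1) violates strict monotonicity at a suitable
# pair of points, while the swapped function g(x) = (x1, x2) satisfies it.

def mono_product(g, x, xp):
    """(g(x') - g(x))^T (x' - x); positive for strictly monotone g, x != x'."""
    dg = [a - b for a, b in zip(g(xp), g(x))]
    dx = [a - b for a, b in zip(xp, x)]
    return sum(a * b for a, b in zip(dg, dx))

def g_hat(x):
    return (x[1], x[0])  # swapped entries: not strictly monotone

def g(x):
    return (x[0], x[1])  # entries of g_hat swapped back: the identity

x, xp = (0.0, 1.0), (1.0, 0.0)  # x1' > x1 and x2' < x2
viol = mono_product(g_hat, x, xp)  # equals 2(x2'-x2)(x1'-x1), negative here
ok = mono_product(g, x, xp)        # equals ||x' - x||_2^2, positive
```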
81 Analysis

Theorem 2.2 Let $S \subseteq \mathbb{R}^n$ and let $g: S \to \mathbb{R}^n$ be strictly monotone on $S$. Then there is at most one solution of the simultaneous equations $g(x) = 0$ that is an element of $S$.

Proof Suppose that there are two solutions $x^\star, x^{\star\star} \in S$ with $x^\star \neq x^{\star\star}$. That is, $g(x^\star) = g(x^{\star\star}) = 0$. Consequently, $(g(x^{\star\star}) - g(x^\star))^\top (x^{\star\star} - x^\star) = 0$. But by the definition of strictly monotone applied to $x^\star$ and $x^{\star\star}$, $(g(x^{\star\star}) - g(x^\star))^\top (x^{\star\star} - x^\star) > 0$. This is a contradiction.

Discussion It is possible for a function $g$ to be not strictly monotone and yet there may be a unique solution or no solution to the equations $g(x) = 0$.
82 Example

Consider $g: \mathbb{R} \to \mathbb{R}$ defined by $\forall x \in \mathbb{R}, g(x) = (x)^3 - x - 6$. This function is not strictly monotone, yet there is only one solution to $g(x) = 0$, namely $x^\star = 2$.

Fig. Function $g$ that is not strictly monotone but for which there is only one solution, $x^\star = 2$, to $g(x) = 0$.
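A quick numerical check of this example (a sketch): $g(2) = 0$, and the monotonicity product of Definition 2.11 is negative for a pair of points where $g$ is decreasing, so $g$ is indeed not monotone.

```python
# g(x) = x^3 - x - 6 has the unique real root x* = 2, but g'(x) = 3x^2 - 1
# is negative near x = 0, so g decreases there and monotonicity fails.

def g(x):
    return x ** 3 - x - 6.0

root = 2.0
# (g(x') - g(x)) (x' - x) for x = 0, x' = 0.5: negative, violating (2.24).
product = (g(0.5) - g(0.0)) * (0.5 - 0.0)
```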
83 Characterizing monotone and strictly monotone functions

Jacobian

The entries of the Jacobian $J: \mathbb{R}^n \to \mathbb{R}^{m \times n}$ are defined by:
$$\forall k = 1, \ldots, n, \ \forall \ell = 1, \ldots, m, \ J_{\ell k} = \frac{\partial g_\ell}{\partial x_k}.$$

Positive definite and positive semi-definite

A matrix $Q \in \mathbb{R}^{n \times n}$ is positive semi-definite if:
$$\forall x \in \mathbb{R}^n, \ x^\top Q x \geq 0.$$
The matrix is positive definite if:
$$\forall x \in \mathbb{R}^n, \ (x \neq 0) \Rightarrow (x^\top Q x > 0).$$
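For small symmetric matrices, positive definiteness can be checked without computing $x^\top Q x$ for every $x$. The sketch below (not from the slides) uses Sylvester's criterion for the symmetric $2 \times 2$ case: both leading principal minors must be positive. The example matrices are chosen for illustration.

```python
# Sylvester's criterion for a symmetric 2x2 matrix Q: Q is positive definite
# iff Q[0][0] > 0 and det(Q) > 0.  quad_form evaluates x^T Q x directly.

def is_positive_definite_2x2(Q):
    """Assumes Q is symmetric."""
    det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
    return Q[0][0] > 0 and det > 0

def quad_form(Q, x):
    return sum(x[i] * Q[i][j] * x[j] for i in range(2) for j in range(2))

Q_pd = [[2.0, 0.5], [0.5, 1.0]]   # positive definite: minors 2 and 1.75
Q_ind = [[1.0, 2.0], [2.0, 1.0]]  # indefinite: det = 1 - 4 < 0
```

Consistent with the definition, `quad_form(Q_pd, x)` is positive for non-zero `x`, while `Q_ind` admits directions with a negative quadratic form, e.g. `x = (1, -1)`.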
84 Convex sets

Definition 2.12 Let $S \subseteq \mathbb{R}^n$. We say that $S$ is a convex set or that $S$ is convex if:
$$\forall x, x' \in S, \ \forall t \in [0,1], \ (1-t)x + tx' \in S.$$
The sets $\mathbb{R}^n$, $\mathbb{R}^n_+$, and $\mathbb{R}^n_{++}$ are all convex.

Fig. Convex sets with pairs of points joined by line segments.
85 Examples of non-convex sets

Non-convex sets can have indentations.

Fig. Non-convex sets.
86 Conditions for strictly monotone

Theorem 2.3 Let $S \subseteq \mathbb{R}^n$ be a convex set and $g: S \to \mathbb{R}^n$. Suppose that $g$ is partially differentiable with continuous partial derivatives on $S$. Moreover, suppose that the Jacobian $J$ is positive semi-definite throughout $S$. Then $g$ is monotone on $S$. If $J$ is positive definite throughout $S$ then $g$ is strictly monotone on $S$.

Proof Suppose that $J$ is positive semi-definite throughout $S$. Let $x, x' \in S$. For $0 \leq t \leq 1$ we have that $(x + t[x' - x]) \in S$ since $S$ is a convex set. As $t$ varies from 0 to 1, $(x + t[x' - x])$ traces out the line segment joining $x$ and $x'$. Define $\phi: [0,1] \to \mathbb{R}$ by:
$$\forall t \in [0,1], \ \phi(t) = (x' - x)^\top g(x + t[x' - x]) = g(x + t[x' - x])^\top (x' - x).$$
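Theorem 2.3 can be illustrated numerically (a sketch, with an example function chosen here, not taken from the slides): an affine $g$ whose constant Jacobian is positive definite should satisfy the strict-monotonicity inequality at every pair of distinct points.

```python
# g(x) = (2 x1 + x2, x1 + 3 x2) has constant Jacobian [[2, 1], [1, 3]],
# which is symmetric positive definite (minors 2 and 5), so by Theorem 2.3
# g is strictly monotone.  We spot-check the inequality on a few pairs.

def g(x):
    return (2 * x[0] + x[1], x[0] + 3 * x[1])

def monotone_product(x, xp):
    """(g(x') - g(x))^T (x' - x), which should be > 0 for x != x'."""
    gx, gxp = g(x), g(xp)
    return (gxp[0] - gx[0]) * (xp[0] - x[0]) + (gxp[1] - gx[1]) * (xp[1] - x[1])

pairs = [((0.0, 0.0), (1.0, 0.0)),
         ((1.0, -2.0), (-3.0, 4.0)),
         ((0.5, 0.5), (0.5, 1.5))]
products = [monotone_product(x, xp) for x, xp in pairs]
```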
87 Proof, continued

We have:
$$\phi(1) - \phi(0) = (x' - x)^\top (g(x') - g(x)) = (g(x') - g(x))^\top (x' - x),$$
and so we must prove that $\phi(1) - \phi(0) \geq 0$. Notice that:
$$\frac{d\phi}{dt}(t) = (x' - x)^\top J(x + t[x' - x])(x' - x) \geq 0, \text{ for } 0 \leq t \leq 1, \quad (2.25)$$
where the equality is by the chain rule and the inequality holds since $J(x + t[x' - x])$ is positive semi-definite. We have:
$$\phi(1) = \phi(0) + \int_{t=0}^{1} \frac{d\phi}{dt}(t)\, dt \geq \phi(0),$$
by the fundamental theorem of calculus applied to $\phi$, and since the integrand is non-negative everywhere by (2.25). This is the result we were trying to prove. A similar analysis applies for $J$ positive definite, noting that the integrand is then strictly positive and continuous.
88 2.6 Solutions of optimization problems

2.6.1 Local and global minima

Definitions

Recall Problem (2.8) and its minimum $f^\star$:
$$f^\star = \min_{x \in S} f(x).$$
Sometimes, we call $f^\star$ in Problem (2.8) the global minimum of the problem to emphasize that there is no $x \in S$ that has a smaller value of $f(x)$.

Definition 2.13 Let $\|\cdot\|$ be a norm on $\mathbb{R}^n$, $S \subseteq \mathbb{R}^n$, $x^\star \in S$, and $f: S \to \mathbb{R}$. We say that $x^\star$ is a local minimizer of the problem $\min_{x \in S} f(x)$ if:
$$\exists \varepsilon > 0 \text{ such that } \forall x \in S, \ (\|x - x^\star\| < \varepsilon) \Rightarrow (f(x^\star) \leq f(x)). \quad (2.26)$$
The value $f^\star = f(x^\star)$ is called a local minimum of the problem.
89 Local minimizer and minimum

A local minimum may or may not be a global minimum, but if a problem possesses a minimum then there is exactly one global minimum, by definition. The global minimum is also a local minimum. Formally, $\hat{x}$ is not a local minimizer if:
$$\forall \varepsilon > 0, \ \exists x_\varepsilon \in S \text{ such that } (\|\hat{x} - x_\varepsilon\| < \varepsilon) \text{ and } (f(\hat{x}) > f(x_\varepsilon)). \quad (2.27)$$
90 Examples: Multiple local minimizers over a convex set

$f: \mathbb{R} \to \mathbb{R}$ has two local minimizers, at $x^{\star\star} = -3$ and $x^\star = 3$, over $S$.

Fig. 2.24. Local minima, $f^{\star\star}$ and $f^\star$, with corresponding local minimizers $x^{\star\star}$ and $x^\star$, over a set $S$. The point $x^\star$ is the global minimizer and $f^\star$ the global minimum over $S$; the point $\hat{x}$ is not a local minimizer.
91 Illustration of definition of not a local minimizer

For $\varepsilon = 1$ there is a point, namely $\hat{x} + \frac{\varepsilon}{2} = 2$, that is within a distance $\varepsilon$ of $\hat{x} = 1.5$ and which has a lower value of the objective than the point $\hat{x} = 1.5$.

Fig. A point $\hat{x} = 1.5$ that is not a local minimizer and another point, $\hat{x} + \frac{\varepsilon}{2} = 2$, that is within a distance $\varepsilon = 1$ of $\hat{x}$ and has a lower objective value.
92 Illustration of definition of not a local minimizer, continued

For $\varepsilon = 0.5$ there is a point, namely $\hat{x} + \frac{\varepsilon}{2} = 1.75$, that is within a distance $\varepsilon$ of $\hat{x} = 1.5$ and which has a lower value of the objective than the point $\hat{x} = 1.5$.

Fig. A point $\hat{x} = 1.5$ that is not a local minimizer and another point, $\hat{x} + \frac{\varepsilon}{2} = 1.75$, that is within a distance $\varepsilon = 0.5$ of $\hat{x}$ and has a lower objective value.
93 Multiple local minimizers over a non-convex set

Over the non-convex set $P = \{x \in \mathbb{R} \mid -4 \leq x \leq -1 \text{ or } 2 \leq x \leq 4\}$ there are three local minimizers: $x^{\star\star} = -3$, $x^\star = 3$, and $x^{\star\star\star} = -1$.

Fig. 2.27. Local and global minima and minimizers of a problem over the set $P = \{x \in \mathbb{R} \mid -4 \leq x \leq -1 \text{ or } 2 \leq x \leq 4\}$.
94 Multiple local minimizers over a non-convex set in higher dimension

The two local minimizers, $x^{\star\star}$ and $x^\star$, are indicated by bullets in the contour plot.

Fig. Contour sets of the function defined in (2.10) with feasible set shaded. The two local minimizers are indicated by bullets. The heights of the contours decrease towards the point $\begin{bmatrix} 1 \\ 3 \end{bmatrix}$.
95 Discussion

Iterative algorithms involve generating a sequence of iterates that provide successively better values of the objective, or closer satisfaction of the constraints, or both. With an iterative improvement algorithm, we can usually only guarantee, at best, that we are moving towards a local minimum and minimizer.
96 2.6.2 Strict and non-strict minimizers

Definitions

There can be more than one minimizer even if the minimum is global.

Fig. A function with multiple global minimizers. The set of minimizers is indicated by a thick line.
97 Definition 2.14 We say that $x^\star \in S$ is a strict global minimizer of the problem $\min_{x \in S} f(x)$ if:
$$\forall x \in S, \ (x \neq x^\star) \Rightarrow (f(x^\star) < f(x)).$$
The value $f^\star = f(x^\star)$ is called a strict global minimum of the problem.

Definition 2.15 We say that $x^\star \in S$ is a strict local minimizer of the problem $\min_{x \in S} f(x)$ if:
$$\exists \varepsilon > 0 \text{ such that } \forall x \in S, \ (0 < \|x - x^\star\| < \varepsilon) \Rightarrow (f(x^\star) < f(x)).$$
The value $f^\star = f(x^\star)$ is called a strict local minimum of the problem.

Examples

The two local minimizers, $x^{\star\star} = -3$ and $x^\star = 3$, in Figure 2.24 are strict local minimizers. All three local minimizers, $x^{\star\star} = -3$, $x^\star = 3$, and $x^{\star\star\star} = -1$, in Figure 2.27 are strict local minimizers.
98 2.6.3 Convex functions

Definitions

Definition 2.16 Let $S \subseteq \mathbb{R}^n$ be a convex set and let $f: S \to \mathbb{R}$. Then $f$ is a convex function on $S$ if:
$$\forall x, x' \in S, \ \forall t \in [0,1], \ f([1-t]x + tx') \leq [1-t]f(x) + tf(x'). \quad (2.28)$$
If $f: \mathbb{R}^n \to \mathbb{R}$ is convex on $\mathbb{R}^n$ then we say that $f$ is convex. A function $h: S \to \mathbb{R}^r$ is convex on $S$ if each of its components $h_\ell$ is convex on $S$. If $h: \mathbb{R}^n \to \mathbb{R}^r$ is convex on $\mathbb{R}^n$ then we say that $h$ is convex. The set $S$ is called the test set.

Furthermore, $f$ is a strictly convex function on $S$ if:
$$\forall x, x' \in S, \ (x \neq x') \Rightarrow \left(\forall t \in (0,1), \ f([1-t]x + tx') < [1-t]f(x) + tf(x')\right).$$
If $f: \mathbb{R}^n \to \mathbb{R}$ is strictly convex on $\mathbb{R}^n$ then we say that $f$ is strictly convex. A function $h: S \to \mathbb{R}^r$ is strictly convex on $S$ if each of its components $h_\ell$ is strictly convex on $S$. If $h: \mathbb{R}^n \to \mathbb{R}^r$ is strictly convex on $\mathbb{R}^n$ then we say that $h$ is strictly convex.
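The interpolation inequality (2.28) is easy to probe numerically (a sketch, with example functions chosen here): for a convex function the "gap" between the chord and the function is never negative, while for a concave function it can be.

```python
# Sample the convexity inequality (2.28): gap(t) >= 0 for all t in [0, 1]
# when f is convex.  f(x) = x^2 is convex; f(x) = -x^2 is not.

def interp_gap(f, x, xp, t):
    """(1-t) f(x) + t f(x') minus f((1-t) x + t x'); >= 0 for convex f."""
    return (1 - t) * f(x) + t * f(xp) - f((1 - t) * x + t * xp)

convex_gaps = [interp_gap(lambda x: x * x, -1.0, 2.0, t / 10)
               for t in range(11)]
concave_gap = interp_gap(lambda x: -x * x, -1.0, 2.0, 0.5)  # negative
```

Sampling like this can only refute convexity, never prove it, but it is a useful sanity check on a candidate formulation.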
99 Discussion

The condition in (2.28) means that linear interpolation of a convex $f$ between points on its curve never lies below the function values.

Fig. Linear interpolation of a convex function between points never under-estimates the function. (For clarity, the line interpolating $f$ between $x = 0$ and $x = 1$ is drawn slightly above the solid curve: it should be coincident with the solid curve.)
100 Definition 2.17 Let $S \subseteq \mathbb{R}^n$ be a convex set and let $f: S \to \mathbb{R}$. We say that $f$ is a concave function on $S$ if $(-f)$ is a convex function on $S$.

Examples

A linear or affine function is both convex and concave on any convex set. The function $f: \mathbb{R} \to \mathbb{R}$ shown in Figure 2.24 is not convex on the convex set $S = \{x \in \mathbb{R} \mid -4 \leq x \leq 4\}$.

Qualitatively, convex functions are bowl-shaped and have level sets that are convex sets, as specified in:

Definition 2.18 Let $S \subseteq \mathbb{R}^n$ and $f: S \to \mathbb{R}$. Then the function $f$ has convex level sets on $S$ if for all $\tilde{f} \in \mathbb{R}$ we have that $L_f(\tilde{f})$ is convex. If $f: \mathbb{R}^n \to \mathbb{R}$ has convex level sets on $\mathbb{R}^n$ then we say that $f$ has convex level sets.

Note that a function with convex level sets need not itself be a convex function.
101 Convexity of level sets of convex function

$$\forall x \in \mathbb{R}^2, \ f(x) = (x_1 - 1)^2 + (x_2 - 3)^2 - 1.8(x_1 - 1)(x_2 - 3). \quad (2.29)$$

Fig. Contour sets $C_f(\tilde{f})$ of the function defined in (2.29). The heights of the contours decrease towards the point $\begin{bmatrix} 1 \\ 3 \end{bmatrix}$.
102 Relationship to optimization problems

Theorem 2.4 Let $S \subseteq \mathbb{R}^n$ be a convex set and $f: S \to \mathbb{R}$. Then:
(i) If $f$ is convex on $S$ then it has at most one local minimum over $S$.
(ii) If $f$ is convex on $S$ and has a local minimum over $S$ then the local minimum is the global minimum.
(iii) If $f$ is strictly convex on $S$ then it has at most one minimizer over $S$.

Proof We prove all three items by contradiction.
(i) For the sake of a contradiction, suppose that $f$ is convex, yet that it has two local minima over $S$; that is, there are two distinct values $f^\star \in \mathbb{R}$ and $f^{\star\star} \in \mathbb{R}$, say, with $f^\star \neq f^{\star\star}$, that each satisfy Definition 2.13. For concreteness, suppose that $f^{\star\star} > f^\star$, and let $x^\star \in S$ and $x^{\star\star} \in S$ be any two local minimizers associated with $f^\star$ and $f^{\star\star}$, respectively. The situation is illustrated in Figure 2.32.
103 Proof of (i), continued

The solid line shows $f(x)$ as a function of $x$ while the dashed line shows the linear interpolation of $f$ between $x^\star$ and $x^{\star\star}$.

Fig. 2.32. Multiple minima and minimizers in proof of Theorem 2.4, Item (i).
104 Proof of (i), continued

We are going to show that $x^{\star\star}$ satisfies the condition (2.27) for $x^{\star\star}$ not to be a local minimizer, which we repeat here for reference:
$$\forall \varepsilon > 0, \ \exists x_\varepsilon \in S \text{ such that } (\|x^{\star\star} - x_\varepsilon\| < \varepsilon) \text{ and } (f(x^{\star\star}) > f(x_\varepsilon)).$$
We have:
$$\forall t \in [0,1], \ f(x^{\star\star} + t[x^\star - x^{\star\star}]) \leq f(x^{\star\star}) + t[f(x^\star) - f(x^{\star\star})], \text{ by convexity of } f,$$
$$= f^{\star\star} + t[f^\star - f^{\star\star}], \text{ by definition of } f^\star \text{ and } f^{\star\star},$$
$$< f^{\star\star}, \text{ for } 0 < t \leq 1, \text{ since } f^{\star\star} > f^\star,$$
$$= f(x^{\star\star}). \quad (2.30)$$
105 Proof of (i), continued

For $0 \leq t \leq 1$, we have $x^{\star\star} + t(x^\star - x^{\star\star}) \in S$ since $S$ is convex. But this means that there are feasible points arbitrarily close to $x^{\star\star}$ that have a lower objective value. In particular, given any norm and any number $\varepsilon > 0$, we can define $x_\varepsilon = x^{\star\star} + t(x^\star - x^{\star\star})$ where $t$ is specified by:
$$t = \min\left\{1, \ \frac{\varepsilon}{2\|x^\star - x^{\star\star}\|}\right\}.$$
106 Proof of (i), continued

Note that $x_\varepsilon \in S$ since $0 \leq t \leq 1$, and that $x_\varepsilon$ satisfies:
$$\|x^{\star\star} - x_\varepsilon\| = \|x^{\star\star} - [x^{\star\star} + t(x^\star - x^{\star\star})]\|, \text{ by definition of } x_\varepsilon,$$
$$= \|{-t}(x^\star - x^{\star\star})\| = t\|x^\star - x^{\star\star}\|, \text{ by a property of norms,}$$
$$\leq \frac{\varepsilon}{2\|x^\star - x^{\star\star}\|}\|x^\star - x^{\star\star}\|, \text{ by definition of } t,$$
$$= \frac{1}{2}\varepsilon < \varepsilon.$$
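The construction of $x_\varepsilon$ in the proof can be spot-checked numerically. This sketch works in one dimension with the absolute value as the norm, and uses the arbitrarily chosen values $x^\star = 3$, $x^{\star\star} = -3$, $\varepsilon = 0.5$:

```python
# The proof's candidate point: x_eps = x** + t (x* - x**) with
# t = min(1, eps / (2 |x* - x**|)).  Its distance from x** is then
# at most eps/2, hence strictly less than eps.

def x_eps(x_star, x_star_star, eps):
    t = min(1.0, eps / (2.0 * abs(x_star - x_star_star)))
    return x_star_star + t * (x_star - x_star_star)

xe = x_eps(3.0, -3.0, 0.5)
dist = abs(-3.0 - xe)  # distance of x_eps from x**
```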
107 Proof of (i), continued

Furthermore, $0 < t \leq 1$ by construction, so by (2.30):
$$f(x^{\star\star}) > f(x_\varepsilon).$$
That is, $x^{\star\star}$ satisfies (2.27) and is therefore not a local minimizer of $f$, which is a contradiction. As suggested by the hump in $f$ between $x^\star$ and $x^{\star\star}$, the situation illustrated in Figure 2.32 is inconsistent with the assumption that $f$ is convex. We conclude that $f$ has at most one local minimum.

(ii) Suppose that the local minimum is $f^\star \in \mathbb{R}$ with corresponding local minimizer $x^\star \in S$. Suppose that it is not the global minimum and minimizer. That is, there exists $x^{\star\star} \in S$ such that $f^{\star\star} = f(x^{\star\star}) < f(x^\star)$. Then the same argument as in Item (i) shows that $f^\star$ is not a local minimum, which is a contradiction.
108 (iii) Suppose that $f$ is strictly convex, yet that it has two local minimizers, $x^\star \neq x^{\star\star}$, say. Since $f$ is convex, then by Item (i) both minimizers correspond to the unique minimum, say $f^\star$, of $f$ over $S$. We have:
$$\forall t \in (0,1), \ f(x^{\star\star} + t[x^\star - x^{\star\star}]) < f(x^{\star\star}) + t[f(x^\star) - f(x^{\star\star})], \text{ by strict convexity of } f,$$
$$= f^\star + t[f^\star - f^\star], \text{ by definition of } f^\star,$$
$$= f^\star,$$
which means that neither $x^\star$ nor $x^{\star\star}$ was a local minimizer of $f$, since feasible points of the form $x^{\star\star} + t(x^\star - x^{\star\star})$ have a lower objective value for all $t \in (0,1)$. That is, by a similar argument to that in Item (i), we can construct a feasible $x_\varepsilon$ that is within a distance $\varepsilon$ of $x^{\star\star}$ and has a smaller value of the objective than $x^{\star\star}$.

Definition 2.19 If $S \subseteq \mathbb{R}^n$ is a convex set and $f: \mathbb{R}^n \to \mathbb{R}$ is convex on $S$, then $\min_{x \in S} f(x)$ is called a convex problem.
109 Discussion

Local versus global minimizers

Theorem 2.4 shows that a convex problem has at most one local minimum. If we find a local minimum for a convex problem, it is in fact the global minimum.

Choice of step directions

Convexity enables us to relate the two goals of:
(i) moving from the current iterate in a direction that decreases the objective while still maintaining feasibility, and
(ii) moving from the current iterate towards the minimizer of the problem.
If we have a convex problem, then these goals are not inconsistent.
110 Convex problems

If:
- the function $f: \mathbb{R}^n \to \mathbb{R}$ is convex,
- the function $g: \mathbb{R}^n \to \mathbb{R}^m$ is affine, with $\forall x \in \mathbb{R}^n, g(x) = Ax - b$, and
- the function $h: \mathbb{R}^n \to \mathbb{R}^r$ is convex,
then:
$$\min_{x \in \mathbb{R}^n} \{f(x) \mid g(x) = 0, \ h(x) \leq 0\}$$
is a convex problem. Some problems involving non-convex functions also specify convex problems.
111 Weakening the convexity assumption

For example, a function with convex level sets has only one local minimum.

Fig. A non-convex function with convex level sets.
112 Maximizing a convex function

Definition 2.20 Let $S \subseteq \mathbb{R}^n$ and $x \in S$. We say that $x$ is an extreme point of $S$ if:
$$\forall x', x'' \in S, \ \left((x' \neq x) \text{ and } (x'' \neq x)\right) \Rightarrow \left(x \neq \tfrac{1}{2}(x' + x'')\right).$$
That is, $x$ is an extreme point of $S$ if it cannot be expressed as the average of two other points in $S$.

In Figure 2.22, there are three polygons: the extreme points of each polygon are its vertices. The extreme points of the dodecahedron in Figure 2.14 are its vertices. In Figure 2.22, the extreme points of the filled ellipse are the points on the ellipse.
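Definition 2.20 can be probed on a finite sample of a set (a sketch, with a grid on $S = [0,1]$ chosen here for illustration): the endpoints of the interval are extreme points, while every interior point is the average of two other points of the set.

```python
# Check, on a grid sampled from S = [0, 1], whether x can be written as the
# average of two other sample points.  Extreme points of the interval are its
# endpoints 0 and 1.  (Sampling can only refute extremeness exactly; here the
# grid is regular enough that the check also confirms the endpoints.)

def is_extreme_on_sample(x, sample, tol=1e-12):
    return all(abs(x - 0.5 * (u + v)) >= tol
               for u in sample if u != x
               for v in sample if v != x)

sample = [k / 10 for k in range(11)]  # 0.0, 0.1, ..., 1.0
```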
More informationa 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given
More informationFIRST YEAR CALCULUS. Chapter 7 CONTINUITY. It is a parabola, and we can draw this parabola without lifting our pencil from the paper.
FIRST YEAR CALCULUS WWLCHENW L c WWWL W L Chen, 1982, 2008. 2006. This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990. It It is is
More informationBANACH AND HILBERT SPACE REVIEW
BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but
More informationLinear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
More informationDRAFT. Algebra 1 EOC Item Specifications
DRAFT Algebra 1 EOC Item Specifications The draft Florida Standards Assessment (FSA) Test Item Specifications (Specifications) are based upon the Florida Standards and the Florida Course Descriptions as
More informationLecture 7: Finding Lyapunov Functions 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1
More informationSpecial Situations in the Simplex Algorithm
Special Situations in the Simplex Algorithm Degeneracy Consider the linear program: Maximize 2x 1 +x 2 Subject to: 4x 1 +3x 2 12 (1) 4x 1 +x 2 8 (2) 4x 1 +2x 2 8 (3) x 1, x 2 0. We will first apply the
More informationLinear Programming. Widget Factory Example. Linear Programming: Standard Form. Widget Factory Example: Continued.
Linear Programming Widget Factory Example Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory:, Vertices, Existence of Solutions. Equivalent formulations.
More informationLecture 3: Linear methods for classification
Lecture 3: Linear methods for classification Rafael A. Irizarry and Hector Corrada Bravo February, 2010 Today we describe four specific algorithms useful for classification problems: linear regression,
More information10. Proximal point method
L. Vandenberghe EE236C Spring 2013-14) 10. Proximal point method proximal point method augmented Lagrangian method Moreau-Yosida smoothing 10-1 Proximal point method a conceptual algorithm for minimizing
More informationCurrent Standard: Mathematical Concepts and Applications Shape, Space, and Measurement- Primary
Shape, Space, and Measurement- Primary A student shall apply concepts of shape, space, and measurement to solve problems involving two- and three-dimensional shapes by demonstrating an understanding of:
More informationSeveral Views of Support Vector Machines
Several Views of Support Vector Machines Ryan M. Rifkin Honda Research Institute USA, Inc. Human Intention Understanding Group 2007 Tikhonov Regularization We are considering algorithms of the form min
More informationChapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling
Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one
More informationTHE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS
THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear
More informationLinear Codes. Chapter 3. 3.1 Basics
Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length
More informationMATH 132: CALCULUS II SYLLABUS
MATH 32: CALCULUS II SYLLABUS Prerequisites: Successful completion of Math 3 (or its equivalent elsewhere). Math 27 is normally not a sufficient prerequisite for Math 32. Required Text: Calculus: Early
More information1 Norms and Vector Spaces
008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)
More informationCHAPTER 1 Splines and B-splines an Introduction
CHAPTER 1 Splines and B-splines an Introduction In this first chapter, we consider the following fundamental problem: Given a set of points in the plane, determine a smooth curve that approximates the
More informationLS.6 Solution Matrices
LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions
More informationApplication. Outline. 3-1 Polynomial Functions 3-2 Finding Rational Zeros of. Polynomial. 3-3 Approximating Real Zeros of.
Polynomial and Rational Functions Outline 3-1 Polynomial Functions 3-2 Finding Rational Zeros of Polynomials 3-3 Approximating Real Zeros of Polynomials 3-4 Rational Functions Chapter 3 Group Activity:
More information1 VECTOR SPACES AND SUBSPACES
1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such
More informationMathematical Methods of Engineering Analysis
Mathematical Methods of Engineering Analysis Erhan Çinlar Robert J. Vanderbei February 2, 2000 Contents Sets and Functions 1 1 Sets................................... 1 Subsets.............................
More informationSMT 2014 Algebra Test Solutions February 15, 2014
1. Alice and Bob are painting a house. If Alice and Bob do not take any breaks, they will finish painting the house in 20 hours. If, however, Bob stops painting once the house is half-finished, then the
More informationSolutions Of Some Non-Linear Programming Problems BIJAN KUMAR PATEL. Master of Science in Mathematics. Prof. ANIL KUMAR
Solutions Of Some Non-Linear Programming Problems A PROJECT REPORT submitted by BIJAN KUMAR PATEL for the partial fulfilment for the award of the degree of Master of Science in Mathematics under the supervision
More informationComputational Geometry Lab: FEM BASIS FUNCTIONS FOR A TETRAHEDRON
Computational Geometry Lab: FEM BASIS FUNCTIONS FOR A TETRAHEDRON John Burkardt Information Technology Department Virginia Tech http://people.sc.fsu.edu/ jburkardt/presentations/cg lab fem basis tetrahedron.pdf
More informationApproximation Algorithms
Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms
More informationMulti-variable Calculus and Optimization
Multi-variable Calculus and Optimization Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Multi-variable Calculus and Optimization 1 / 51 EC2040 Topic 3 - Multi-variable Calculus
More information7 Gaussian Elimination and LU Factorization
7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method
More informationConvex Programming Tools for Disjunctive Programs
Convex Programming Tools for Disjunctive Programs João Soares, Departamento de Matemática, Universidade de Coimbra, Portugal Abstract A Disjunctive Program (DP) is a mathematical program whose feasible
More informationAlgebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.
Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.
More informationMathematics Georgia Performance Standards
Mathematics Georgia Performance Standards K-12 Mathematics Introduction The Georgia Mathematics Curriculum focuses on actively engaging the students in the development of mathematical understanding by
More informationClass Meeting # 1: Introduction to PDEs
MATH 18.152 COURSE NOTES - CLASS MEETING # 1 18.152 Introduction to PDEs, Fall 2011 Professor: Jared Speck Class Meeting # 1: Introduction to PDEs 1. What is a PDE? We will be studying functions u = u(x
More informationNetwork Traffic Modelling
University of York Dissertation submitted for the MSc in Mathematics with Modern Applications, Department of Mathematics, University of York, UK. August 009 Network Traffic Modelling Author: David Slade
More informationPrentice Hall Algebra 2 2011 Correlated to: Colorado P-12 Academic Standards for High School Mathematics, Adopted 12/2009
Content Area: Mathematics Grade Level Expectations: High School Standard: Number Sense, Properties, and Operations Understand the structure and properties of our number system. At their most basic level
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
More informationAlgebra 1 2008. Academic Content Standards Grade Eight and Grade Nine Ohio. Grade Eight. Number, Number Sense and Operations Standard
Academic Content Standards Grade Eight and Grade Nine Ohio Algebra 1 2008 Grade Eight STANDARDS Number, Number Sense and Operations Standard Number and Number Systems 1. Use scientific notation to express
More informationSECOND DERIVATIVE TEST FOR CONSTRAINED EXTREMA
SECOND DERIVATIVE TEST FOR CONSTRAINED EXTREMA This handout presents the second derivative test for a local extrema of a Lagrange multiplier problem. The Section 1 presents a geometric motivation for the
More informationAnswer Key for California State Standards: Algebra I
Algebra I: Symbolic reasoning and calculations with symbols are central in algebra. Through the study of algebra, a student develops an understanding of the symbolic language of mathematics and the sciences.
More information