Global Optimization Methods for Solving Multicriteria Optimal Control Problems


AGH UNIVERSITY OF SCIENCE AND TECHNOLOGY
KRAKÓW, POLAND
Faculty of Electrical Engineering, Automatics, Computer Science and Electronics

Ph.D. Thesis

Omer S.M. Jomah

Global Optimization Methods for Solving Multicriteria Optimal Control Problems

Supervisor: prof. dr hab. Andrzej M.J. Skulimowski

Kraków, 2012

ACKNOWLEDGEMENT

Although my name appears on the cover of this dissertation, many great people have contributed to its completion and production. I owe my sincere and deep gratitude to everyone who has made this dissertation possible and because of whom my study experience has been something that I will cherish forever. My deepest gratitude goes, first of all, to my supervisor, Prof. Dr. hab. Andrzej Skulimowski. I have been extremely lucky to have him as a supervisor; he gave me the freedom to explore on my own and, at the same time, the guidance to recover when my steps faltered. Professor Skulimowski has taught me how to question thoughts and ideas and to express them easily and spontaneously. His incredible patience and support helped me overcome many obstacles and crises until I finished the dissertation. I hope that one day I will become a supervisor to students such as Prof. Skulimowski has always been to me and to other students. I have benefited greatly from his thoughtful criticism, advice, mentoring and supervision. I would also like to express my gratitude to the committee members and reviewers for their valuable time and effort in guiding and commenting on the thesis. My work on this dissertation has involved so many people that it is impossible to thank them all adequately. Foremost among those who helped me is Mr. ZAWADA, who introduced me to Matlab programming, patiently corrected my writing and essentially supported my research; without his help, advice and encouragement, this dissertation could never have been produced. I would also like to thank my parents, who were always supporting and encouraging me with their best wishes. My wife, who put up with and endured my continuous absences, both physical and psychological, for a number of years while I was studying in Poland, deserves thanks and appreciation beyond measure. She and my children spent three very long years during which I was busy and totally wrapped up in the sheets of my dissertation; she, with the little kids, proved willing to endure and showed great forbearance, tolerance and, above all, willingness to reap the fruits of my success. All of them, but particularly my wife, have contributed to this dissertation in ways some of which they know, and most of which they cannot know.

DEDICATION

To My Parents with Love and Gratitude
To My Wife
To My Children Khaula, Hamza and Saleh

TABLE OF CONTENTS

ACKNOWLEDGEMENT
DEDICATION
1. Introduction
2. The Mathematical Background
   Vector, Normed and Metric Spaces
   Topological Spaces
   The Hausdorff Distance
   Compactness of a Set
   Continuity of a Function
   Convex Hull of a Set
   Separation Theorems
3. An Introduction to Optimization
   Introduction
   The Nelder-Mead Method
   Algorithm Modification: Multiple Reflections
   Convergence Criteria
   Optimization of the Coefficients for the Set of Functions
   Tests Applied
   Searching for the Minimum
   Coefficient Optimization
   The Library of the Test Functions
   Database Creation
   Database Schema
   3.10. Application
4. Approaches to Solve Global Optimization Problems Occurring in Control
   Local Search Methods
   Random Local Search
   Conjugate Gradient
   Stochastic Approximation
   Global Optimization Methods with Guaranteed Accuracy
   Indirect Methods
   Direct Methods
   Introduction to Evolutionary Algorithms
   Evolutionary Approach
   The Main Ideas of Evolutionary Computation
   Genetic Algorithms
   Selection
   Crossing
   Mutation
   Non-dominated Sorting Genetic Algorithm (NSGA II)
   Short Description of the Algorithm
   Genetic Operators
   Differential Evolution Algorithm (DE)
5. Multicriteria Optimization and Optimal Control
   The Formulation of the Multicriteria Optimization Problem
   An Overview of the General Methodology of Multicriteria Optimization
   Solutions to the Global Multicriteria Optimization Problems
   A Selection of Approaches to Solve Multicriteria Problems
   Scalarization Methods and Algorithms
   5.3. Preference Modeling and its Applications to Selecting Compromise Solutions
   The Basics of Utility and Value Theory
   Multicriteria Optimal Control
   The Formulation of Multicriteria Optimal Control Problem for Difference and Ordinary Differential Equations
   The Aggregation of Time Preferences
   An Approach to Approximate the Pareto Set in Optimal Control Problems
   Multicriteria Trajectory Optimization
   Basic Approaches to Solving Linear Multicriteria Optimal Control Problems
6. A Generalization of the Nelder-Mead Algorithm for Discrete and Discretized Optimal Control Problems
   Introduction
   An Introduction to Topological-Algebraic Ideas to Formalize Optimization Process
   An Algebraic Structure in the Family of Simplices
   The Nelder-Mead-type Algorithms for Solving Global Constrained and Combinatorial Optimization Problems
   Delaunay Triangulation [170]
   Hybrid Nelder-Mead Algorithm for Switching Point Optimization
   Multicriteria Population-based Extensions of Nelder-Mead Algorithm
7. Optimization of the Water Supply System in Libya
   MMR (Man-Made River) Libyan Water Supply System Background
   Water Supply System's Optimal Control in a Discrete Time Space
8. Final Discussion and Conclusions
INDEX OF FIGURES
INDEX OF TABLES
BIBLIOGRAPHY
APPENDIX A: Matlab codes

1. Introduction

The main goal of this dissertation is to prove the following three theses:
1. The common Nelder-Mead algorithm for nondifferentiable optimization can be generalized, using simplicial complex theory, to a cooperative system of global optimization processes that imitates an evolutionary procedure, but has strict mathematical properties and yields the global minimum.
2. The idea of cooperative optimization processes derived from the Nelder-Mead algorithm can be further generalized, yielding a procedure for solving global multicriteria optimization problems that finds all local Pareto minima.
3. Multicriteria optimal control problems governed by several specified classes of ordinary differential equations (linear stationary, with separable variables, and nonlinear locally linearizable) can be solved using a sequential extension of the global multicriteria Nelder-Mead algorithm, applied to the original optimal control problem after a discretization of the control switching times.

The above goal has been achieved by presenting the underlying mathematical and computational background in Chapters 2 to 5, then, in Chapter 6, by elaborating new discrete and hybrid discrete-continuous optimization methods based on the Nelder-Mead scheme, and by applying them to a multicriteria optimal control problem related to water supply in Chapter 7. Specifically, the dissertation is structured as follows. In Chapter 2 we provide a review of the main concepts and facts concerning the mathematical background for the results presented in the subsequent chapters of this dissertation. We give an overview of the definitions and methods of functional analysis that will be used throughout the thesis, including basic topology, ordered spaces and separation theorems. In Chapter 3 we introduce the classical Nelder-Mead method and find its optimal parameters using a set of test functions.
This method, created by Nelder and Mead [91], is a simple and elegant way to determine the minimum of a function of many real variables. The algorithm's coefficients (reflection, expansion, contraction, shrinking) have been optimized for an arbitrary number of functions. With the parameter modifications proposed in Chapter 3 the algorithm becomes even faster and more accurate. In this formulation several further improvements simplifying the process of finding the minimum have been applied. A simultaneous determination of the two optima of two functions appears to be an interesting application: when operating with two criteria (two functions), the algorithm determines optimal points for both criteria. The results of the numerical tests are also shown. It is also worth considering the stop conditions that can be used in the Nelder-Mead method. Most of them compare the values of the optimized criterion achieved at the points of the current simplex iteration and take into account the size of this simplex. Any inconsistency in applying a stopping condition of this kind may considerably affect the accuracy of the end results. Summing up, the modified Nelder-Mead algorithms presented in Chapter 3 should be taken into consideration when solving multicriteria optimization problems; the results of the tests prove that this method is efficient.

In Chapter 4 we first overview the methods of global optimization that can be used in solving global optimal control problems, which is the main goal of this dissertation. Following the results of Chapter 3, we pay special attention to the direct methods, specifically to the Nelder-Mead algorithm, which was selected as most suitable for solving discretized control problems. Then we propose a method to optimize the parameters of the Nelder-Mead algorithm by minimizing the number of steps necessary to reach a close-to-optimum point of a linear combination of a family of test functions. The parameters are optimized in IR⁴, by the Nelder-Mead method itself.

In Chapter 5 we will present the differential evolution method, which shows some relationships to the Nelder-Mead method and may serve as a benchmark for the new algorithms based on the Nelder-Mead one introduced in this dissertation. Differential evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality [137].
DE is used for multidimensional real-valued functions but, like the Nelder-Mead method, it does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is the case with classic optimization methods such as gradient descent and quasi-Newton methods. DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc. Being a typical representative of evolutionary algorithms, DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to a simple formula; it then keeps whichever candidate solution has the best score, or fitness, on the optimization problem at hand. In this way the optimization problem is treated as a black box that merely provides a measure of quality given a candidate solution, and the gradient is therefore not needed. Such methods are commonly known as metaheuristics, as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee that an optimal, or even a near-optimal, solution is ever found.

In Chapter 6 the well-known Nelder-Mead algorithm for continuous non-differentiable optimization is modified in such a way that it can be used to solve discretized optimal control problems. Let us recall that it uses N + 1 starting points in an N-dimensional decision space, which need only be affinely independent, hence the other name of the algorithm: the downhill simplex method. The method, originally proposed by Nelder and Mead in 1965 [91], has become very popular and has appeared in many variants since then, despite its convergence deficiencies. In the present thesis we study the combinatorial properties of a generalized optimization algorithm based on the Nelder-Mead ideas. Namely, following [126], we study a Nelder-Mead-type procedure starting from a simplicial complex S rather than a simplex, to allow for parallel processing of mutually communicating search processes. This procedure can also be applied as a combinatorial search on the discrete set of vertices of all simplices from S. We propose an algebraic structure that describes the action of this algorithm for a given function f to be minimized and prove some properties of the proposed class of algorithms.

In Chapter 7 the algorithms elaborated in Chapter 6 are applied to solving multicriteria optimal control problems related to drinking water supply in Libya. The final Chapter 8 contains conclusions.
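The DE scheme described above (mutate by a scaled difference of population members, cross over with the current candidate, keep the better of parent and trial) can be sketched as follows. This is an illustrative Python sketch, not the thesis's Matlab code; the control parameters F and CR are conventional defaults from the DE literature, not values taken from this text.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=200):
    """Minimize f over the box 'bounds' with the classic DE/rand/1/bin scheme."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct members different from the current one
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            # mutation: a scaled difference of two members added to a third
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # binomial crossover with the current member
            jrand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            # clip to the box and keep whichever of parent/trial scores better
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= scores[i]:
                pop[i], scores[i] = trial, ft
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

No gradient is evaluated anywhere: the objective is queried only as a black box, which matches the metaheuristic setting described above.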

2. The Mathematical Background

2.1. Vector, Normed and Metric Spaces

We start by presenting basic notions that will be used throughout this thesis.

Definition 2.1. A set V is called a vector space (over a field IK) if the following conditions are satisfied:
1) V is an abelian group (written additively), i.e.
a) x + y = y + x for all x, y ∈ V,
b) there exists a unique vector (denoted by 0) such that x + 0 = x for each x ∈ V,
c) to each x ∈ V there corresponds a unique vector −x such that (−x) + x = 0;
2) a scalar multiplication is defined: to every element x ∈ V and each α ∈ IK there corresponds an element of V, denoted by αx, such that
a) α(x + y) = αx + αy for all x, y ∈ V,
b) (α + β)x = αx + βx for all x ∈ V and α, β ∈ IK,
c) α(βx) = (αβ)x for all x ∈ V and α, β ∈ IK,
d) 1x = x for all x ∈ V.

The elements of a vector space are called vectors. In what follows, we consider vector spaces over IK = IR, i.e. the real number field.

Definition 2.2. A linear transformation of a vector space V into a vector space W is a mapping Λ: V → W such that Λ(αx + βy) = αΛx + βΛy for all x, y ∈ V and all scalars α and β. Λ is called a linear functional if W is the field of scalars.

Now we give some examples of linear functionals, which are considered in further sections.
1) Let IRⁿ be the n-dimensional real space whose elements are denoted by x = (x₁, …, xₙ). Given a = (a₁, …, aₙ) ∈ IRⁿ, the function F defined by

F(x) = Σᵢ₌₁ⁿ aᵢxᵢ     (2.1)

is a linear functional on IRⁿ.
2) Let

F(f) := ∫ₐᵇ f(x)p(x)dx,     (2.2)

where a and b are real numbers such that a < b, p is a continuous positive-valued function on the interval [a, b] and f is a real measurable function on [a, b]. Such F is a linear functional on the space of all integrable real-valued functions on [a, b], denoted by L([a, b]).

Definition 2.3. Let V be a vector space. A nonnegative real-valued function ‖·‖: V → IR is called a norm on V if the following conditions hold:
a) ‖f‖ = 0 if and only if f = 0,
b) ‖αf‖ = |α| ‖f‖ for all f ∈ V and all scalars α, where |α| denotes the absolute value of α,
c) ‖f + g‖ ≤ ‖f‖ + ‖g‖ for all f, g ∈ V.

A pair (V, ‖·‖) is called a normed space. Note that if all the above conditions are satisfied except a), then ‖·‖ is called a seminorm on V. The concept of a norm leads in a natural way to another important notion, which formally lets us measure the distance between two arbitrary vectors. Suppose that X is a nonempty set (not necessarily a vector space).

Definition 2.4. A nonnegative function ρ: X × X → IR is called a metric on X if it has the following properties:
a) ρ(x, y) = 0 if and only if x = y,
b) ρ(x, y) = ρ(y, x) for all x, y ∈ X,
c) ρ(x, y) ≤ ρ(x, z) + ρ(z, y) for all x, y, z ∈ X (the triangle inequality).

A pair (X, ρ) is called a metric space. Given a metric ρ on X, the number ρ(x, y) is often called the distance between x and y in the metric ρ. If (V, ‖·‖) is a normed space, then a metric on V is given by the formula ρ(x, y) = ‖x − y‖. Let us consider some useful examples of such spaces.

Examples
a) The most common norm on IRⁿ (the Euclidean norm) is defined by

‖x‖ = (Σᵢ₌₁ⁿ xᵢ²)^(1/2)

for each x = (x₁, …, xₙ). The metric induced by this norm is given by

ρ(x, y) = (Σᵢ₌₁ⁿ (xᵢ − yᵢ)²)^(1/2).

In the case n = 1, the above formula takes the simpler well-known form ρ(x, y) = |x − y|.
b) Let C[a, b] be the set of all continuous real functions defined on the interval [a, b] (the notion of continuity will be discussed later). A norm on C[a, b] can be established by ‖f‖ = max over t ∈ [a, b] of |f(t)|. Hence the distance between two functions f and g is measured as ρ(f, g) = max over t ∈ [a, b] of |g(t) − f(t)|.
c) For p ≥ 1, let 𝓛ᵖ[a, b] denote the space of all real-valued functions f on [a, b] such that |f|ᵖ is integrable (i.e. the integral of |f|ᵖ over [a, b] exists and is finite). A seminorm ‖·‖ on 𝓛ᵖ[a, b] is given by the formula

‖f‖ = (∫ₐᵇ |f(t)|ᵖ dt)^(1/p).

Note that two functions f and g may differ at some points and yet satisfy ‖f − g‖ = 0; in particular, a nonzero function may have a zero seminorm, so condition a) of the definition of a norm does not hold. To avoid this problem, we introduce an equivalence relation on 𝓛ᵖ[a, b] identifying all functions which are equal almost everywhere (i.e. everywhere except on a set of measure zero, which may be thought of as a thin set). We write [f] for the equivalence class of a function f ∈ 𝓛ᵖ[a, b] associated with this relation. It can be shown that for f, g ∈ 𝓛ᵖ[a, b], the equality [f] = [g] is equivalent to f = g almost everywhere. Finally, on the collection of all equivalence classes (denoted by Lᵖ[a, b]) we may define a norm ‖[f]‖ in the same way as the above seminorm. For simplicity, we will write f instead of [f] (keeping in mind that this notation signifies that f is determined only almost everywhere). Consequently, the distance between two elements (in what follows also called functions) of Lᵖ[a, b] can be defined as

ρ(f, g) = (∫ₐᵇ |g(t) − f(t)|ᵖ dt)^(1/p).

The spaces L¹[a, b] and L²[a, b] are particularly useful for our purposes. Note that the normed spaces in b) and c) are referred to as function spaces, i.e. spaces whose elements are functions, regarded as single points of the space. Note also that the functions belonging to the Lᵖ spaces described in c) are not necessarily continuous. We close this subsection with the following definitions.

Definition 2.5. For x ∈ X and r > 0, we denote by K(x, r) the set of all elements y ∈ X such that ρ(x, y) < r and call it the open ball with center x and radius r.

Definition 2.6. We say that a subset B of a metric space (X, ρ) is bounded if there exist x ∈ X and r > 0 such that B ⊆ K(x, r).

2.2. Topological Spaces

A major role in the subsequent applications is played by both functionals and continuous mappings. In order to define the notion of continuity, one first has to introduce a topology. A topological space is a more general concept than a normed space or a metric space.

Definition 2.7. A collection τ of subsets of a nonempty set X is called a topology on X if the following conditions hold:
a) Ø ∈ τ and X ∈ τ,
b) if {Uᵢ}, i ∈ I, is a family of sets from τ, then the union of all Uᵢ, i ∈ I, belongs to τ,
c) if U, V ∈ τ, then U ∩ V ∈ τ.

An ordered pair (X, τ) consisting of a set X and a topology τ on X is called a topological space. Note that, by induction, condition c) above can be equivalently written as
c′) if U₁, …, Uₙ ∈ τ, then U₁ ∩ U₂ ∩ … ∩ Uₙ ∈ τ.
A subset U of X is called an open set if U ∈ τ. We say that a set U is a neighbourhood of an element x ∈ X if x ∈ U and U is an open set. A subset F of X is called a closed set if its complement in X is an open set. Therefore, the sets Ø and X are closed, arbitrary intersections of closed sets are closed, and finite unions of closed sets are also closed. An example of a topological space, which we shall frequently encounter, is the extended real line [−∞, ∞]. Its topology is defined by declaring the following sets to be open: (a, b), [−∞, a), (a, ∞], and any union of segments of these types. When defining a topology on X, we do not have to be able to measure the distance between two arbitrary points of X. However, one of the most important and frequently used ways of introducing a topology is to define it by a metric (in case the latter is given on the considered set). More precisely, if (X, ρ) is a metric space, then the topology τ on X, which is said to be induced by the metric ρ, is defined as follows:

U ∈ τ ⟺ for every x ∈ U there exists rₓ > 0 such that K(x, rₓ) ⊆ U.

In other words, a set U is open in X if for every element x of U we can find an open ball with center x which is contained in U. It is worth noting that, in view of this definition of τ, each open ball in X is an open set. An example of such a topology, considered in this dissertation, is the so-called natural topology on IRⁿ induced by the Euclidean metric. Finally, let us recall three fundamental notions that frequently prove useful in formulating the mathematical background of optimal control. Let A be a subset of a topological space X. The smallest closed set in X containing A is called the closure of A. The largest open set contained in A is called the interior of A. The boundary of A is the intersection of the closures of A and X\A. We denote the closure, the interior and the boundary of A by cl(A), int(A) and ∂A, respectively.

2.3. The Hausdorff Distance

By a multifunction we mean a function from a topological space X into the family P(Y) of all subsets of another topological space Y. Closed-valued multifunctions with values contained in a metric space Y play an important role in optimization and optimal control. The family of all closed subsets of Y will be denoted by Cl(Y) and endowed with the metric called the Hausdorff distance on Cl(Y), which is defined below (we utilize this notion in Chapter 5).

Definition 2.8. Let X and Y be two non-empty subsets of a metric space (M, d). We define their Hausdorff distance d_H(X, Y) by

d_H(X, Y) = max { sup over x ∈ X of inf over y ∈ Y of d(x, y), sup over y ∈ Y of inf over x ∈ X of d(x, y) }.     (2.3)

Figure 2.1. The Hausdorff distance of sets X and Y

It can be shown that d_H(X, Y) = inf{ε > 0 : X ⊆ Y_ε and Y ⊆ X_ε}, where X_ε = {z ∈ M : d(z, x) ≤ ε for some x ∈ X} (a set of the form X_ε is sometimes called a generalized ball of radius ε around X). Informally, we can say that two sets are close in the Hausdorff distance if every point of either set is close to some point of the other set. The Hausdorff distance is a common measure of convergence of vector optimization algorithms that aim at an approximation of the set of nondominated points (cf. Chapters 6 and 7). It is well known [126] that the set of weakly nondominated points of a continuous multifunction depends continuously on its argument in the Hausdorff distance, but the set of nondominated points does not. This fact must be taken into account when designing vector optimization algorithms.

2.4. Compactness of a Set

Definition 2.9. A collection C of subsets of X is said to be a covering of X if the union of the elements of C is equal to X (in this case we say that C covers X). If the elements of C are open sets, then C is called an open covering of X.

Definition 2.10. We say that a subset K of a topological space X is compact if every open covering of K contains a finite subcollection that also covers K. If X itself is compact, then we say that X is a compact space.

Note that IRⁿ equipped with the natural topology is not a compact space: for example, the open covering consisting of the balls K(0, n), n ∈ IN, contains no finite subcovering. However, every point of IRⁿ has a neighbourhood whose closure is compact. A topological space having this property is called a locally compact space. In general, it takes some effort to decide whether a given topological space is compact or not. In the case of X = IRⁿ with the topology induced by the Euclidean metric, the compactness of a set can be equivalently described by the following elegant condition: a subset K of IRⁿ is compact if and only if it is closed and bounded. Since the definition of boundedness involves the notion of a distance, this useful characterization of compactness does not make sense in general (i.e. not necessarily metric) topological spaces.
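For finite subsets of a metric space, formula (2.3) can be evaluated directly. The following Python sketch is illustrative only (the function name is ours, and the Euclidean metric on IR² is assumed as the ambient metric d):

```python
from math import dist  # Euclidean metric on IR^n (Python 3.8+)

def hausdorff(X, Y):
    """Hausdorff distance (2.3): the maximum of the two directed distances."""
    d_xy = max(min(dist(x, y) for y in Y) for x in X)  # sup_x inf_y d(x, y)
    d_yx = max(min(dist(x, y) for x in X) for y in Y)  # sup_y inf_x d(x, y)
    return max(d_xy, d_yx)

# every point of either set is within 1 of the other set, and the bound is attained
X = [(0.0, 0.0), (1.0, 0.0)]
Y = [(0.0, 1.0), (1.0, 0.0)]
print(hausdorff(X, Y))  # → 1.0
```

Note the quadratic cost in the set sizes; this brute-force form is adequate for measuring the convergence of finite Pareto-set approximations as mentioned above.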

2.5. Continuity of a Function

Definition 2.11. Let (X, τ_X) and (Y, τ_Y) be topological spaces. A function f: X → Y is said to be continuous if for each open subset V of Y, the preimage of V under f is an open subset of X.

Theorem 2.1. For f: X → Y, the following conditions are equivalent:
a) f is continuous,
b) for each x ∈ X and each neighbourhood V of f(x), there exists a neighbourhood U of x such that f(U) ⊆ V.

If condition b) holds for a point x ∈ X, then we say that f is continuous at x. The continuity of a function can be expressed in a more convenient way if the topologies on X and Y are induced by metrics. Namely, if (X, ρ) and (Y, d) are metric spaces, then the continuity of f: X → Y at a point x ∈ X is equivalent to the requirement that for every ε > 0 there exists δ > 0 such that for all y ∈ X we have

ρ(x, y) < δ ⟹ d(f(x), f(y)) < ε.

We are now in a position to formulate the Weierstrass theorem, which plays an important role in mathematical analysis.

Theorem 2.2. Let (X, τ) be a compact topological space. A real-valued continuous function on a nonempty compact set contained in X attains its maximum and its minimum, each at least once.

The above result is posed in a very general setting. In particular, it holds for differentiable functions (as well as for those of higher regularity), also considered in this thesis, defined on compact subsets of IRⁿ.

2.6. Convex Hull of a Set

Definition 2.12. Let V be a vector space and X an arbitrary subset of V. X is said to be convex if and only if [x, y] ⊆ X for all x, y ∈ X, where [x, y] stands for the set {tx + (1 − t)y : t ∈ [0, 1]}.

Definition 2.13. The convex hull (or convex envelope) of a set of points X in a real vector space V is the minimal convex set containing X.

In two-dimensional spaces the convex hull can be represented by the sequence of the vertices of the line segments forming the boundary of a polygon, ordered along that boundary [128]. To verify that the convex hull of a set X exists, notice that X is contained in at least one convex set (the whole space V, for example), and any intersection of convex sets containing X is also a convex set containing X. It follows that the convex hull is the intersection of all convex sets containing X; this can be used as an alternative definition of the convex hull. Algebraically, the convex hull H_convex(X) of X can be characterized as follows:

H_convex(X) = { Σᵢ₌₁ᵏ αᵢxᵢ : xᵢ ∈ X, αᵢ ∈ IR, αᵢ ≥ 0, Σᵢ₌₁ᵏ αᵢ = 1, k = 1, 2, … }.

2.7. Separation Theorems

In this section we give a brief overview of the so-called separation results, which can be formulated either in algebraic or in geometric form. For the purposes of this thesis, the latter is more convenient.

Definition 2.14. Let V be a real vector space. A real-valued function f on V is called sublinear if
f(γx) = γf(x) for all γ ≥ 0 and x ∈ V (positive homogeneity),
f(x + y) ≤ f(x) + f(y) for all x, y ∈ V (subadditivity).

Note that every seminorm on V (and so every norm on V) is sublinear. First, let us recall a classical result of functional analysis obtained independently by H. Hahn and S. Banach.

Theorem 2.3. Let X be a real vector space and p: X → IR a sublinear function. Suppose that λ: Y → IR is a linear functional defined on a linear subspace Y of X which satisfies λ(x) ≤ p(x) for all x ∈ Y. Then there exists a linear functional Λ: X → IR such that Λ(x) ≤ p(x) for all x ∈ X, and Λ(x) = λ(x) for all x ∈ Y.

Note that the above Hahn-Banach theorem can be extended to the case of a complex vector space X. From its original form one can also derive its geometric version, formulated below, called the separation theorem.

Definition 2.15. If λ is a real-valued linear functional defined on a vector space X and a ∈ IR, then the set {x ∈ X : λ(x) = a} is called a hyperplane.

Definition 2.16. We say that two sets A and B are separated by a hyperplane if there exist a continuous real-valued linear functional λ and a ∈ IR such that λ(x) ≤ a for all x ∈ A and λ(x) ≥ a for all x ∈ B. If λ(x) < a for all x ∈ A and λ(x) > a for all x ∈ B, then A and B are strictly separated by a hyperplane.

Theorem 2.4. Suppose that A and B are disjoint convex sets in a locally convex vector space X. Then the following conditions hold:
a) If A or B is open, then they can be separated by a hyperplane.
b) If A and B are both open, or A is compact and B is closed, they can be strictly separated by a hyperplane.

Separation theorems are often used to prove sufficient optimality conditions, in both the single- and the multiple-criteria case.
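The strict-separation property defined above can be checked numerically for finite point sets. The sketch below is a toy illustration of our own (the sets, the functional λ(x) = x₁ and the threshold a = 1 are hypothetical choices, not taken from the text):

```python
def separates_strictly(lam, a, A, B):
    """Check strict separation: lam(x) < a on all of A and lam(x) > a on all of B."""
    return all(lam(x) < a for x in A) and all(lam(x) > a for x in B)

lam = lambda x: x[0]           # a continuous linear functional on IR^2
A = [(0.0, 0.0), (0.5, 1.0)]   # points with first coordinate below 1
B = [(2.0, 0.0), (3.0, -1.0)]  # points with first coordinate above 1
print(separates_strictly(lam, 1.0, A, B))  # → True
```

Since λ is linear, the same inequalities then hold on the convex hulls of A and B, so the hyperplane {x : x₁ = 1} strictly separates the two hulls as well.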

3. An Introduction to Optimization

3.1. Introduction

In this chapter we provide the basic notions of optimization in n-dimensional real spaces (multiple-variable optimization), of multicriteria optimization, and of the Nelder-Mead algorithm, the latter also called the downhill simplex or amoeba method. The methods of finding function extrema can be divided into stochastic and deterministic ones.

Stochastic Methods
The characteristic feature of these methods is the use of random mechanisms for finding the extrema of the target function. The stochastic methods include, e.g., simulated annealing and evolutionary processing.

Deterministic Methods
We can distinguish between gradient and non-gradient deterministic methods. The gradient methods demand knowledge of the gradient (the first derivative), whereas the non-gradient methods do not require the gradient to be known; the latter are often called direct search methods.

When can the non-gradient methods be employed? When the function is not differentiable but is continuous in the Lipschitz sense, the so-called direct search (non-gradient) methods can be applied. Some of these methods search for the most promising descent directions. Recall that if (X, ρ) and (Y, d) are metric spaces, we say that a function f: X → Y fulfils the Lipschitz condition if there exists a constant L > 0 such that

d(f(x₁), f(x₂)) ≤ L ρ(x₁, x₂) for all x₁, x₂ ∈ X.

A function fulfilling the Lipschitz condition is uniformly continuous. Examples of optimization algorithms using non-gradient methods:
1. Nelder-Mead Method [91],
2. Hooke-Jeeves Method [68], also called Pattern Search,
3. Rosenbrock Method [87],
4. Gauss-Seidel Method [96],
5. Powell's Method [48],
6. Zangwill's Method [10].

3.2. The Nelder-Mead Method

In this subsection we present the basics of the Nelder-Mead algorithm, which belongs to the class of direct search methods. It maintains a set of temporary points that form a simplex in the decision space. A simplex is a figure that has one more vertex than the dimension of the domain of the target function, with affinely independent vertices. The algorithm was created in 1965 by Nelder and Mead and is also widely known as the Downhill Simplex Method. More on the Nelder-Mead algorithm is contained in Chapter 6; here we shall concentrate on optimizing the method's parameters. The method works well for nonlinear functions, but demands a great amount of numerical work, especially when the number of decision variables is large. That is why it is recommended to use this algorithm for target functions of no more than 10 variables.

A one-dimensional simplex is a segment with two vertices, a two-dimensional simplex is a triangle, and generally an n-dimensional simplex with n + 1 vertices is the set of all points specified by the vectors

x = Σⱼ₌₁ⁿ⁺¹ xⱼSⱼ, where Σⱼ₌₁ⁿ⁺¹ xⱼ = 1 and xⱼ ≥ 0.     (3.1)

So this is a regular polyhedron with n + 1 vertices coinciding with the n + 1 basis vectors Sⱼ; the coordinates of a point of the simplex are denoted by xⱼ [90].

The basic operations:
Reflection of the point P_h across S_r:  P* = (1 + a)S_r − aP_h
Expansion of the point P* across S_r:  P** = (1 + c)P* − cS_r

23 Contraction of the point P h across S r P = bp h + (1 b)s r Shrinking of the point P h across S r Notation: P = P (P S r ) a Reflection coefficient (assumed 1) c Expansion coefficient (assumed 2) b Contraction coefficient (assumed 0.5) d Shrinking coefficient (assumed 0.5) P L chosen vertex point of the simplex among n+1 vertices P i, the point where the function reaches its minimum P h chosen vertex point of the simplex among n+1 vertices P i, the point where the function reaches its maximum S r the centre of symmetry of the simplex not including the P h point defined as N iteration number S j = n+1 j=1 P j n, where j h (3.2) Steps of the basic algorithm: 1. Calculation of the dependent variable of the target function in the vertex points of the simplex, F j = f(p j ) for j = 1,..., n Determination of h and L such that f(p h ) = max and f(p L ) = min among set F j. 3. Calculation of the centre of symmetry for S r simplex. 4. Reflection of P point P h across S r. 5. Calculation of the dependent variable of the function F S = f(s r ) and F 0 = f(p ). If F 0 < min, then: 6. Calculation P (expansion) and the dependent variable F e = f(p ). 7. If F e < max, we substitute P h = P, otherwise P h = P. 8. The repetition of the first step algorithm if the criterion for the minimum is not fulfilled

If F_0 > min, then:
9. If F_0 >= f(P_j) for j = 1, ..., n+1 (not including j = h) and F_0 <= max, go to the next step; if F_0 < max, substitute P_h = P*.
10. Contract the point P_h towards S_r, obtaining P**.
11. Calculate F_k = f(P**).
12. If F_k >= max, reduce the simplex according to the formula P_j = 0.5 (P_j + P_L) for j = 1, ..., n+1.
13. Whereas, if F_k < max, substitute P_h = P** and continue with the next step.
14. If F_0 < f(P_j) for some j = 1, ..., n+1 (not including j = h), substitute P_h = P*.
15. Return to step 1 if the stopping criterion is not fulfilled.

Algorithm Modification - Multiple Reflections

The step of the algorithm in which the point P_h is reflected across S_r to obtain P* undergoes modification. Reflections are made for two points, P_h1 and P_h2, across their respective symmetry centres S_r1 and S_r2, where P_h1 and P_h2 are points with f(P_L) < f(P_h1) and f(P_L) < f(P_h2). Next, F_h1 = f(P_h1) and F_h2 = f(P_h2) are computed. If F_h1 < F_h2, the point corresponding to F_h1 is kept for the next step; otherwise the point corresponding to F_h2 is kept.

Convergence Criteria

In the original version of the algorithm the simplex is transformed until the distance between its vertices, close to the sought extremum, becomes smaller than the assumed calculation accuracy e > 0. In the modified version there are various stopping criteria:
d < e, where d is the sum of the lengths of all simplex edges,
|F_h - F_L| < e,

|P_h^k - P_L^k| < e, where k = 1, ..., N,
|P_h^k - P_L^{k-1}| < e, where k = 1, ..., N.

Optimization of the Coefficients for a Set of Functions

In order to optimize the algorithm coefficients for a set of functions, the following steps should be performed:
Create a table (cell array) with the handles of the functions.
Create the initial simplex from which the optimization of the functions will be started.
Create the simplex of the algorithm coefficients. There are four coefficients: reflection, expansion, contraction and shrinking.
Call the function optimizing the coefficients, with the variables created in the previous steps as parameters.

Example

The coefficients will be optimized for four functions: ackley, beale, bh1, booth. These are functions of two variables; that is why the initial simplex from which the function values will be optimized needs to have three vertices, each of them with two coordinates. The algorithm uses four coefficients; that is why the coefficient simplex needs to have five vertices, each of them specified by four coordinates.

Definitions [169]:

Ackley function:
f(x) = 20 + e - 20 e^{-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}} - e^{\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)}.

Beale function:
f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2.

Bohachevsky functions:
f_1(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7,
f_2(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1)\cos(4\pi x_2) + 0.3,
f_3(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1 + 4\pi x_2) + 0.3.

Booth function:
f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2.

Griewank function:
f(x) = \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1.

Hump function:
f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4.

Rastrigin function:
f(x) = 10n + \sum_{i=1}^{n} (x_i^2 - 10\cos(2\pi x_i)).

Creation of a table with the handles of the functions:
»
Creation of the initial simplex from which the optimization of the functions will be started:
» simplex_function = [0,0; 0,1; 1,0]
Creation of the simplex of the algorithm coefficients; the four coefficients are reflection, expansion, contraction and shrinking:
» simplex_coefficients = [ 1, 2, 0.5, 0.5;
0.8, 1.6, 0.4, 0.4;
1.2, 2, 0.6, 0.5;
1, 2.2, 0.5, 0.7;
1, 2, 0.3, 0.3 ]
Calling the function optimizing the coefficients, with the variables created in the previous steps as parameters:
» nm_opt_cof (function, simplex_function, simplex_coefficients)

Applications

We have developed a procedure aimed at finding the optimal point while taking into account the values of the two described functions. The functions introduced should be recognizable from the Matlab command line and should take their arguments in the form of a table.

Optimization of two functions
1. Call the first function with its name and argument, e.g. fun1(x).
2. Call the second function with its name and argument, e.g. fun2(x).
3. Click on the optimize button.
4. The optimal point for the two given functions will appear as the optimal point.
Optionally:
2a. Set up the initial simplex.
2b. Set up the coefficients of the algorithm, or order the automatic optimization of the coefficients by clicking on optimize the coefficients.

Drawing graphs of the functions
1. Optimize the functions (as above).
2. Click on Draw functions.
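The session above relies on the thesis's Matlab routine nm_opt_cof. As a rough illustration of the same idea (not the thesis code), the Python sketch below implements a fixed-budget Nelder-Mead parameterized by the four coefficients (a, c, b, d) and then scores each of the five candidate coefficient vectors from simplex_coefficients on two of the test functions; the full program instead runs a complete Nelder-Mead search in the coefficient space. The function nm_best and the scoring rule are assumptions introduced only for this sketch.

```python
import numpy as np

def nm_best(f, simplex, coef, budget=200):
    """Fixed-budget Nelder-Mead with reflection (a), expansion (c),
    contraction (b) and shrinking (d) coefficients; returns the best
    function value found."""
    a, c, b, d = coef
    P = np.array(simplex, dtype=float)            # (n+1) x n vertex array
    for _ in range(budget):
        F = np.array([f(p) for p in P])
        h, L = int(np.argmax(F)), int(np.argmin(F))
        Sr = (P.sum(axis=0) - P[h]) / (len(P) - 1)   # centroid without P_h
        Pr = (1 + a) * Sr - a * P[h]                 # reflection
        if f(Pr) < F[L]:
            Pe = (1 + c) * Pr - c * Sr               # expansion
            P[h] = Pe if f(Pe) < f(Pr) else Pr
        elif f(Pr) < F[h]:
            P[h] = Pr                                # accept reflection
        else:
            Pc = b * P[h] + (1 - b) * Sr             # contraction
            if f(Pc) < F[h]:
                P[h] = Pc
            else:
                P = P[L] + d * (P - P[L])            # shrink towards P_L
    return min(f(p) for p in P)

booth = lambda v: (v[0] + 2 * v[1] - 7) ** 2 + (2 * v[0] + v[1] - 5) ** 2
ackley = lambda v: (20 + np.e
                    - 20 * np.exp(-0.2 * np.sqrt(np.mean(np.square(v))))
                    - np.exp(np.mean(np.cos(2 * np.pi * np.asarray(v)))))

start = [[0, 0], [0, 1], [1, 0]]     # initial simplex from the text

def score(coef):                     # quality of one coefficient set
    return nm_best(booth, start, coef) + nm_best(ackley, start, coef)

coef_simplex = [[1, 2, 0.5, 0.5], [0.8, 1.6, 0.4, 0.4],
                [1.2, 2, 0.6, 0.5], [1, 2.2, 0.5, 0.7], [1, 2, 0.3, 0.3]]
best = min(coef_simplex, key=score)  # best of the five candidate vertices
```

Scoring only the five vertices is a deliberate simplification; the point is that coefficient tuning reduces to an ordinary minimization over the 4-dimensional coefficient space.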

Optionally:
2a. To have the possibility to rotate the graphs, click on switch rotation on.

The main window of this application is shown in Figure 3.1. After clicking optimize coefficients for many functions the program opens a new window, see Figure 3.2. In this window the program allows for the automatic optimization of the Nelder-Mead algorithm coefficients with respect to any number of functions. The functions field should contain function handles (the function name following the @ sign), separated by commas.

Figure 3.1. The main window of the program

Optimization of the coefficients
1. Introduce the function handles in the functions field, e.g.

2. Type in the initial simplex which will be used for the optimization of the described functions, e.g. [0,0; 0,1; 1,0].
3. Introduce the simplex of the coefficients according to their sequence: reflection, expansion, contraction, shrinking.
4. Click on optimize and wait until the algorithm finishes its work.
5. In the optimal coefficients fields the optimal coefficients will appear, i.e. those with which the optimization of the described functions is most efficient.

Tests Applied

These tests were applied to different sets of functions. They concern computing the minimum as well as the optimization of the algorithm coefficients.

Figure 3.2. Window responsible for the optimization of the coefficients for many functions

Notation:

Searching for the Minimum

min1 - minimum computed for both functions
min2 - minimum computed for one function

Test for Ackley and Booth functions

Ackley function
number of variables: n
global minimum: x_min = (0, ..., 0), f(x_min) = 0

Booth function
number of variables: 2
global minimum: x_min = (1, 3), f(x_min) = 0

Test 1

Results
Initial simplex: (3,3; 3,6; 6,3)
Movement of the simplex is shown in Fig. 3.3
Optimal parameters: (0, 3)

Table 3.1. Ackley, Booth minimum (test 1)
        Ackley          Booth
min1    6,...           ...
min2    8,881784e-016   6,...e-...
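The global minima quoted above can be verified directly from the function definitions given earlier; a quick Python check (a sketch for illustration only — the thesis experiments themselves were run in Matlab).

```python
import math

def ackley(x):
    """Ackley function for an n-dimensional point x."""
    n = len(x)
    s = math.sqrt(sum(v * v for v in x) / n)
    c = sum(math.cos(2 * math.pi * v) for v in x) / n
    return 20 + math.e - 20 * math.exp(-0.2 * s) - math.exp(c)

def booth(x):
    """Booth function of two variables."""
    x1, x2 = x
    return (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2

# Values at the global minima quoted in the text (zero up to rounding):
a0 = ackley([0.0, 0.0])
b0 = booth([1.0, 3.0])
```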

Figure 3.3. Simplex on the function contour (Ackley on the left, Booth on the right)

Test 2

Results
Initial simplex: (0,0; 2,4; 4,2)
Movement of the simplex is shown in Fig. 3.4
Optimal parameters: (1, 2)

Table 3.2. Ackley, Booth minimum (test 2)
        Ackley          Booth
min1    5,...           ...
min2    8,881784e-016   8,082975e-...

Figure 3.4. Simplex on the function contour (Ackley on the left, Booth on the right)

Test 3

Results
Initial simplex: (-1,1; -2,3; -4,3)
Movement of the simplex is shown in Fig. 3.5
Optimal parameters: (1, 1)

Table 3.3. Ackley, Booth minimum (test 3)
        Ackley          Booth
min1    3,...           ...
min2    2,579934e+000   1,022467e+...

Figure 3.5. Simplex on the function contour (Ackley on the left, Booth on the right)

Test for Hump and Beale functions

Hump function
number of variables: 2
global minima: x_min = (0.0898, -0.7126) and (-0.0898, 0.7126), f(x_min) = 0

Beale function
number of variables: 2
global minimum: x_min = (3, 0.5), f(x_min) = 0

Test 1

Results
Initial simplex: (3,3; 3,6; 6,3)
Movement of the simplex is shown in Fig. 3.6
Optimal parameters: ( , )

Table 3.4. Hump and Beale minimum (test 1)
        Hump            Beale
min1    7,...           ...,11882
min2    5,734565e-007   6,100249e-...

Figure 3.6. Simplex on the function contour (Hump on the left, Beale on the right)

Test 2

Results
Initial simplex: (0,0; 2,4; 4,2)
Movement of the simplex is shown in Fig. 3.7
Optimal parameters: ( , )

Table 3.5. Hump and Beale minimum (test 2)
        Hump            Beale
min1    0,...           ...,6825
min2    8,161705e-001   1,420313e+001

Test 3

Results
Initial simplex: (-1,1; -2,3; -4,3)
Movement of the simplex is shown in Fig. 3.8
Optimal parameters: (0.25, 0.75)

Table 3.6. Hump and Beale minimum (test 3)
        Hump            Beale
min1    0,...           ...,8014
min2    2,974622e-006   1,320323e+001

Figure 3.7. Simplex on the function contour (Hump on the left, Beale on the right)

Test for Rastrigin and Griewank functions

Rastrigin function
number of variables: n
global minimum: x_min = (0, ..., 0), f(x_min) = 0

Griewank function
number of variables: n
global minimum: x_min = (0, ..., 0), f(x_min) = 0

Test 1

Results
Initial simplex: (3,3; 3,6; 6,3)
Movement of the simplex is shown in Fig. 3.9
Optimal parameters: (3,1875; 3,09375)

Table 3.7. Rastrigin and Griewank minimum (test 1)
        Rastrigin   Griewank
min1    10,2344     0,...
min2    0           2,288969e-002

Figure 3.8. Simplex on the function contour (Hump on the left, Beale on the right)

Test 2

Results
Initial simplex: (0,0; 2,4; 4,2)
Movement of the simplex is shown in Fig. 3.10
Optimal parameters: (0, 0)

Table 3.8. Rastrigin and Griewank minimum (test 2)
        Rastrigin   Griewank
min1    0           0
min2    ...         ...

Test 3

Results
Initial simplex: (-1,1; -2,3; -4,3)
Movement of the simplex is shown in Fig. 3.11
Optimal parameters: ( , )

Table 3.9. Rastrigin and Griewank minimum (test 3)
        Rastrigin       Griewank
min1    0,...           ...
min2    9,94621e-001    9,94621e-001

Figure 3.9. Simplex on the function contour (Rastrigin on the left, Griewank on the right)

Coefficient Optimization

Test 1
Test for the Ackley, Beale, Hump and Booth functions
Initial simplex: (0,0; 0,1; 1,0)
Simplex of coefficients (reflection, expansion, contraction, shrinking):
(1, 2, 0.5, 0.5; 0.8, 1.6, 0.4, 0.4; 1.2, 2, 0.6, 0.5; 1, 2.2, 0.5, 0.7; 1, 2, 0.3, 0.3)

Optimal coefficients:

Table 3.10. Optimal coefficients (test 1)
reflection   expansion   contraction   shrinking
0,8626       1,9813      0,2625        0,39638

Test 2
Test for the Ackley, Beale, Hump and Booth functions
Initial simplex: (0,0; 0,1; 1,0)
Simplex of coefficients (reflection, expansion, contraction, shrinking):
(1.2, -2.3, 1.5, 3.5; 0.2, -1.6, 1, 2.4; -1.2, 1, 1.6, 3.5; -1, 1.2, 0.1, 3.; -1.3, 2.3, 3.3, -0.3)

Figure 3.10. Simplex on the function contour (Rastrigin on the left, Griewank on the right)

Optimal coefficients:

Table 3.11. Optimal coefficients (test 2)
reflection   expansion   contraction   shrinking
1,6906       -1,7203     -6,125        10,...

Test 3
Test for the Rastrigin, Griewank, Hump and Booth functions
Initial simplex: (0,0; 0,1; 1,0)
Simplex of the coefficients (reflection, expansion, contraction, shrinking):
(1, 2, 0.5, 0.5; 0.8, 1.6, 0.4, 0.4; 1.2, 2, 0.6, 0.5; 1, 2.2, 0.5, 0.7; 1, 2, 0.3, 0.3)

Optimal coefficients:

Table 3.12. Optimal coefficients (test 3)
reflection   expansion   contraction   shrinking
0,9          1,55        0,3           0,125

Figure 3.11. Simplex on the function contour (Rastrigin on the left, Griewank on the right)

Test 4
Test for the Rastrigin, Griewank, Hump and Booth functions
Initial simplex: (0,0; 0,1; 1,0)
Simplex of coefficients (reflection, expansion, contraction, shrinking):
(1.2, -2.3, 1.5, 3.5; 0.2, -1.6, 1, 2.4; -1.2, 1, 1.6, 3.5; -1, 1.2, 0.1, 3.; -1.3, 2.3, 3.3, -0.3)

Optimal coefficients:

Table 3.13. Optimal coefficients (test 4)
reflection   expansion   contraction   shrinking
1,3125       -2,3063     -0,15         6,...

3.8. The Library of the Test Functions

The program for the management of the test functions was created as a part of the multicriteria optimization project. The application was realized in Matlab 2008b with the Database Toolbox.

Connection to the database

How to add the JDBC driver. A JDBC driver is used for the connection with the database. It is not active in Matlab by default, so its location must be specified; follow the instructions on ... In this project the JDBC driver should be available after the following steps:
1. Open the classpath.txt file, which is located in the catalogue <katalogmatlab>/toolbox/local/.
2. At the end of the file paste the address of the JDBC driver, e.g. C:/java/mysqlconnector-java bin.jar. This driver is attached to the project; you can also download it from the internet.
3. Restart Matlab.

Database Creation

The test functions module uses a database whose structure is defined in the tf_db.sql file.
1. The MySQL database should be created by running the script from the tf_db.sql file. This can easily be done with the help of GUI tools for MySQL database management, e.g. MySQL Workbench or the MySQL Query Browser.

Configuration of the connection

For the test functions management module to communicate properly with the database, adequate connection parameters should be set:
Open the tf_database.m file.
Find the text lines with %% CONFIGURATION.
Set the values of the variables containing the connection configuration:
databasename - the name of the database which was created from the tf_db.sql script,
username - the name of the user who can connect to the above-mentioned database,
userpassword - the password for this user,
optionally: host - the name / IP address of the server where the database is available.

Testing the connection

If everything was configured correctly, you can activate the test functions management module by starting tf_gui_main.m. The easiest way to test whether the program can connect to the database is to issue the command:

tf_database

If everything is set properly, no error will be displayed. Otherwise there will be an error message:

??? Error using ==> tf_database>tf_database.tf_database at 42
Access denied for user 'matlaba'@'localhost' (using password: YES)

in this case informing about a wrong username.

3.9. Database Schema

Figure 3.12. Database schema

3.10. Application

Selected application windows are presented below.

Figure 3.13. Main window of the program

Figure 3.14. Window for editing the function categories

Figure 3.15. Window for editing the bibliography

4. Approaches to Solve Global Optimization Problems Occurring in Control

4.1. Local Search Methods

Let us begin by providing an overview of local optimization and by describing several local search algorithms. Next, we discuss the global optimization problem and review standard methods of global optimization; this review highlights the use of local search in these methods. Local search methods have been broadly used in both theoretical computer science and numerical optimization. A classification of global optimization and local search methods is presented in Table 4.1.

Table 4.1. A classification of global optimization and local search methods

Method: Random Local Search
Class of methods: local search methods
Applicability: performing local search on smooth functions without derivative information

Method: Stochastic Approximation
Class of methods: local search methods
Applicability: pattern recognition problems; makes changes to the current solution using randomly selected samples

Method: Conjugate Gradient
Class of methods: local search methods
Applicability: applied to functions that have narrow valleys, for which the steepest descent method is inefficient; one might expect that the first line minimization would take you to the bottom of the valley and the second would finish the minimization

A classification of global optimization methods should also take into account the presence of constraints that restrict the domain of the search [39]. In this dissertation we will both modify methods for unconstrained optimization so that they solve constrained problems, and use methods developed specifically for constrained problems.
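The remark on narrow valleys in Table 4.1 can be illustrated with the linear conjugate gradient method. Below is a minimal Python sketch (an illustration, not code from the thesis) for the quadratic model f(x) = 0.5 x^T A x - b^T x, in which successive search directions are kept A-conjugate instead of repeatedly following the steepest descent.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=50):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive
    definite A by the linear conjugate gradient method."""
    x = np.array(x0, dtype=float)
    r = b - A @ x              # residual = negative gradient of f
    p = r.copy()               # first direction: steepest descent
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:  # gradient small enough: stop
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)  # exact line minimization along p
        x = x + alpha * p
        r = r - alpha * Ap
        beta = (r @ r) / rr    # Fletcher-Reeves-type update
        p = r + beta * p       # new direction, A-conjugate to the old one
    return x

# A badly scaled ("narrow valley") quadratic; the exact minimizer
# solves A x = b, i.e. x = (1, 3).
A = np.array([[100.0, 0.0], [0.0, 1.0]])
b = np.array([100.0, 3.0])
x = conjugate_gradient(A, b, [0.0, 0.0])
```

On an n-dimensional quadratic the method reaches the minimizer in at most n steps in exact arithmetic, which is precisely the two-step behaviour described in the table.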

This chapter focuses on local search methods used for minimizing continuous functions on compact sets. In the previous chapter we outlined several non-gradient methods. The other group, the gradient methods, is based on the use of derivative information such as gradients, ∇f(x), and Hessians, ∇²f(x). Differentiable optimization algorithms can be classified by the highest order of derivatives that they use. Algorithms using derivative information of order greater than zero are somewhat more effective than those which only use function evaluations (order-zero derivatives). However, derivative information demands additional calculation, and these algorithms do not always generate good solutions quickly enough to compensate for the additional expense. First, we will discuss the non-derivative methods proposed by Solis and Wets [128]. Next, conjugate gradient methods [47], [53], which are used to minimize continuous functions using gradient information, will be presented. Finally, stochastic approximation, which is used in pattern recognition to find the optimal weights for parametric models of data [27], will be discussed.

Random Local Search

Solis and Wets [128] propose a family of random local search methods to be used for local search on smooth functions without derivative information. These methods use normally distributed steps to generate new points in the search space. A new point is generated by adding zero-mean normal deviates to every coordinate of the current point. If the value at the new point is worse than at the current point, the algorithm examines the point generated by taking a step in the opposite direction. If neither point is better than the current point, another new point is generated. The algorithm has parameters which automatically reduce or increase the variance of the normal deviates in response to the rate at which better solutions are found.
If new solutions are better often enough, the variance is increased to allow the algorithm to take larger steps. If poorer solutions are repeatedly generated, the variance is decreased to focus the search closer to the current solution. It is important to remember, however, that this algorithm does not have a well-defined stopping condition. Solis and Wets examine several attempts to define stopping criteria for random search techniques, and come to the conclusion that the search for a good stopping
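The adaptive-variance scheme described above can be sketched as follows; this is a simplified Python rendering of a Solis-Wets-style step, where the success/failure thresholds and the expansion/contraction factors are illustrative choices rather than the exact constants of [128].

```python
import random

def solis_wets(f, x0, sigma=1.0, max_iter=2000, seed=0):
    """Random local search with normally distributed steps; the step
    variance grows after repeated successes and shrinks after failures."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    successes = failures = 0
    for _ in range(max_iter):
        step = [rng.gauss(0.0, sigma) for _ in x]
        cand = [xi + si for xi, si in zip(x, step)]
        if f(cand) < fx:                       # forward step improved
            x, fx = cand, f(cand)
            successes, failures = successes + 1, 0
        else:
            mirror = [xi - si for xi, si in zip(x, step)]
            if f(mirror) < fx:                 # try the opposite direction
                x, fx = mirror, f(mirror)
                successes, failures = successes + 1, 0
            else:
                successes, failures = 0, failures + 1
        if successes >= 5:                     # expand after a winning streak
            sigma, successes = sigma * 2.0, 0
        elif failures >= 3:                    # contract after repeated losses
            sigma, failures = sigma * 0.5, 0
    return x, fx

# Sphere function: the search should settle near the origin.
x, fx = solis_wets(lambda v: sum(t * t for t in v), [5.0, -3.0])
```

Note that the loop runs for a fixed iteration budget, mirroring the observation above that the method itself provides no well-defined stopping condition.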


Solving Simultaneous Equations and Matrices

Solving Simultaneous Equations and Matrices Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering

More information

t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d).

t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d). 1. Line Search Methods Let f : R n R be given and suppose that x c is our current best estimate of a solution to P min x R nf(x). A standard method for improving the estimate x c is to choose a direction

More information

Introduction to Topology

Introduction to Topology Introduction to Topology Tomoo Matsumura November 30, 2010 Contents 1 Topological spaces 3 1.1 Basis of a Topology......................................... 3 1.2 Comparing Topologies.......................................

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

THE BANACH CONTRACTION PRINCIPLE. Contents

THE BANACH CONTRACTION PRINCIPLE. Contents THE BANACH CONTRACTION PRINCIPLE ALEX PONIECKI Abstract. This paper will study contractions of metric spaces. To do this, we will mainly use tools from topology. We will give some examples of contractions,

More information

The Heat Equation. Lectures INF2320 p. 1/88

The Heat Equation. Lectures INF2320 p. 1/88 The Heat Equation Lectures INF232 p. 1/88 Lectures INF232 p. 2/88 The Heat Equation We study the heat equation: u t = u xx for x (,1), t >, (1) u(,t) = u(1,t) = for t >, (2) u(x,) = f(x) for x (,1), (3)

More information

(Basic definitions and properties; Separation theorems; Characterizations) 1.1 Definition, examples, inner description, algebraic properties

(Basic definitions and properties; Separation theorems; Characterizations) 1.1 Definition, examples, inner description, algebraic properties Lecture 1 Convex Sets (Basic definitions and properties; Separation theorems; Characterizations) 1.1 Definition, examples, inner description, algebraic properties 1.1.1 A convex set In the school geometry

More information

WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT?

WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? introduction Many students seem to have trouble with the notion of a mathematical proof. People that come to a course like Math 216, who certainly

More information

1 Error in Euler s Method

1 Error in Euler s Method 1 Error in Euler s Method Experience with Euler s 1 method raises some interesting questions about numerical approximations for the solutions of differential equations. 1. What determines the amount of

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e.

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. This chapter contains the beginnings of the most important, and probably the most subtle, notion in mathematical analysis, i.e.,

More information

Convex Programming Tools for Disjunctive Programs

Convex Programming Tools for Disjunctive Programs Convex Programming Tools for Disjunctive Programs João Soares, Departamento de Matemática, Universidade de Coimbra, Portugal Abstract A Disjunctive Program (DP) is a mathematical program whose feasible

More information

Solving Method for a Class of Bilevel Linear Programming based on Genetic Algorithms

Solving Method for a Class of Bilevel Linear Programming based on Genetic Algorithms Solving Method for a Class of Bilevel Linear Programming based on Genetic Algorithms G. Wang, Z. Wan and X. Wang Abstract The paper studies and designs an genetic algorithm (GA) of the bilevel linear programming

More information

CHAPTER 1 Splines and B-splines an Introduction

CHAPTER 1 Splines and B-splines an Introduction CHAPTER 1 Splines and B-splines an Introduction In this first chapter, we consider the following fundamental problem: Given a set of points in the plane, determine a smooth curve that approximates the

More information

Mean Value Coordinates

Mean Value Coordinates Mean Value Coordinates Michael S. Floater Abstract: We derive a generalization of barycentric coordinates which allows a vertex in a planar triangulation to be expressed as a convex combination of its

More information

4. Expanding dynamical systems

4. Expanding dynamical systems 4.1. Metric definition. 4. Expanding dynamical systems Definition 4.1. Let X be a compact metric space. A map f : X X is said to be expanding if there exist ɛ > 0 and L > 1 such that d(f(x), f(y)) Ld(x,

More information

Linear Programming. March 14, 2014

Linear Programming. March 14, 2014 Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

More information

4.1 Learning algorithms for neural networks

4.1 Learning algorithms for neural networks 4 Perceptron Learning 4.1 Learning algorithms for neural networks In the two preceding chapters we discussed two closely related models, McCulloch Pitts units and perceptrons, but the question of how to

More information

The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method

The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method Robert M. Freund February, 004 004 Massachusetts Institute of Technology. 1 1 The Algorithm The problem

More information

Follow links for Class Use and other Permissions. For more information send email to: permissions@pupress.princeton.edu

Follow links for Class Use and other Permissions. For more information send email to: permissions@pupress.princeton.edu COPYRIGHT NOTICE: Ariel Rubinstein: Lecture Notes in Microeconomic Theory is published by Princeton University Press and copyrighted, c 2006, by Princeton University Press. All rights reserved. No part

More information

Machine Learning and Pattern Recognition Logistic Regression

Machine Learning and Pattern Recognition Logistic Regression Machine Learning and Pattern Recognition Logistic Regression Course Lecturer:Amos J Storkey Institute for Adaptive and Neural Computation School of Informatics University of Edinburgh Crichton Street,

More information

How To Prove The Theory Of Topological Structure

How To Prove The Theory Of Topological Structure Part 1 General Topology The goal of this part of the book is to teach the language of mathematics. More specifically, one of its most important components: the language of set-theoretic topology, which

More information

CHAPTER 1 BASIC TOPOLOGY

CHAPTER 1 BASIC TOPOLOGY CHAPTER 1 BASIC TOPOLOGY Topology, sometimes referred to as the mathematics of continuity, or rubber sheet geometry, or the theory of abstract topological spaces, is all of these, but, above all, it is

More information

Advanced Microeconomics

Advanced Microeconomics Advanced Microeconomics Ordinal preference theory Harald Wiese University of Leipzig Harald Wiese (University of Leipzig) Advanced Microeconomics 1 / 68 Part A. Basic decision and preference theory 1 Decisions

More information

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

More information

Solving Geometric Problems with the Rotating Calipers *

Solving Geometric Problems with the Rotating Calipers * Solving Geometric Problems with the Rotating Calipers * Godfried Toussaint School of Computer Science McGill University Montreal, Quebec, Canada ABSTRACT Shamos [1] recently showed that the diameter of

More information

Shortcut sets for plane Euclidean networks (Extended abstract) 1

Shortcut sets for plane Euclidean networks (Extended abstract) 1 Shortcut sets for plane Euclidean networks (Extended abstract) 1 J. Cáceres a D. Garijo b A. González b A. Márquez b M. L. Puertas a P. Ribeiro c a Departamento de Matemáticas, Universidad de Almería,

More information

8.1 Examples, definitions, and basic properties

8.1 Examples, definitions, and basic properties 8 De Rham cohomology Last updated: May 21, 211. 8.1 Examples, definitions, and basic properties A k-form ω Ω k (M) is closed if dω =. It is exact if there is a (k 1)-form σ Ω k 1 (M) such that dσ = ω.

More information

Ideal Class Group and Units

Ideal Class Group and Units Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals

More information

Class Meeting # 1: Introduction to PDEs

Class Meeting # 1: Introduction to PDEs MATH 18.152 COURSE NOTES - CLASS MEETING # 1 18.152 Introduction to PDEs, Fall 2011 Professor: Jared Speck Class Meeting # 1: Introduction to PDEs 1. What is a PDE? We will be studying functions u = u(x

More information

SPERNER S LEMMA AND BROUWER S FIXED POINT THEOREM

SPERNER S LEMMA AND BROUWER S FIXED POINT THEOREM SPERNER S LEMMA AND BROUWER S FIXED POINT THEOREM ALEX WRIGHT 1. Intoduction A fixed point of a function f from a set X into itself is a point x 0 satisfying f(x 0 ) = x 0. Theorems which establish the

More information

Online Convex Optimization

Online Convex Optimization E0 370 Statistical Learning heory Lecture 19 Oct 22, 2013 Online Convex Optimization Lecturer: Shivani Agarwal Scribe: Aadirupa 1 Introduction In this lecture we shall look at a fairly general setting

More information

How To Prove The Dirichlet Unit Theorem

How To Prove The Dirichlet Unit Theorem Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if

More information

2014-2015 The Master s Degree with Thesis Course Descriptions in Industrial Engineering

2014-2015 The Master s Degree with Thesis Course Descriptions in Industrial Engineering 2014-2015 The Master s Degree with Thesis Course Descriptions in Industrial Engineering Compulsory Courses IENG540 Optimization Models and Algorithms In the course important deterministic optimization

More information

Baltic Way 1995. Västerås (Sweden), November 12, 1995. Problems and solutions

Baltic Way 1995. Västerås (Sweden), November 12, 1995. Problems and solutions Baltic Way 995 Västerås (Sweden), November, 995 Problems and solutions. Find all triples (x, y, z) of positive integers satisfying the system of equations { x = (y + z) x 6 = y 6 + z 6 + 3(y + z ). Solution.

More information

Max-Min Representation of Piecewise Linear Functions

Max-Min Representation of Piecewise Linear Functions Beiträge zur Algebra und Geometrie Contributions to Algebra and Geometry Volume 43 (2002), No. 1, 297-302. Max-Min Representation of Piecewise Linear Functions Sergei Ovchinnikov Mathematics Department,

More information

An axiomatic approach to capital allocation

An axiomatic approach to capital allocation An axiomatic approach to capital allocation Michael Kalkbrener Deutsche Bank AG Abstract Capital allocation techniques are of central importance in portfolio management and risk-based performance measurement.

More information

The equivalence of logistic regression and maximum entropy models

The equivalence of logistic regression and maximum entropy models The equivalence of logistic regression and maximum entropy models John Mount September 23, 20 Abstract As our colleague so aptly demonstrated ( http://www.win-vector.com/blog/20/09/the-simplerderivation-of-logistic-regression/

More information

LABEL PROPAGATION ON GRAPHS. SEMI-SUPERVISED LEARNING. ----Changsheng Liu 10-30-2014

LABEL PROPAGATION ON GRAPHS. SEMI-SUPERVISED LEARNING. ----Changsheng Liu 10-30-2014 LABEL PROPAGATION ON GRAPHS. SEMI-SUPERVISED LEARNING ----Changsheng Liu 10-30-2014 Agenda Semi Supervised Learning Topics in Semi Supervised Learning Label Propagation Local and global consistency Graph

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES CHRISTOPHER HEIL 1. Cosets and the Quotient Space Any vector space is an abelian group under the operation of vector addition. So, if you are have studied

More information

Metric Spaces Joseph Muscat 2003 (Last revised May 2009)

Metric Spaces Joseph Muscat 2003 (Last revised May 2009) 1 Distance J Muscat 1 Metric Spaces Joseph Muscat 2003 (Last revised May 2009) (A revised and expanded version of these notes are now published by Springer.) 1 Distance A metric space can be thought of

More information

Understanding Basic Calculus

Understanding Basic Calculus Understanding Basic Calculus S.K. Chung Dedicated to all the people who have helped me in my life. i Preface This book is a revised and expanded version of the lecture notes for Basic Calculus and other

More information

Module1. x 1000. y 800.

Module1. x 1000. y 800. Module1 1 Welcome to the first module of the course. It is indeed an exciting event to share with you the subject that has lot to offer both from theoretical side and practical aspects. To begin with,

More information

PSEUDOARCS, PSEUDOCIRCLES, LAKES OF WADA AND GENERIC MAPS ON S 2

PSEUDOARCS, PSEUDOCIRCLES, LAKES OF WADA AND GENERIC MAPS ON S 2 PSEUDOARCS, PSEUDOCIRCLES, LAKES OF WADA AND GENERIC MAPS ON S 2 Abstract. We prove a Bruckner-Garg type theorem for the fiber structure of a generic map from a continuum X into the unit interval I. We

More information

Notes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand

Notes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand Notes V General Equilibrium: Positive Theory In this lecture we go on considering a general equilibrium model of a private ownership economy. In contrast to the Notes IV, we focus on positive issues such

More information

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination. IEEE/ACM TRANSACTIONS ON NETWORKING 1 A Greedy Link Scheduler for Wireless Networks With Gaussian Multiple-Access and Broadcast Channels Arun Sridharan, Student Member, IEEE, C Emre Koksal, Member, IEEE,

More information

Imprecise probabilities, bets and functional analytic methods in Łukasiewicz logic.

Imprecise probabilities, bets and functional analytic methods in Łukasiewicz logic. Imprecise probabilities, bets and functional analytic methods in Łukasiewicz logic. Martina Fedel joint work with K.Keimel,F.Montagna,W.Roth Martina Fedel (UNISI) 1 / 32 Goal The goal of this talk is to

More information

Lecture 2: August 29. Linear Programming (part I)

Lecture 2: August 29. Linear Programming (part I) 10-725: Convex Optimization Fall 2013 Lecture 2: August 29 Lecturer: Barnabás Póczos Scribes: Samrachana Adhikari, Mattia Ciollaro, Fabrizio Lecci Note: LaTeX template courtesy of UC Berkeley EECS dept.

More information

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year. This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

More information