Second-order conditions in C^{1,1} constrained vector optimization


Ivan Ginchev, Angelo Guerraggio, Matteo Rocca: Second-order conditions in C^{1,1} constrained vector optimization. 2004/17, UNIVERSITÀ DELL'INSUBRIA, FACOLTÀ DI ECONOMIA
These Working Papers collect the work of the Faculty of Economics of the University of Insubria. The publication of work by other authors can be proposed by a member of the Faculty, provided that the paper has been presented in public. The name of the proposer is reported in a footnote. The views expressed in the Working Papers reflect the opinions of the authors only, and not necessarily those of the Faculty of Economics of the University of Insubria.

Copyright Ivan Ginchev, Angelo Guerraggio, Matteo Rocca. Printed in Italy in June 2004. Università degli Studi dell'Insubria, Via Ravasi 2, Varese, Italy. All rights reserved. No part of this paper may be reproduced in any form without permission of the authors.
Second-order conditions in C^{1,1} constrained vector optimization

Ivan Ginchev, Technical University of Varna, Department of Mathematics, 9010 Varna, Bulgaria
Angelo Guerraggio, University of Insubria, Department of Economics, Varese, Italy
Matteo Rocca, University of Insubria, Department of Economics, Varese, Italy

Abstract. We consider the constrained vector optimization problem min_C f(x), g(x) ∈ −K, where f : R^n → R^m and g : R^n → R^p are C^{1,1} functions, and C ⊂ R^m and K ⊂ R^p are closed convex cones with nonempty interiors. Two types of solutions are important for our considerations, namely w-minimizers (weakly efficient points) and i-minimizers (isolated minimizers). In terms of the Dini directional derivative we formulate and prove second-order necessary conditions for a point x^0 to be a w-minimizer and second-order sufficient conditions for x^0 to be an i-minimizer of order two. We discuss the possible reversal of the sufficient conditions under suitable constraint qualifications of Kuhn-Tucker type. The obtained results improve the ones in Liu, Neittaanmäki, Křížek [20].

Key words: Vector optimization, C^{1,1} optimization, Dini derivatives, Second-order conditions, Duality.

Math. Subject Classification: 90C29, 90C30, 90C46, 49J52.

1 Introduction

In this paper we deal with the constrained vector optimization problem

min_C f(x),  g(x) ∈ −K,   (1)

where f : R^n → R^m and g : R^n → R^p are given functions, and C ⊂ R^m and K ⊂ R^p are closed convex cones with nonempty interiors. Here n, m and p are positive integers. In the case when f and g are C^{1,1} functions we derive second-order optimality conditions for a point x^0 to be a solution of this problem. The paper is thought of as a continuation of the investigation initiated by the authors in [7], where first-order conditions are derived in the case when f and g are C^{0,1} functions. Recall that a function is said to be C^{k,1} if it is k times Fréchet differentiable with locally Lipschitz k-th derivative. In particular, the C^{0,1} functions are the locally Lipschitz functions.
The C^{1,1} functions were introduced in Hiriart-Urruty, Strodiot and Hien Nguyen [13] and have since found various applications in optimization. In particular, second-order conditions for C^{1,1} scalar problems are studied in [13, 5, 16, 25, 26]. Second-order optimality conditions in vector optimization are investigated in [1, 4, 15, 21, 24], and, as far as C^{1,1} vector optimization is concerned, in [9, 10, 18, 19, 20]. The approach and the results given in the present paper generalize those of [20]. The assumption that f and g are defined on the whole space R^n is made for convenience. Since we deal only with local solutions of problem (1), our results evidently carry over straightforwardly to functions f and g defined on an open subset of R^n. Usually the solutions of (1) are called points of efficiency. We prefer, as in scalar optimization, to call them minimizers. In Section 2 we define different types of minimizers. Among them, an important role in our considerations is played by the w-minimizers (weakly
efficient points) and the i-minimizers (isolated minimizers). When we say first- or second-order conditions we mean, as usual, conditions expressed through suitable first- or second-order derivatives of the given functions. Here we deal with the Dini directional derivatives. In Section 3 the first-order Dini derivative is defined and, following [7], first-order optimality conditions for C^{0,1} functions are recalled. In Section 4 we define the second-order Dini derivative and formulate and prove second-order optimality conditions for C^{1,1} functions. We note that our results improve the ones obtained in Liu, Neittaanmäki, Křížek [20]. In Section 5 we discuss the possibility of reversing the obtained sufficient conditions under suitable constraint qualifications.

2 Preliminaries

For the norm and the dual pairing in the considered finite-dimensional spaces we write ‖·‖ and ⟨·, ·⟩. From the context it should be clear to exactly which spaces these notations apply. For a cone M ⊂ R^k its positive polar cone M′ is defined by M′ = {ζ ∈ R^k : ⟨ζ, φ⟩ ≥ 0 for all φ ∈ M}. The cone M′ is closed and convex. It is known that M″ := (M′)′ = cl co M, see Rockafellar [23, Theorem 14.1, page 121]. In particular, for a closed convex cone M we have M′ = {ζ ∈ R^k : ⟨ζ, φ⟩ ≥ 0 for all φ ∈ M} and M = M″ = {φ ∈ R^k : ⟨ζ, φ⟩ ≥ 0 for all ζ ∈ M′}. If φ ∈ −cl conv M, then ⟨ζ, φ⟩ ≤ 0 for all ζ ∈ M′. We set M′[φ] = {ζ ∈ M′ : ⟨ζ, φ⟩ = 0}. Then M′[φ] is a closed convex cone and M′[φ] ⊂ M′. Consequently its positive polar cone M[φ] := (M′[φ])′ is a closed convex cone, M ⊂ M[φ], and its positive polar cone satisfies (M[φ])′ = M′[φ]. In this paper we apply this notation for M = K and φ = g(x^0). Then we deal with the cones K′[g(x^0)] (we call this cone the index set of problem (1) at x^0) and K[g(x^0)]. For a closed convex cone M we apply in the sequel the notation Γ_M = {ζ ∈ M : ‖ζ‖ = 1}. The set Γ_M is compact, since it is closed and bounded.
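The bipolar relation M″ = cl co M can be illustrated numerically. The following Python sketch (ours, not part of the paper; all identifiers are hypothetical) approximates the positive polar of the nonconvex union of two rays in R^2 by testing sampled unit directions, and checks that the bipolar coincides with the convex closure, here the first quadrant.

```python
import numpy as np

def positive_polar(cone_dirs, grid):
    """Approximate M' = {z : <z, phi> >= 0 for all phi in M} by testing each
    candidate unit direction in `grid` against the directions spanning M."""
    return [z for z in grid if all(np.dot(z, g) >= -1e-12 for g in cone_dirs)]

# M: the nonconvex union of the two rays through (1, 0) and (0, 1).
rays = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# A fine grid of unit directions in R^2.
angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
grid = [np.array([np.cos(a), np.sin(a)]) for a in angles]

polar = positive_polar(rays, grid)      # M': the first quadrant R^2_+
bipolar = positive_polar(polar, grid)   # M'' = cl co M: again R^2_+

# Every direction in M' and in M'' has nonnegative coordinates (up to sampling error):
assert all(min(z) >= -1e-9 for z in polar)
assert all(min(z) >= -1e-9 for z in bipolar)
```

The sampling is crude, but it shows concretely how taking the polar twice convexifies the original cone.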
Now we recall the concept of the so-called oriented distance from a point to a set, applied to give a scalar characterization of certain types of solutions of vector optimization problems. Given a set A ⊂ R^k, the distance from y ∈ R^k to A is d(y, A) = inf{‖a − y‖ : a ∈ A}. The oriented distance from y to A is defined by D(y, A) = d(y, A) − d(y, R^k \ A). The function D was introduced in Hiriart-Urruty [11, 12], and since then many authors have shown its usefulness in optimization. Ginchev, Hoffmann [8] apply the oriented distance to study the approximation of set-valued functions by single-valued ones and, in the case of a convex set A, show the representation D(y, A) = sup_{‖ξ‖=1} (⟨ξ, y⟩ − sup_{a∈A} ⟨ξ, a⟩) (this formula resembles [23] the conjugate of the support function of A). From here, when A = −C and C is a closed convex cone, taking into account that sup_{a∈−C} ⟨ξ, a⟩ equals 0 for ξ ∈ C′ and +∞ for ξ ∉ C′, we easily get

D(y, −C) = sup_{‖ξ‖=1, ξ∈C′} ⟨ξ, y⟩.

In terms of the distance function we have K[g(x^0)] = {w ∈ R^p : lim sup_{t→0^+} (1/t) d(g(x^0) + tw, −K) = 0}, that is, K[g(x^0)] is the contingent cone [3] of −K at g(x^0). We call the solutions of problem (1) minimizers. The solutions are understood in a local sense. In any case a solution is a feasible point x^0, that is, a point satisfying the constraint g(x^0) ∈ −K. The feasible point x^0 is said to be a w-minimizer (weakly efficient point) for (1) if there exists a neighbourhood U of x^0 such that f(x) ∉ f(x^0) − int C for all x ∈ U ∩ g^{−1}(−K). The feasible point x^0 is said to be an e-minimizer (efficient point) for (1) if there exists a neighbourhood U of x^0 such that
f(x) ∉ f(x^0) − (C \ {0}) for all x ∈ U ∩ g^{−1}(−K). We say that the feasible point x^0 is an s-minimizer (strong minimizer) for (1) if there exists a neighbourhood U of x^0 such that f(x) ∉ f(x^0) − C for all x ∈ (U \ {x^0}) ∩ g^{−1}(−K). In [7], through the oriented distance, the following characterization is derived: the feasible point x^0 is a w-minimizer (s-minimizer) for the vector problem (1) if and only if x^0 is a minimizer (strong minimizer) for the scalar problem

min D(f(x) − f(x^0), −C),  g(x) ∈ −K.   (2)

Here we have an example of a scalarization of the vector optimization problem. By scalarization we mean a reduction of the vector problem to an equivalent scalar optimization problem. We introduce another concept of optimality. We say that the feasible point x^0 is an i-minimizer (isolated minimizer) of order k, k > 0, for (1) if there exist a neighbourhood U of x^0 and a constant A > 0 such that

D(f(x) − f(x^0), −C) ≥ A ‖x − x^0‖^k for x ∈ U ∩ g^{−1}(−K).

Although the notion of isolated minimizer is defined through norms, the following reasoning shows that it is in fact norm-independent. From the nonnegativity of the right-hand side, the definition of an i-minimizer of order k can equivalently be given by the inequality d(f(x) − f(x^0), −C) ≥ A ‖x − x^0‖^k for x ∈ U ∩ g^{−1}(−K). Assume that another pair of norms ‖·‖_1 in the original and the image space is introduced. As is known, any two norms in a finite-dimensional space over the reals are equivalent. Therefore there exist positive constants α_1, α_2, β_1, β_2 such that α_1 ‖x‖ ≤ ‖x‖_1 ≤ α_2 ‖x‖ for all x ∈ X := R^n and β_1 ‖y‖ ≤ ‖y‖_1 ≤ β_2 ‖y‖ for all y ∈ Y := R^m. Denote by d_1 the distance associated with the norm ‖·‖_1. For A ⊂ Y we have d_1(y, A) = inf{‖y − a‖_1 : a ∈ A} ≥ inf{β_1 ‖y − a‖ : a ∈ A} = β_1 d(y, A), whence

d_1(f(x) − f(x^0), −C) ≥ β_1 d(f(x) − f(x^0), −C) ≥ β_1 A ‖x − x^0‖^k ≥ (β_1 A / α_2^k) ‖x − x^0‖_1^k for x ∈ U ∩ g^{−1}(−K).

Therefore, if x^0 is an i-minimizer of order k for (1) with respect to the pair of norms ‖·‖, it is also an i-minimizer with respect to the pair of norms ‖·‖_1; only the constant A has to be changed to β_1 A / α_2^k. Obviously, each i-minimizer is an s-minimizer.
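Both the dual representation of the oriented distance and the growth inequality defining an i-minimizer can be checked numerically. In the Python sketch below (ours, not from the paper; the identifiers are hypothetical), the first part compares D(y, −C) computed from the definition with the formula sup_{‖ξ‖=1, ξ∈C′} ⟨ξ, y⟩ for C = R^2_+; the second part takes the unconstrained toy data f(x) = ‖x‖^2, x^0 = 0, C = R_+, for which x^0 is an i-minimizer of order two (with A = 1) but not of order one.

```python
import numpy as np

def D_bruteforce(y):
    """Oriented distance D(y, -C) = d(y, -C) - d(y, complement) for C = R^2_+,
    using the closed-form projection onto the nonpositive orthant."""
    y = np.asarray(y, dtype=float)
    d_in = np.linalg.norm(np.maximum(y, 0.0))     # distance to -C
    d_out = 0.0 if d_in > 0 else -np.max(y)       # distance to the complement of -C
    return d_in - d_out

def D_dual(y, samples=20000):
    """The text's dual formula: D(y, -C) = sup over unit xi in C' of <xi, y>.
    Here C' = R^2_+, so the unit vectors of C' sweep the first-quadrant arc."""
    a = np.linspace(0.0, np.pi / 2, samples)
    xis = np.stack([np.cos(a), np.sin(a)], axis=1)
    return float(np.max(xis @ np.asarray(y, dtype=float)))

for y in [(3.0, -1.0), (-2.0, -5.0), (0.5, 0.5), (-1.0, 2.0)]:
    assert abs(D_bruteforce(y) - D_dual(y)) < 1e-3

# i-minimizer of order two (scalar image, C = R_+, no active constraint):
# for f(x) = ||x||^2 and x0 = 0 we have D(f(x) - f(x0), -R_+) = f(x) >= 1 * ||x||^2,
# while along x = (t, 0) the ratio D / ||x|| equals t and tends to 0: order one fails.
f = lambda x: float(x @ x)
rng = np.random.default_rng(0)
xs = rng.uniform(-0.5, 0.5, size=(1000, 2))
assert all(f(x) >= np.linalg.norm(x) ** 2 - 1e-12 for x in xs)
assert f(np.array([1e-7, 0.0])) / 1e-7 < 1e-6
```

The brute-force and dual values agree on points inside, outside and on the boundary of −C, which is the content of the displayed formula above.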
Further, each s-minimizer is an e-minimizer and each e-minimizer is a w-minimizer (claiming this we assume C ≠ R^m). The concept of an isolated minimizer for scalar problems was introduced by Auslender [2]. For vector problems it has been extended in Ginchev [6] and Ginchev, Guerraggio, Rocca [7], and under the name of strict minimizers in Jiménez [14] and Jiménez, Novo [15]. We prefer the name isolated minimizer given originally by A. Auslender.

3 First-order conditions

In this section, following [7], we recall optimality conditions for problem (1) with C^{0,1} data in terms of the first-order Dini directional derivative. The proofs can be found in [7]. Given a C^{0,1} function Φ : R^n → R^k, we define the Dini directional derivative Φ′_u(x^0) of Φ at x^0 in direction u ∈ R^n as the set of the cluster points of (1/t)(Φ(x^0 + tu) − Φ(x^0)) as t → 0^+, that is, as the upper limit

Φ′_u(x^0) = Limsup_{t→0^+} (1/t)(Φ(x^0 + tu) − Φ(x^0)).
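For a merely Lipschitz function the difference quotients need not converge, which is exactly why the Dini derivative is taken as the set of their cluster points. The following Python sketch (ours; the oscillating sample function is a standard example, not taken from the paper) makes this visible numerically.

```python
import numpy as np

def phi(x):
    """C^{0,1} near 0: phi(x) = x * sin(ln|x|), phi(0) = 0; the derivative
    sin(ln|x|) + cos(ln|x|) is bounded away from 0, so phi is locally Lipschitz."""
    return 0.0 if x == 0 else x * np.sin(np.log(abs(x)))

# Difference quotients (phi(0 + t*u) - phi(0)) / t for u = 1 and t -> 0+.
ts = np.exp(-np.linspace(1.0, 60.0, 50000))
quotients = np.array([phi(t) / t for t in ts])   # equals sin(ln t)

# The cluster set, i.e. the Dini derivative phi'_1(0), fills the whole interval [-1, 1]:
assert quotients.min() < -0.999 and quotients.max() > 0.999
```

For a Fréchet differentiable Φ the same quotients converge, and the cluster set collapses to the singleton Φ′(x^0)u.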
If Φ is Fréchet differentiable at x^0, then the Dini derivative is a singleton, coincides with the usual directional derivative, and can be expressed in terms of the Fréchet derivative Φ′(x^0) by Φ′_u(x^0) = Φ′(x^0)u. In connection with problem (1) we deal with the Dini derivative of the function Φ : R^n → R^{m+p}, Φ(x) = (f(x), g(x)). Then we use the notation Φ′_u(x^0) = (f, g)′_u(x^0). If at least one of the derivatives f′_u(x^0) and g′_u(x^0) is a singleton, then (f, g)′_u(x^0) = f′_u(x^0) × g′_u(x^0). Let us point out that always (f, g)′_u(x^0) ⊂ f′_u(x^0) × g′_u(x^0), but in general these two sets do not coincide. The following constraint qualification appears in the Sufficient Conditions part of Theorem 1.

Q_{0,1}(x^0): If g(x^0) ∈ −K, t_k → 0^+ and (1/t_k)(g(x^0 + t_k u^0) − g(x^0)) → z^0 ∈ −K[g(x^0)], then there exist u_k → u^0 and k_0 ∈ N such that g(x^0 + t_k u_k) ∈ −K for all k > k_0.

The constraint qualification Q_{0,1}(x^0) is of Kuhn-Tucker type. In 1951 Kuhn and Tucker [17] published the classical variant for differentiable functions, and since then it has often been cited in optimization theory.

Theorem 1 (First-order conditions) Consider problem (1) with f, g being C^{0,1} functions and C and K closed convex cones. (Necessary Conditions) Let x^0 be a w-minimizer of problem (1). Then for each u ∈ S (here S = {u ∈ R^n : ‖u‖ = 1} is the unit sphere) the following condition is satisfied:

N_{0,1}: ∀ (y^0, z^0) ∈ (f, g)′_u(x^0) ∃ (ξ^0, η^0) ∈ C′ × K′ : (ξ^0, η^0) ≠ (0, 0), ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ ≥ 0.

(Sufficient Conditions) Let x^0 ∈ R^n and suppose that for each u ∈ S the following condition is satisfied:

S_{0,1}: ∀ (y^0, z^0) ∈ (f, g)′_u(x^0) ∃ (ξ^0, η^0) ∈ C′ × K′ : (ξ^0, η^0) ≠ (0, 0), ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ > 0.

Then x^0 is an i-minimizer of order one for problem (1). Conversely, if x^0 is an i-minimizer of order one for problem (1) and the constraint qualification Q_{0,1}(x^0) holds, then condition S_{0,1} is satisfied.

If g is Fréchet differentiable at x^0, then instead of the constraint qualification Q_{0,1}(x^0) we may consider the constraint qualification Q_1(x^0) given below.
Q_1(x^0): If g(x^0) ∈ −K and g′(x^0)u^0 = z^0 ∈ −K[g(x^0)], then there exist δ > 0 and a differentiable injective function ϕ : [0, δ] → −K such that ϕ(0) = g(x^0) and ϕ′(0) = g′(x^0)u^0.

In the case of a polyhedral cone K, the requirement ϕ : [0, δ] → −K in Q_1(x^0) can be replaced by ϕ : [0, δ] → −K[g(x^0)]. This condition coincides with the classical Kuhn-Tucker constraint qualification (compare with Mangasarian [22, p. 102]). The next theorem is a reformulation of Theorem 1 for C^1 problems, that is, problems with f and g being C^1 functions.

Theorem 2 Consider problem (1) with f, g being C^1 functions and C and K closed convex cones. (Necessary Conditions) Let x^0 be a w-minimizer of problem (1). Then for each u ∈ S the following condition is satisfied:

N_1: ∃ (ξ^0, η^0) ∈ (C′ × K′) \ {(0, 0)} : ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, f′(x^0)u⟩ + ⟨η^0, g′(x^0)u⟩ ≥ 0.

(Sufficient Conditions) Let x^0 ∈ R^n. Suppose that for each u ∈ S the following condition is satisfied:
S_1: ∃ (ξ^0, η^0) ∈ (C′ × K′) \ {(0, 0)} : ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, f′(x^0)u⟩ + ⟨η^0, g′(x^0)u⟩ > 0.

Then x^0 is an i-minimizer of first order for problem (1). Conversely, if x^0 is an i-minimizer of first order for problem (1) and the constraint qualification Q_1(x^0) holds, then condition S_1 is satisfied.

The pairs of vectors (ξ^0, η^0) are usually referred to as Lagrange multipliers. Here we have different Lagrange multipliers for different u ∈ S and different (y^0, z^0) ∈ (f, g)′_u(x^0). The natural question arises whether a common pair (ξ^0, η^0) can be chosen for all directions. The next example shows that the answer is negative even for C^1 problems.

Example 1 Let f : R^2 → R^2, f(x_1, x_2) = (x_1, x_1^2 + x_2^2), and g : R^2 → R^2, g(x_1, x_2) = (x_1, x_2). Define C = {y = (y_1, y_2) ∈ R^2 : y_2 = 0} and K = R^2. Then f and g are C^1 functions and the point x^0 = (0, 0) is a w-minimizer of problem (1) (in fact x^0 is also an i-minimizer of order two, but not an i-minimizer of order one). At the same time the only pair (ξ^0, η^0) ∈ C′ × K′ for which ⟨ξ^0, f′(x^0)u⟩ + ⟨η^0, g′(x^0)u⟩ ≥ 0 for all u ∈ S is ξ^0 = (0, 0) and η^0 = (0, 0).

Theorem 3 below guarantees that in the case of cones with nonempty interiors a nonzero pair (ξ^0, η^0) exists which satisfies the Necessary Conditions of Theorem 1 and which is common to all directions. This is the reason why the second-order conditions in the next section are derived under the assumption that C and K are closed convex cones with nonempty interiors.

Theorem 3 (Necessary Conditions) Consider problem (1) with f, g being C^1 functions and C and K closed convex cones with nonempty interiors. Let x^0 be a w-minimizer of problem (1). Then there exists a pair (ξ^0, η^0) ∈ (C′ × K′) \ {(0, 0)} such that ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, f′(x^0)u⟩ + ⟨η^0, g′(x^0)u⟩ = 0 for all u ∈ R^n. The latter equality can also be written as ξ^0 f′(x^0) + η^0 g′(x^0) = 0.

4 Second-order conditions

In this section we derive optimality conditions for problem (1) with C^{1,1} data in terms of the second-order Dini directional derivative.
Given a C^{1,1} function Φ : R^n → R^k, we define the second-order Dini directional derivative Φ″_u(x^0) of Φ at x^0 in direction u ∈ R^n as the set of the cluster points of (2/t^2)(Φ(x^0 + tu) − Φ(x^0) − t Φ′(x^0)u) as t → 0^+, that is, as the upper limit

Φ″_u(x^0) = Limsup_{t→0^+} (2/t^2)(Φ(x^0 + tu) − Φ(x^0) − t Φ′(x^0)u).

If Φ is twice Fréchet differentiable at x^0, then the second-order Dini derivative is a singleton and can be expressed in terms of the Hessian: Φ″_u(x^0) = Φ″(x^0)(u, u). In connection with problem (1) we deal with the second-order Dini derivative of the function Φ : R^n → R^{m+p}, Φ(x) = (f(x), g(x)). Then we use the notation Φ″_u(x^0) = (f, g)″_u(x^0). Let us point out that always (f, g)″_u(x^0) ⊂ f″_u(x^0) × g″_u(x^0), but in general these two sets do not coincide. We need the following lemma.

Lemma 1 Let Φ : R^n → R^k be a C^{1,1} function and let Φ′ be Lipschitz with constant L on the ball {x : ‖x − x^0‖ ≤ r}, where x^0 ∈ R^n and r > 0. Then, for u, v ∈ R^n and 0 < t < r / max(‖u‖, ‖v‖), we have

‖(2/t^2)(Φ(x^0 + tv) − Φ(x^0) − t Φ′(x^0)v) − (2/t^2)(Φ(x^0 + tu) − Φ(x^0) − t Φ′(x^0)u)‖
≤ L (‖u‖ + ‖v‖) ‖v − u‖.

In particular, for v = 0 we get ‖(2/t^2)(Φ(x^0 + tu) − Φ(x^0) − t Φ′(x^0)u)‖ ≤ L ‖u‖^2.

Proof. For 0 < t < r / max(‖u‖, ‖v‖) we have

(2/t^2)(Φ(x^0 + tv) − Φ(x^0) − t Φ′(x^0)v) − (2/t^2)(Φ(x^0 + tu) − Φ(x^0) − t Φ′(x^0)u)
= (2/t^2)(Φ(x^0 + tv) − Φ(x^0 + tu)) − (2/t) Φ′(x^0)(v − u)
= (2/t) ∫_0^1 (Φ′(x^0 + tu + st(v − u)) − Φ′(x^0))(v − u) ds,

whence, by the Lipschitz property of Φ′, the norm of this expression does not exceed

2L ∫_0^1 ‖u + s(v − u)‖ ds ‖v − u‖ ≤ 2L ∫_0^1 ((1 − s)‖u‖ + s‖v‖) ds ‖v − u‖ = L (‖u‖ + ‖v‖) ‖v − u‖. □

The next theorem states second-order necessary conditions in primal form, that is, in terms of directional derivatives.

Theorem 4 (Second-order necessary conditions, primal form) Consider problem (1) with f, g being C^{1,1} functions, and C and K closed convex cones. Let x^0 be a w-minimizer for (1). Then for each u ∈ R^n the following two conditions hold:

N′_p: (f′(x^0)u, g′(x^0)u) ∉ −(int C × int K[g(x^0)]),

N″_p: if (f′(x^0)u, g′(x^0)u) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])) and (y^0, z^0) ∈ (f, g)″_u(x^0), then conv({(y^0, z^0)} ∪ im(f′(x^0), g′(x^0))) ∩ (−(int C × int K[g(x^0)])) = ∅.

Proof. Condition N′_p. Assume on the contrary that

f′(x^0)u = lim_{t→0^+} (1/t)(f(x^0 + tu) − f(x^0)) ∈ −int C   (3)

and

g′(x^0)u = lim_{t→0^+} (1/t)(g(x^0 + tu) − g(x^0)) ∈ −int K[g(x^0)].   (4)

We show first that there exists δ > 0 such that x^0 + tu is feasible for 0 < t < δ. For each η ∈ Γ_{K′} there exist δ_η > 0 and a neighbourhood V_η of η such that ⟨η̃, g(x^0 + tu)⟩ ≤ 0 for η̃ ∈ V_η and 0 < t < δ_η. To show this we consider two possibilities. If η ∈ Γ_{K′[g(x^0)]}, the claim follows from

(1/t)⟨η, g(x^0 + tu)⟩ = ⟨η, (1/t)(g(x^0 + tu) − g(x^0))⟩ → ⟨η, g′(x^0)u⟩ < 0.

For η ∈ Γ_{K′} \ Γ_{K′[g(x^0)]} we have ⟨η, g(x^0)⟩ < 0, whence from the continuity of g we get ⟨η̃, g(x^0 + tu)⟩ < 0 for (η̃, t) sufficiently close to (η, 0). From the compactness of Γ_{K′} we get a finite covering Γ_{K′} ⊂ V_{η_1} ∪ … ∪ V_{η_k}. Then ⟨η, g(x^0 + tu)⟩ ≤ 0 holds for all η ∈ Γ_{K′} with 0 < t < δ := min_{1≤i≤k} δ_{η_i}, that is, g(x^0 + tu) ∈ −K. Now (3) gives f(x^0 + tu) ∈ f(x^0) − int C for all sufficiently small t, which contradicts the assumption that x^0 is a w-minimizer for (1).
Condition N″_p. Assume on the contrary that condition N″_p fails for some u ≠ 0 (for u = 0 condition N″_p is obviously satisfied). Then (f′(x^0)u, g′(x^0)u) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])) and there exists (y(u), z(u)) ∈ (f, g)″_u(x^0) such that for some λ ∈ (0, 1) and some w ∈ R^n it holds

(1 − λ) y(u) + λ f′(x^0)w ∈ −int C,  (1 − λ) z(u) + λ g′(x^0)w ∈ −int K[g(x^0)].

According to the definition of the Dini derivative there exists a sequence t_k → 0^+ such that lim y_k(u) = y(u) and lim z_k(u) = z(u), where for v ∈ R^n we put

y_k(v) = (2/t_k^2)(f(x^0 + t_k v) − f(x^0) − t_k f′(x^0)v),  z_k(v) = (2/t_k^2)(g(x^0 + t_k v) − g(x^0) − t_k g′(x^0)v).

Let v_k → u and let f′ and g′ be Lipschitz with constant L in a neighbourhood of x^0. For k large enough we have, as in Lemma 1,

‖y_k(v_k) − y_k(u)‖ ≤ L (‖u‖ + ‖v_k‖) ‖v_k − u‖,  ‖z_k(v_k) − z_k(u)‖ ≤ L (‖u‖ + ‖v_k‖) ‖v_k − u‖.

Now y_k(v_k) → y(u) and z_k(v_k) → z(u), which follows from the estimates

‖y_k(v_k) − y(u)‖ ≤ ‖y_k(v_k) − y_k(u)‖ + ‖y_k(u) − y(u)‖ ≤ L (‖u‖ + ‖v_k‖) ‖v_k − u‖ + ‖y_k(u) − y(u)‖,
‖z_k(v_k) − z(u)‖ ≤ ‖z_k(v_k) − z_k(u)‖ + ‖z_k(u) − z(u)‖ ≤ L (‖u‖ + ‖v_k‖) ‖v_k − u‖ + ‖z_k(u) − z(u)‖.

For k = 1, 2, …, let v_k be such that w = (2(1 − λ))/(t_k λ) (v_k − u), i.e. v_k = u + (t_k λ)/(2(1 − λ)) w, whence v_k → u. For every k we have

g(x^0 + t_k v_k) − g(x^0) = t_k g′(x^0)u + t_k g′(x^0)(v_k − u) + (t_k^2/2) z(u) + o(t_k^2)
= t_k g′(x^0)u + (t_k^2/(2(1 − λ))) ((1 − λ) z(u) + λ g′(x^0)w) + o(t_k^2)
∈ −K[g(x^0)] − int K[g(x^0)] + o(t_k^2) ⊂ −int K[g(x^0)],

the last inclusion being satisfied for k large enough. Repeating part of the reasoning from the proof of N′_p, we get from here that g(x^0 + t_k v_k) ∈ −K for all sufficiently large k. Passing to a subsequence, we may assume that all the points x^0 + t_k v_k are feasible. The above chain of equalities and inclusions can be repeated with g, z(u) and K[g(x^0)] replaced respectively by f, y(u) and C. We obtain in this way f(x^0 + t_k v_k) − f(x^0) ∈ −int C, which contradicts x^0 being a w-minimizer for (1). □

Now we establish second-order necessary and sufficient optimality conditions in dual form. We assume as usual that f : R^n → R^m and g : R^n → R^p are C^{1,1} functions and C ⊂ R^m, K ⊂ R^p are closed convex cones. For x^0 ∈ R^n we put

Λ(x^0) = {(ξ, η) ∈ C′ × K′ : (ξ, η) ≠ (0, 0), ⟨η, g(x^0)⟩ = 0, ξ f′(x^0) + η g′(x^0) = 0}.
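Two quantitative facts used in the proofs above can be checked numerically: for a C^2 function the second-order quotient (2/t^2)(Φ(x^0 + tu) − Φ(x^0) − t Φ′(x^0)u) recovers the Hessian form ⟨Φ″(x^0)u, u⟩, and for a genuinely C^{1,1} function the quotient obeys the bound of Lemma 1. A Python sketch (ours, not from the paper; all identifiers are hypothetical):

```python
import numpy as np

def quotient(phi, grad, x0, u, t):
    """Second-order difference quotient (2/t^2)(phi(x0+tu) - phi(x0) - t phi'(x0)u)."""
    return 2.0 / t ** 2 * (phi(x0 + t * u) - phi(x0) - t * (grad(x0) @ u))

# 1) C^2 case: Phi(x) = x1^2 + 3 x1 x2 + 5 x2^2, so the quotient equals <H u, u>.
H = np.array([[2.0, 3.0], [3.0, 10.0]])
phi2 = lambda x: x[0] ** 2 + 3 * x[0] * x[1] + 5 * x[1] ** 2
grad2 = lambda x: np.array([2 * x[0] + 3 * x[1], 3 * x[0] + 10 * x[1]])
x0, u = np.array([1.0, -2.0]), np.array([0.6, 0.8])
for t in (1e-2, 1e-3):
    assert abs(quotient(phi2, grad2, x0, u, t) - u @ H @ u) < 1e-6

# 2) C^{1,1} case: Phi(x) = 0.5 max(x1,0)^2 + 0.5 x2^2 has the gradient
# (max(x1,0), x2), which is Lipschitz with L = 1 but not differentiable at x1 = 0.
phi1 = lambda x: 0.5 * max(x[0], 0.0) ** 2 + 0.5 * x[1] ** 2
grad1 = lambda x: np.array([max(x[0], 0.0), x[1]])
L = 1.0
y0 = np.array([-0.1, 0.2])            # the kink x1 = 0 lies inside the probed ball
rng = np.random.default_rng(1)
for _ in range(500):
    a, b = rng.uniform(-1, 1, 2), rng.uniform(-1, 1, 2)
    t = rng.uniform(0.05, 0.5)
    bound = L * (np.linalg.norm(a) + np.linalg.norm(b)) * np.linalg.norm(b - a)
    assert abs(quotient(phi1, grad1, y0, b, t) - quotient(phi1, grad1, y0, a, t)) <= bound + 1e-9
```

The second loop probes random direction pairs through the kink, where no Hessian exists, and the Lipschitz bound of Lemma 1 still holds.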
Theorem 5 (Second-order conditions, dual form) Consider problem (1) with f and g being C^{1,1} functions, and C and K closed convex cones with nonempty interiors. (Necessary Conditions) Let x^0 be a w-minimizer for (1). Then for each u ∈ R^n the following two conditions hold:

N′_p: (f′(x^0)u, g′(x^0)u) ∉ −(int C × int K[g(x^0)]),

N″_d: if (f′(x^0)u, g′(x^0)u) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])), then ∀ (y^0, z^0) ∈ (f, g)″_u(x^0) ∃ (ξ^0, η^0) ∈ Λ(x^0) : ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ ≥ 0.

(Sufficient Conditions) Let x^0 be a feasible point for problem (1). Suppose that for each u ∈ R^n \ {0} one of the following two conditions is satisfied:

S′_p: (f′(x^0)u, g′(x^0)u) ∉ −(C × K[g(x^0)]),

S″_d: (f′(x^0)u, g′(x^0)u) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])) and ∀ (y^0, z^0) ∈ (f, g)″_u(x^0) ∃ (ξ^0, η^0) ∈ Λ(x^0) : ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ > 0.

Then x^0 is an i-minimizer of order two for problem (1).

Proof. Necessary Conditions. The necessity of condition N′_p is proved in Theorem 4. To complete the proof we show that condition N″_p implies N″_d. Thus, we assume that the convex sets F₊ = conv({(y^0, z^0)} ∪ im(f′(x^0), g′(x^0))) and F₋ = −(int C × int K[g(x^0)]) do not intersect. Since K ⊂ K[g(x^0)] and both C and K have nonempty interiors, both F₊ and F₋ are nonempty. According to the separation theorem there exist a pair (ξ^0, η^0) ≠ (0, 0) and a real number α such that

⟨ξ^0, y⟩ + ⟨η^0, z⟩ ≤ α for all (y, z) ∈ F₋,  ⟨ξ^0, y⟩ + ⟨η^0, z⟩ ≥ α for all (y, z) ∈ F₊.   (5)

Since F₋ is a cone and F₊ contains the cone im(f′(x^0), g′(x^0)), we get α = 0. Since, moreover, im(f′(x^0), g′(x^0)) is a linear space, we see that ⟨ξ^0, y⟩ + ⟨η^0, z⟩ = 0 for all (y, z) ∈ im(f′(x^0), g′(x^0)). The first line in (5) now gives (ξ^0, η^0) ∈ C′ × K′[g(x^0)], that is, (ξ^0, η^0) ∈ Λ(x^0). With regard to (y^0, z^0) ∈ F₊, the second line gives ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ ≥ 0.

Sufficient Conditions. We prove that if x^0 is not an i-minimizer of order two for (1), then there exists u^0 ∈ R^n, ‖u^0‖ = 1, for which neither of the conditions S′_p and S″_d is satisfied. Choose a monotone decreasing sequence ε_k → 0^+.
Since x^0 is not an i-minimizer of order two, there exist sequences t_k → 0^+ and u_k ∈ R^n, ‖u_k‖ = 1, such that g(x^0 + t_k u_k) ∈ −K and

D(f(x^0 + t_k u_k) − f(x^0), −C) = max_{ξ ∈ Γ_{C′}} ⟨ξ, f(x^0 + t_k u_k) − f(x^0)⟩ < ε_k t_k^2.   (6)

Passing to a subsequence, we may assume u_k → u^0. We may also assume y_{0,k} → y^0 and z_{0,k} → z^0, where we have put

y_{0,k} = (2/t_k^2)(f(x^0 + t_k u^0) − f(x^0) − t_k f′(x^0)u^0),  y_k = (2/t_k^2)(f(x^0 + t_k u_k) − f(x^0) − t_k f′(x^0)u_k),
z_{0,k} = (2/t_k^2)(g(x^0 + t_k u^0) − g(x^0) − t_k g′(x^0)u^0),  z_k = (2/t_k^2)(g(x^0 + t_k u_k) − g(x^0) − t_k g′(x^0)u_k).
The possibility to choose convergent subsequences y_{0,k} → y^0, z_{0,k} → z^0 follows from the boundedness of the difference quotients proved in Lemma 1. Now we have (y^0, z^0) ∈ (f, g)″_{u^0}(x^0). We may also assume that 0 < t_k < r and that both f′ and g′ are Lipschitz with constant L on {x : ‖x − x^0‖ ≤ r}. Now z_k → z^0, which follows from the estimates obtained on the basis of Lemma 1:

‖z_k − z^0‖ ≤ ‖z_k − z_{0,k}‖ + ‖z_{0,k} − z^0‖ ≤ L (‖u_k‖ + ‖u^0‖) ‖u_k − u^0‖ + ‖z_{0,k} − z^0‖.

Similar estimates for y_k show that y_k → y^0. We prove now that S′_p is not satisfied at u^0. For this purpose we must prove that f′(x^0)u^0 ∈ −C and g′(x^0)u^0 ∈ −K[g(x^0)]. Let ε > 0. We claim that there exists k_0 such that for all ξ ∈ Γ_{C′} and all k > k_0 the following inequalities hold:

(1/t_k) ⟨ξ, f(x^0 + t_k u_k) − f(x^0)⟩ < (1/3) ε,   (7)

⟨ξ, f′(x^0)u_k − (1/t_k)(f(x^0 + t_k u_k) − f(x^0))⟩ < (1/3) ε,   (8)

⟨ξ, f′(x^0)(u^0 − u_k)⟩ < (1/3) ε.   (9)

Inequality (7) follows from (6). Inequality (8) follows from the Fréchet differentiability of f:

⟨ξ, f′(x^0)u_k − (1/t_k)(f(x^0 + t_k u_k) − f(x^0))⟩ ≤ ‖(1/t_k)(f(x^0 + t_k u_k) − f(x^0)) − f′(x^0)u_k‖ < (1/3) ε,

which is true for all sufficiently small t_k. Inequality (9) follows from ⟨ξ, f′(x^0)(u^0 − u_k)⟩ ≤ ‖f′(x^0)(u^0 − u_k)‖ < (1/3) ε, which is true for ‖u_k − u^0‖ small enough. Now we see that for arbitrary ξ ∈ Γ_{C′} and k > k_0 we have

⟨ξ, f′(x^0)u^0⟩ = (1/t_k) ⟨ξ, f(x^0 + t_k u_k) − f(x^0)⟩ + ⟨ξ, f′(x^0)u_k − (1/t_k)(f(x^0 + t_k u_k) − f(x^0))⟩ + ⟨ξ, f′(x^0)(u^0 − u_k)⟩ < (1/3) ε + (1/3) ε + (1/3) ε = ε,

and hence D(f′(x^0)u^0, −C) = max_{ξ ∈ Γ_{C′}} ⟨ξ, f′(x^0)u^0⟩ < ε. Since ε > 0 is arbitrary, D(f′(x^0)u^0, −C) ≤ 0, whence f′(x^0)u^0 ∈ −C. Similar estimates can be repeated with f and ξ ∈ Γ_{C′} replaced respectively by g and η ∈ Γ_{K′[g(x^0)]}, whence we get g′(x^0)u^0 ∈ −K[g(x^0)]. The only difference occurs with estimate (7). There we have

(1/t_k) ⟨η, g(x^0 + t_k u_k) − g(x^0)⟩ = (1/t_k) ⟨η, g(x^0 + t_k u_k)⟩ ≤ 0,

since ⟨η, g(x^0)⟩ = 0 and g(x^0 + t_k u_k) ∈ −K ⊂ −K[g(x^0)].
Now we prove that S″_d is not satisfied at u^0. We assume that (f′(x^0)u^0, g′(x^0)u^0) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])), since otherwise the first assertion in condition S″_d would not be satisfied. It is obvious that to complete the proof we must show that there exists (ξ^0, η^0) ∈ Λ(x^0) with ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ ≤ 0, and it is enough to show this inequality for (ξ^0, η^0) ∈ Λ(x^0) such that max(‖ξ^0‖, ‖η^0‖) = 1. Then

⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ = lim_k (⟨ξ^0, y_k⟩ + ⟨η^0, z_k⟩)
= lim_k ((2/t_k^2) ⟨ξ^0, f(x^0 + t_k u_k) − f(x^0)⟩ + (2/t_k^2) ⟨η^0, g(x^0 + t_k u_k) − g(x^0)⟩ − (2/t_k)(⟨ξ^0, f′(x^0)u_k⟩ + ⟨η^0, g′(x^0)u_k⟩))
≤ lim sup_k (2/t_k^2) D(f(x^0 + t_k u_k) − f(x^0), −C) + lim sup_k (2/t_k^2) ⟨η^0, g(x^0 + t_k u_k)⟩ ≤ lim sup_k (2/t_k^2) ε_k t_k^2 = 0,

where we have used ⟨ξ^0, f′(x^0)u_k⟩ + ⟨η^0, g′(x^0)u_k⟩ = 0 and ⟨η^0, g(x^0)⟩ = 0 from the definition of Λ(x^0), together with ⟨η^0, g(x^0 + t_k u_k)⟩ ≤ 0 from feasibility. □

In fact, in Theorem 5 only the conditions N″_d and S″_d are given in dual form. For convenience of the proof, the conditions N′_p and S′_p are given in the same primal form as they appear in Theorem 4. However, these conditions can be represented in dual form too. For instance, condition N′_p can be written in the equivalent dual form

∃ (ξ, η) ∈ (C′ × K′[g(x^0)]) \ {(0, 0)} : ⟨ξ, f′(x^0)u⟩ + ⟨η, g′(x^0)u⟩ ≥ 0.

Theorems 3.1 and 4.2 in Liu, Neittaanmäki, Křížek [20] are of the same type as Theorem 5. The latter is, however, more general in the following respects. Theorem 5, as opposed to [20], concerns arbitrary and not only polyhedral cones C. Moreover, in Theorem 5 the conclusion of the sufficient conditions part is that x^0 is an i-minimizer of order two, while in [20] the weaker conclusion is proved that the reference point is only an e-minimizer.

5 Reversal of the sufficient conditions

The isolated minimizers, being a more qualified notion of efficiency, should be considered more essential solutions of the vector optimization problem than the e-minimizers. As a result of using i-minimizers, we have seen in Theorem 1 that under suitable constraint qualifications the sufficient conditions admit a reversal. Now we discuss the similar possibility of reversing the sufficient conditions part of Theorem 5 under suitable second-order constraint qualifications of Kuhn-Tucker type.
Unfortunately, the second-order case is more complicated than the first-order one. The proposed second-order constraint qualification, called Q_{1,1}(x^0), looks more complicated than the first-order ones applied in Theorem 1. In spite of this, we succeeded in proving that the conditions from the sufficient part of Theorem 5 are satisfied under the stronger hypothesis that x^0 is an i-minimizer of order two for the constrained problem

min_Ĉ f(x),  g(x) ∈ −K,   (10)

where Ĉ = conv(C ∪ im f′(x^0)).
Obviously, each i-minimizer of order two for (10) is also an i-minimizer of order two for (1). It remains an open question whether under the hypothesis that x^0 is an i-minimizer of order two for (1) the conclusion of Theorem 6 remains true. In the sequel we consider the following constraint qualification.

Q_{1,1}(x^0): If the conditions

1°. g(x^0) ∈ −K,
2°. g′(x^0)u^0 ∈ −K[g(x^0)] \ (−int K[g(x^0)]),
3°. (2/t_k^2)(g(x^0 + t_k u^0) − g(x^0) − t_k g′(x^0)u^0) → z^0,
4°. for all η ∈ K′[g(x^0)] such that ⟨η, z⟩ = 0 for all z ∈ im g′(x^0) it holds ⟨η, z^0⟩ ≤ 0,

are satisfied, then there exist u_k → u^0 and k_0 ∈ N such that g(x^0 + t_k u_k) ∈ −K for all k > k_0.

Theorem 6 (Reversal of the second-order sufficient conditions) Let x^0 be an i-minimizer of order two for the constrained problem (10) and suppose that the constraint qualification Q_{1,1}(x^0) holds. Then one of the conditions S′_p or S″_d from Theorem 5 is satisfied.

Proof. Let x^0 be an i-minimizer of order two for problem (10), which means that g(x^0) ∈ −K and there exist r > 0 and A > 0 such that g(x) ∈ −K and ‖x − x^0‖ ≤ r imply

D(f(x) − f(x^0), −Ĉ) = max_{ξ ∈ Γ_{Ĉ′}} ⟨ξ, f(x) − f(x^0)⟩ ≥ A ‖x − x^0‖^2.   (11)

The point x^0, being an i-minimizer of order two for problem (10), is also a w-minimizer for (10) and hence also for (1). Therefore it satisfies condition N′_p. Now it becomes obvious that for u = u^0 one and only one of the first-order conditions in S′_p and in the first part of S″_d is satisfied. Suppose that S′_p is not satisfied. Then the first part of condition S″_d holds, that is, (f′(x^0)u^0, g′(x^0)u^0) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])). We prove that also the second part of condition S″_d holds. One of the following two cases may arise:

1°. There exists η^0 ∈ K′[g(x^0)] such that ⟨η^0, z⟩ = 0 for all z ∈ im g′(x^0) and ⟨η^0, z^0⟩ > 0. We put ξ^0 = 0. Then obviously (ξ^0, η^0) ∈ C′ × K′[g(x^0)] and ξ^0 f′(x^0) + η^0 g′(x^0) = 0. Thus (ξ^0, η^0) ∈ Λ(x^0) and ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ > 0. Therefore condition S″_d is satisfied.

2°. For all η ∈ K′[g(x^0)] such that ⟨η, z⟩ = 0 for all z ∈ im g′(x^0), it holds ⟨η, z^0⟩ ≤ 0. This condition coincides with condition 4° in the constraint qualification Q_{1,1}(x^0).
Now we see that all the conditions in the constraint qualification Q_{1,1}(x^0) are satisfied. Therefore there exist u_k → u^0 and a positive integer k_0 such that g(x^0 + t_k u_k) ∈ −K for all k > k_0. Passing to a subsequence, we may assume that this inclusion holds for all k. From (11) it follows that there exists ξ_k ∈ Γ_{Ĉ′} such that

D(f(x^0 + t_k u_k) − f(x^0), −Ĉ) = ⟨ξ_k, f(x^0 + t_k u_k) − f(x^0)⟩ ≥ A t_k^2 ‖u_k‖^2.

Passing to a subsequence, we may assume that ξ_k → ξ^0 ∈ Γ_{Ĉ′}. Let us underline that each ξ ∈ Ĉ′ satisfies ξ f′(x^0) = 0. This follows from ⟨ξ, z⟩ ≥ 0 for all z ∈ im f′(x^0) and the fact that im f′(x^0) is a linear space. Now we have

⟨ξ_k, y_k⟩ = (2/t_k^2) ⟨ξ_k, f(x^0 + t_k u_k) − f(x^0) − t_k f′(x^0)u_k⟩ = (2/t_k^2) ⟨ξ_k, f(x^0 + t_k u_k) − f(x^0)⟩ ≥ (2/t_k^2) A t_k^2 ‖u_k‖^2 = 2A ‖u_k‖^2.
Passing to the limit gives ⟨ξ^0, y^0⟩ ≥ 2A ‖u^0‖^2 = 2A. Putting η^0 = 0, we obviously have (ξ^0, η^0) ∈ Λ(x^0) and ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ > 0. □

References

[1] B. Aghezzaf: Second-order necessary conditions of the Kuhn-Tucker type in multiobjective programming problems. Control Cybernet., no. 2.
[2] A. Auslender: Stability in mathematical programming with nondifferentiable data. SIAM J. Control Optim.
[3] J.-P. Aubin, H. Frankowska: Set-valued analysis. Birkhäuser, Boston.
[4] S. Bolintenéanu, M. El Maghri: Second-order efficiency conditions and sensitivity of efficient points. J. Optim. Theory Appl. 98.
[5] P. G. Georgiev, N. Zlateva: Second-order subdifferentials of C^{1,1} functions and optimality conditions. Set-Valued Anal., no. 2.
[6] I. Ginchev: Higher order optimality conditions in nonsmooth vector optimization. In: A. Cambini, B. K. Dass, L. Martein (Eds.), Generalized Convexity, Generalized Monotonicity, Optimality Conditions and Duality in Scalar and Vector Optimization, J. Stat. Manag. Syst., no. 1-3.
[7] I. Ginchev, A. Guerraggio, M. Rocca: First-order conditions for C^{0,1} constrained vector optimization. In: F. Giannessi, A. Maugeri (Eds.), Variational Analysis and Applications, Proc. Erice, June 20 - July 1, 2003, Kluwer Acad. Publ., Dordrecht, 2004, to appear.
[8] I. Ginchev, A. Hoffmann: Approximation of set-valued functions by single-valued ones. Discuss. Math. Differ. Incl. Control Optim.
[9] A. Guerraggio, D. T. Luc: Optimality conditions for C^{1,1} vector optimization problems. J. Optim. Theory Appl., no. 3.
[10] A. Guerraggio, D. T. Luc: Optimality conditions for C^{1,1} constrained multiobjective problems. J. Optim. Theory Appl., no. 1.
[11] J.-B. Hiriart-Urruty: New concepts in nondifferentiable programming. Analyse non convexe, Bull. Soc. Math. France.
[12] J.-B. Hiriart-Urruty: Tangent cones, generalized gradients and mathematical programming in Banach spaces. Math. Oper. Res.
[13] J.-B. Hiriart-Urruty, J.-J. Strodiot, V.
Hien Nguyen: Generalized Hessian matrix and second order optimality conditions for problems with C^{1,1} data. Appl. Math. Optim.
[14] B. Jiménez: Strict efficiency in vector optimization. J. Math. Anal. Appl.
[15] B. Jiménez, V. Novo: First and second order conditions for strict minimality in nonsmooth vector optimization. J. Math. Anal. Appl.
[16] D. Klatte, K. Tammer: On the second order sufficient conditions to perturbed C^{1,1} optimization problems. Optimization.
More informationNecessary Optimality Condition for Vector Optimization Problems in Infinitedimensional Linear Spaces Xuanwei Zhou
International Conference on Education, Management and Computing Technology (ICEMCT 25) Necessary Optimality Condition for Vector Optimization Problems in Infinitedimensional Linear Spaces Xuanwei Zhou
More informationA NEW LOOK AT CONVEX ANALYSIS AND OPTIMIZATION
1 A NEW LOOK AT CONVEX ANALYSIS AND OPTIMIZATION Dimitri Bertsekas M.I.T. FEBRUARY 2003 2 OUTLINE Convexity issues in optimization Historical remarks Our treatment of the subject Three unifying lines of
More information1 if 1 x 0 1 if 0 x 1
Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or
More informationCourse 221: Analysis Academic year , First Semester
Course 221: Analysis Academic year 200708, First Semester David R. Wilkins Copyright c David R. Wilkins 1989 2007 Contents 1 Basic Theorems of Real Analysis 1 1.1 The Least Upper Bound Principle................
More informationDate: April 12, 2001. Contents
2 Lagrange Multipliers Date: April 12, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 12 2.3. Informative Lagrange Multipliers...........
More informationOn generalized gradients and optimization
On generalized gradients and optimization Erik J. Balder 1 Introduction There exists a calculus for general nondifferentiable functions that englobes a large part of the familiar subdifferential calculus
More informationConvex analysis and profit/cost/support functions
CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences Convex analysis and profit/cost/support functions KC Border October 2004 Revised January 2009 Let A be a subset of R m
More informationBANACH AND HILBERT SPACE REVIEW
BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but
More information6. Metric spaces. In this section we review the basic facts about metric spaces. d : X X [0, )
6. Metric spaces In this section we review the basic facts about metric spaces. Definitions. A metric on a nonempty set X is a map with the following properties: d : X X [0, ) (i) If x, y X are points
More informationSeparation Properties for Locally Convex Cones
Journal of Convex Analysis Volume 9 (2002), No. 1, 301 307 Separation Properties for Locally Convex Cones Walter Roth Department of Mathematics, Universiti Brunei Darussalam, Gadong BE1410, Brunei Darussalam
More informationNo: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics
No: 10 04 Bilkent University Monotonic Extension Farhad Husseinov Discussion Papers Department of Economics The Discussion Papers of the Department of Economics are intended to make the initial results
More informationMetric Spaces. Chapter 1
Chapter 1 Metric Spaces Many of the arguments you have seen in several variable calculus are almost identical to the corresponding arguments in one variable calculus, especially arguments concerning convergence
More informationAbsolute Value Programming
Computational Optimization and Aplications,, 1 11 (2006) c 2006 Springer Verlag, Boston. Manufactured in The Netherlands. Absolute Value Programming O. L. MANGASARIAN olvi@cs.wisc.edu Computer Sciences
More informationCost Minimization and the Cost Function
Cost Minimization and the Cost Function Juan Manuel Puerta October 5, 2009 So far we focused on profit maximization, we could look at a different problem, that is the cost minimization problem. This is
More informationSequences and Convergence in Metric Spaces
Sequences and Convergence in Metric Spaces Definition: A sequence in a set X (a sequence of elements of X) is a function s : N X. We usually denote s(n) by s n, called the nth term of s, and write {s
More informationMath 317 HW #7 Solutions
Math 17 HW #7 Solutions 1. Exercise..5. Decide which of the following sets are compact. For those that are not compact, show how Definition..1 breaks down. In other words, give an example of a sequence
More informationBy W.E. Diewert. July, Linear programming problems are important for a number of reasons:
APPLIED ECONOMICS By W.E. Diewert. July, 3. Chapter : Linear Programming. Introduction The theory of linear programming provides a good introduction to the study of constrained maximization (and minimization)
More informationProperties of BMO functions whose reciprocals are also BMO
Properties of BMO functions whose reciprocals are also BMO R. L. Johnson and C. J. Neugebauer The main result says that a nonnegative BMOfunction w, whose reciprocal is also in BMO, belongs to p> A p,and
More informationConvex Programming Tools for Disjunctive Programs
Convex Programming Tools for Disjunctive Programs João Soares, Departamento de Matemática, Universidade de Coimbra, Portugal Abstract A Disjunctive Program (DP) is a mathematical program whose feasible
More informationA Game Theoretic Approach for Solving Multiobjective Linear Programming Problems
LIBERTAS MATHEMATICA, vol XXX (2010) A Game Theoretic Approach for Solving Multiobjective Linear Programming Problems Irinel DRAGAN Abstract. The nucleolus is one of the important concepts of solution
More informationQuasi Contraction and Fixed Points
Available online at www.ispacs.com/jnaa Volume 2012, Year 2012 Article ID jnaa00168, 6 Pages doi:10.5899/2012/jnaa00168 Research Article Quasi Contraction and Fixed Points Mehdi Roohi 1, Mohsen Alimohammady
More informationSome New Classes of Generalized Concave Vector Valued Functions. Riccardo CAMBINI
UNIVERSITÀ DEGLI STUDI DI PISA DIPARTIMENTO DI STATISTICA E MATEMATICA APPLICATA ALL ECONOMIA Some New Classes of Generalized Concave Vector Valued Functions Riccardo CAMBINI This is a preliminary version
More informationTECHNICAL NOTE. Proof of a Conjecture by Deutsch, Li, and Swetits on Duality of Optimization Problems1
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 102, No. 3, pp. 697703, SEPTEMBER 1999 TECHNICAL NOTE Proof of a Conjecture by Deutsch, Li, and Swetits on Duality of Optimization Problems1 H. H.
More informationCONSTANTSIGN SOLUTIONS FOR A NONLINEAR NEUMANN PROBLEM INVOLVING THE DISCRETE plaplacian. Pasquale Candito and Giuseppina D Aguí
Opuscula Math. 34 no. 4 2014 683 690 http://dx.doi.org/10.7494/opmath.2014.34.4.683 Opuscula Mathematica CONSTANTSIGN SOLUTIONS FOR A NONLINEAR NEUMANN PROBLEM INVOLVING THE DISCRETE plaplacian Pasquale
More informationThe Italian University reform impact on management systems of each individual University
Anna Arcari The Italian University reform impact on management systems of each individual University 2004/16 UNIVERSITÀ DELL'INSUBRIA FACOLTÀ DI ECONOMIA http://eco.uninsubria.it In questi quaderni vengono
More informationStationarity Results for Generating Set Search for Linearly Constrained Optimization
SANDIA REPORT SAND20038550 Unlimited Release Printed October 2003 Stationarity Results for Generating Set Search for Linearly Constrained Optimization Tamara G. Kolda, Robert Michael Lewis, and Virginia
More information24. The Branch and Bound Method
24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NPcomplete. Then one can conclude according to the present state of science that no
More informationME128 ComputerAided Mechanical Design Course Notes Introduction to Design Optimization
ME128 Computerided Mechanical Design Course Notes Introduction to Design Optimization 2. OPTIMIZTION Design optimization is rooted as a basic problem for design engineers. It is, of course, a rare situation
More informationNotes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand
Notes V General Equilibrium: Positive Theory In this lecture we go on considering a general equilibrium model of a private ownership economy. In contrast to the Notes IV, we focus on positive issues such
More informationALMOST COMMON PRIORS 1. INTRODUCTION
ALMOST COMMON PRIORS ZIV HELLMAN ABSTRACT. What happens when priors are not common? We introduce a measure for how far a type space is from having a common prior, which we term prior distance. If a type
More informationDegree Hypergroupoids Associated with Hypergraphs
Filomat 8:1 (014), 119 19 DOI 10.98/FIL1401119F Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Degree Hypergroupoids Associated
More informationMetric Spaces. Chapter 7. 7.1. Metrics
Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some
More informationG. Gavana, G. Guggiola and A. Marenzi Tax Impact of International Financial Reporting Standards: Evidence from a Sample of Italian Companies 2010/10
G. Gavana, G. Guggiola and A. Marenzi Tax Impact of International Financial Reporting Standards: Evidence from a Sample of Italian Companies 2010/10 UNIVERSITÀ DELL'INSUBRIA FACOLTÀ DI ECONOMIA http://eco.uninsubria.it
More informationFuzzy Differential Systems and the New Concept of Stability
Nonlinear Dynamics and Systems Theory, 1(2) (2001) 111 119 Fuzzy Differential Systems and the New Concept of Stability V. Lakshmikantham 1 and S. Leela 2 1 Department of Mathematical Sciences, Florida
More informationNotes on weak convergence (MAT Spring 2006)
Notes on weak convergence (MAT4380  Spring 2006) Kenneth H. Karlsen (CMA) February 2, 2006 1 Weak convergence In what follows, let denote an open, bounded, smooth subset of R N with N 2. We assume 1 p
More informationCHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e.
CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. This chapter contains the beginnings of the most important, and probably the most subtle, notion in mathematical analysis, i.e.,
More informationAdaptive Online Gradient Descent
Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650
More information10. Proximal point method
L. Vandenberghe EE236C Spring 201314) 10. Proximal point method proximal point method augmented Lagrangian method MoreauYosida smoothing 101 Proximal point method a conceptual algorithm for minimizing
More informationNumerisches Rechnen. (für Informatiker) M. Grepl J. Berger & J.T. Frings. Institut für Geometrie und Praktische Mathematik RWTH Aachen
(für Informatiker) M. Grepl J. Berger & J.T. Frings Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2010/11 Problem Statement Unconstrained Optimality Conditions Constrained
More informationMA651 Topology. Lecture 6. Separation Axioms.
MA651 Topology. Lecture 6. Separation Axioms. This text is based on the following books: Fundamental concepts of topology by Peter O Neil Elements of Mathematics: General Topology by Nicolas Bourbaki Counterexamples
More informationAdvanced results on variational inequality formulation in oligopolistic market equilibrium problem
Filomat 26:5 (202), 935 947 DOI 0.2298/FIL205935B Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Advanced results on variational
More informationMath 5311 Gateaux differentials and Frechet derivatives
Math 5311 Gateaux differentials and Frechet derivatives Kevin Long January 26, 2009 1 Differentiation in vector spaces Thus far, we ve developed the theory of minimization without reference to derivatives.
More informationWeak topologies. David Lecomte. May 23, 2006
Weak topologies David Lecomte May 3, 006 1 Preliminaries from general topology In this section, we are given a set X, a collection of topological spaces (Y i ) i I and a collection of maps (f i ) i I such
More informationA QUICK GUIDE TO THE FORMULAS OF MULTIVARIABLE CALCULUS
A QUIK GUIDE TO THE FOMULAS OF MULTIVAIABLE ALULUS ontents 1. Analytic Geometry 2 1.1. Definition of a Vector 2 1.2. Scalar Product 2 1.3. Properties of the Scalar Product 2 1.4. Length and Unit Vectors
More informationMathematics Course 111: Algebra I Part IV: Vector Spaces
Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 19967 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are
More information1 Introduction, Notation and Statement of the main results
XX Congreso de Ecuaciones Diferenciales y Aplicaciones X Congreso de Matemática Aplicada Sevilla, 2428 septiembre 2007 (pp. 1 6) Skewproduct maps with base having closed set of periodic points. Juan
More information1 Norms and Vector Spaces
008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)
More informationMixedÀ¾ Ð½Optimization Problem via Lagrange Multiplier Theory
MixedÀ¾ Ð½Optimization Problem via Lagrange Multiplier Theory Jun WuÝ, Sheng ChenÞand Jian ChuÝ ÝNational Laboratory of Industrial Control Technology Institute of Advanced Process Control Zhejiang University,
More informationINTEGRAL OPERATORS ON THE PRODUCT OF C(K) SPACES
INTEGRAL OPERATORS ON THE PRODUCT OF C(K) SPACES FERNANDO BOMBAL AND IGNACIO VILLANUEVA Abstract. We study and characterize the integral multilinear operators on a product of C(K) spaces in terms of the
More informationThe role of credit in a Keynesian monetary economy
Giancarlo Bertocco The role of credit in a Keynesian monetary economy 2002/39 UNIVERSITÀ DELL'INSUBRIA FACOLTÀ DI ECONOMIA http://eco.uninsubria.it In questi quaderni vengono pubblicati i lavori dei docenti
More informationSubsets of Euclidean domains possessing a unique division algorithm
Subsets of Euclidean domains possessing a unique division algorithm Andrew D. Lewis 2009/03/16 Abstract Subsets of a Euclidean domain are characterised with the following objectives: (1) ensuring uniqueness
More information1 Polyhedra and Linear Programming
CS 598CSC: Combinatorial Optimization Lecture date: January 21, 2009 Instructor: Chandra Chekuri Scribe: Sungjin Im 1 Polyhedra and Linear Programming In this lecture, we will cover some basic material
More informationRieszFredhölm Theory
RieszFredhölm Theory T. Muthukumar tmk@iitk.ac.in Contents 1 Introduction 1 2 Integral Operators 1 3 Compact Operators 7 4 Fredhölm Alternative 14 Appendices 18 A AscoliArzelá Result 18 B Normed Spaces
More informationError Bound for Classes of Polynomial Systems and its Applications: A Variational Analysis Approach
Outline Error Bound for Classes of Polynomial Systems and its Applications: A Variational Analysis Approach The University of New South Wales SPOM 2013 Joint work with V. Jeyakumar, B.S. Mordukhovich and
More informationMathematics for Economics (Part I) Note 5: Convex Sets and Concave Functions
Natalia Lazzati Mathematics for Economics (Part I) Note 5: Convex Sets and Concave Functions Note 5 is based on Madden (1986, Ch. 1, 2, 4 and 7) and Simon and Blume (1994, Ch. 13 and 21). Concave functions
More informationChapter 15 Introduction to Linear Programming
Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 WeiTa Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of
More informationANALYTICAL MATHEMATICS FOR APPLICATIONS 2016 LECTURE NOTES Series
ANALYTICAL MATHEMATICS FOR APPLICATIONS 206 LECTURE NOTES 8 ISSUED 24 APRIL 206 A series is a formal sum. Series a + a 2 + a 3 + + + where { } is a sequence of real numbers. Here formal means that we don
More informationThe HenstockKurzweilStieltjes type integral for real functions on a fractal subset of the real line
The HenstockKurzweilStieltjes type integral for real functions on a fractal subset of the real line D. Bongiorno, G. Corrao Dipartimento di Ingegneria lettrica, lettronica e delle Telecomunicazioni,
More informationTHE BANACH CONTRACTION PRINCIPLE. Contents
THE BANACH CONTRACTION PRINCIPLE ALEX PONIECKI Abstract. This paper will study contractions of metric spaces. To do this, we will mainly use tools from topology. We will give some examples of contractions,
More informationmax cx s.t. Ax c where the matrix A, cost vector c and right hand side b are given and x is a vector of variables. For this example we have x
Linear Programming Linear programming refers to problems stated as maximization or minimization of a linear function subject to constraints that are linear equalities and inequalities. Although the study
More information1 Local Brouwer degree
1 Local Brouwer degree Let D R n be an open set and f : S R n be continuous, D S and c R n. Suppose that the set f 1 (c) D is compact. (1) Then the local Brouwer degree of f at c in the set D is defined.
More informationIntroduction and message of the book
1 Introduction and message of the book 1.1 Why polynomial optimization? Consider the global optimization problem: P : for some feasible set f := inf x { f(x) : x K } (1.1) K := { x R n : g j (x) 0, j =
More informationON LIMIT LAWS FOR CENTRAL ORDER STATISTICS UNDER POWER NORMALIZATION. E. I. Pancheva, A. GacovskaBarandovska
Pliska Stud. Math. Bulgar. 22 (2015), STUDIA MATHEMATICA BULGARICA ON LIMIT LAWS FOR CENTRAL ORDER STATISTICS UNDER POWER NORMALIZATION E. I. Pancheva, A. GacovskaBarandovska Smirnov (1949) derived four
More informationMath 317 HW #5 Solutions
Math 317 HW #5 Solutions 1. Exercise 2.4.2. (a) Prove that the sequence defined by x 1 = 3 and converges. x n+1 = 1 4 x n Proof. I intend to use the Monotone Convergence Theorem, so my goal is to show
More informationAbout the inverse football pool problem for 9 games 1
Seventh International Workshop on Optimal Codes and Related Topics September 61, 013, Albena, Bulgaria pp. 15133 About the inverse football pool problem for 9 games 1 Emil Kolev Tsonka Baicheva Institute
More informationFACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z
FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z DANIEL BIRMAJER, JUAN B GIL, AND MICHAEL WEINER Abstract We consider polynomials with integer coefficients and discuss their factorization
More informationBasic Concepts of Point Set Topology Notes for OU course Math 4853 Spring 2011
Basic Concepts of Point Set Topology Notes for OU course Math 4853 Spring 2011 A. Miller 1. Introduction. The definitions of metric space and topological space were developed in the early 1900 s, largely
More informationWalrasian Demand. u(x) where B(p, w) = {x R n + : p x w}.
Walrasian Demand Econ 2100 Fall 2015 Lecture 5, September 16 Outline 1 Walrasian Demand 2 Properties of Walrasian Demand 3 An Optimization Recipe 4 First and Second Order Conditions Definition Walrasian
More informationAn optimal transportation problem with import/export taxes on the boundary
An optimal transportation problem with import/export taxes on the boundary Julián Toledo Workshop International sur les Mathématiques et l Environnement Essaouira, November 2012..................... Joint
More informationc 2006 Society for Industrial and Applied Mathematics
SIAM J. OPTIM. Vol. 17, No. 4, pp. 943 968 c 2006 Society for Industrial and Applied Mathematics STATIONARITY RESULTS FOR GENERATING SET SEARCH FOR LINEARLY CONSTRAINED OPTIMIZATION TAMARA G. KOLDA, ROBERT
More informationON COMPLETELY CONTINUOUS INTEGRATION OPERATORS OF A VECTOR MEASURE. 1. Introduction
ON COMPLETELY CONTINUOUS INTEGRATION OPERATORS OF A VECTOR MEASURE J.M. CALABUIG, J. RODRÍGUEZ, AND E.A. SÁNCHEZPÉREZ Abstract. Let m be a vector measure taking values in a Banach space X. We prove that
More informationSECOND DERIVATIVE TEST FOR CONSTRAINED EXTREMA
SECOND DERIVATIVE TEST FOR CONSTRAINED EXTREMA This handout presents the second derivative test for a local extrema of a Lagrange multiplier problem. The Section 1 presents a geometric motivation for the
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationNONLINEAR AND DYNAMIC OPTIMIZATION From Theory to Practice
NONLINEAR AND DYNAMIC OPTIMIZATION From Theory to Practice IC32: Winter Semester 2006/2007 Benoît C. CHACHUAT Laboratoire d Automatique, École Polytechnique Fédérale de Lausanne CONTENTS 1 Nonlinear
More informationOn the Spaces of λconvergent and Bounded Sequences
Thai Journal of Mathematics Volume 8 2010 Number 2 : 311 329 www.math.science.cmu.ac.th/thaijournal Online ISSN 16860209 On the Spaces of λconvergent and Bounded Sequences M. Mursaleen 1 and A.K. Noman
More information1.3 Induction and Other Proof Techniques
4CHAPTER 1. INTRODUCTORY MATERIAL: SETS, FUNCTIONS AND MATHEMATICAL INDU 1.3 Induction and Other Proof Techniques The purpose of this section is to study the proof technique known as mathematical induction.
More informationTOPIC 4: DERIVATIVES
TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the
More informationPROPERTIES OF SOME NEW SEMINORMED SEQUENCE SPACES DEFINED BY A MODULUS FUNCTION
STUDIA UNIV. BABEŞ BOLYAI, MATHEMATICA, Volume L, Number 3, September 2005 PROPERTIES OF SOME NEW SEMINORMED SEQUENCE SPACES DEFINED BY A MODULUS FUNCTION YAVUZ ALTIN AYŞEGÜL GÖKHAN HIFSI ALTINOK Abstract.
More informationIrreducibility criteria for compositions and multiplicative convolutions of polynomials with integer coefficients
DOI: 10.2478/auom20140007 An. Şt. Univ. Ovidius Constanţa Vol. 221),2014, 73 84 Irreducibility criteria for compositions and multiplicative convolutions of polynomials with integer coefficients Anca
More informationTRIPLE FIXED POINTS IN ORDERED METRIC SPACES
Bulletin of Mathematical Analysis and Applications ISSN: 18211291, URL: http://www.bmathaa.org Volume 4 Issue 1 (2012., Pages 197207 TRIPLE FIXED POINTS IN ORDERED METRIC SPACES (COMMUNICATED BY SIMEON
More informationPrime Numbers and Irreducible Polynomials
Prime Numbers and Irreducible Polynomials M. Ram Murty The similarity between prime numbers and irreducible polynomials has been a dominant theme in the development of number theory and algebraic geometry.
More informationSection 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
More informationOn Nicely Smooth Banach Spaces
E extracta mathematicae Vol. 16, Núm. 1, 27 45 (2001) On Nicely Smooth Banach Spaces Pradipta Bandyopadhyay, Sudeshna Basu StatMath Unit, Indian Statistical Institute, 203 B.T. Road, Calcutta 700035,
More informationIRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction
IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible
More informationAdvanced Microeconomics
Advanced Microeconomics Ordinal preference theory Harald Wiese University of Leipzig Harald Wiese (University of Leipzig) Advanced Microeconomics 1 / 68 Part A. Basic decision and preference theory 1 Decisions
More informationPOWER SETS AND RELATIONS
POWER SETS AND RELATIONS L. MARIZZA A. BAILEY 1. The Power Set Now that we have defined sets as best we can, we can consider a sets of sets. If we were to assume nothing, except the existence of the empty
More informationIntroduction to Convex Optimization for Machine Learning
Introduction to Convex Optimization for Machine Learning John Duchi University of California, Berkeley Practical Machine Learning, Fall 2009 Duchi (UC Berkeley) Convex Optimization for Machine Learning
More informationTHE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS
THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear
More informationRecall that the gradient of a differentiable scalar field ϕ on an open set D in R n is given by the formula:
Chapter 7 Div, grad, and curl 7.1 The operator and the gradient: Recall that the gradient of a differentiable scalar field ϕ on an open set D in R n is given by the formula: ( ϕ ϕ =, ϕ,..., ϕ. (7.1 x 1
More informationt := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d).
1. Line Search Methods Let f : R n R be given and suppose that x c is our current best estimate of a solution to P min x R nf(x). A standard method for improving the estimate x c is to choose a direction
More informationEMBEDDING COUNTABLE PARTIAL ORDERINGS IN THE DEGREES
EMBEDDING COUNTABLE PARTIAL ORDERINGS IN THE ENUMERATION DEGREES AND THE ωenumeration DEGREES MARIYA I. SOSKOVA AND IVAN N. SOSKOV 1. Introduction One of the most basic measures of the complexity of a
More information