Second-order conditions in C^{1,1} constrained vector optimization


Ivan Ginchev, Angelo Guerraggio, Matteo Rocca
Second-order conditions in C^{1,1} constrained vector optimization
2004/17
UNIVERSITÀ DELL'INSUBRIA, FACOLTÀ DI ECONOMIA
These Working Papers collect the work of the Faculty of Economics of the University of Insubria. The publication of work by other Authors can be proposed by a member of the Faculty, provided that the paper has been presented in public. The name of the proposer is reported in a footnote. The views expressed in the Working Papers reflect the opinions of the Authors only, and not necessarily the ones of the Economics Faculty of the University of Insubria.
Copyright Ivan Ginchev, Angelo Guerraggio, Matteo Rocca
Printed in Italy in June 2004
Università degli Studi dell'Insubria, Via Ravasi 2, Varese, Italy
All rights reserved. No part of this paper may be reproduced in any form without permission of the Author.
Second-order conditions in C^{1,1} constrained vector optimization

Ivan Ginchev, Technical University of Varna, Department of Mathematics, 9010 Varna, Bulgaria
Angelo Guerraggio, University of Insubria, Department of Economics, Varese, Italy
Matteo Rocca, University of Insubria, Department of Economics, Varese, Italy

Abstract. We consider the constrained vector optimization problem min_C f(x), g(x) ∈ −K, where f : R^n → R^m and g : R^n → R^p are C^{1,1} functions, and C ⊂ R^m and K ⊂ R^p are closed convex cones with nonempty interiors. Two types of solutions are important for our considerations, namely w-minimizers (weakly efficient points) and i-minimizers (isolated minimizers). We formulate and prove, in terms of the Dini directional derivative, second-order necessary conditions for a point x^0 to be a w-minimizer and second-order sufficient conditions for x^0 to be an i-minimizer of order two. We discuss the possible reversal of the sufficient conditions under suitable constraint qualifications of Kuhn-Tucker type. The obtained results improve the ones in Liu, Neittaanmäki, Křížek [20].

Key words: Vector optimization, C^{1,1} optimization, Dini derivatives, Second-order conditions, Duality.
Math. Subject Classification: 90C29, 90C30, 90C46, 49J52.

1 Introduction

In this paper we deal with the constrained vector optimization problem
min_C f(x), g(x) ∈ −K, (1)
where f : R^n → R^m and g : R^n → R^p are given functions, and C ⊂ R^m and K ⊂ R^p are closed convex cones with nonempty interiors. Here n, m and p are positive integers. In the case when f and g are C^{1,1} functions we derive second-order optimality conditions for a point x^0 to be a solution of this problem. The paper is thought of as a continuation of the investigation initiated by the authors in [7], where first-order conditions are derived in the case when f and g are C^{0,1} functions. Recall that a function is said to be C^{k,1} if it is k times Fréchet differentiable with locally Lipschitz k-th derivative. The C^{0,1} functions are exactly the locally Lipschitz functions.
The C^{1,1} functions have been introduced in Hiriart-Urruty, Strodiot, Nguyen [13] and since then have found various applications in optimization. In particular, second-order conditions for C^{1,1} scalar problems are studied in [13, 5, 16, 25, 26]. Second-order optimality conditions in vector optimization are investigated in [1, 4, 15, 21, 24], and, as concerns C^{1,1} vector optimization, in [9, 10, 18, 19, 20]. The approach and the results given in the present paper generalize those of [20]. The assumption that f and g are defined on the whole space R^n is taken for convenience. Since we deal only with local solutions of problem (1), our results evidently extend in a straightforward way to functions f and g defined on an open subset of R^n. Usually the solutions of (1) are called points of efficiency. We prefer, as in scalar optimization, to call them minimizers. In Section 2 we define different types of minimizers. Among them, an important role in our considerations is played by the w-minimizers (weakly efficient points) and the i-minimizers (isolated minimizers).
When we say first- or second-order conditions we mean, as usual, conditions expressed in terms of suitable first- or second-order derivatives of the given functions. Here we deal with the Dini directional derivatives. In Section 3 the first-order Dini derivative is defined and, after [7], first-order optimality conditions for C^{0,1} functions are recalled. In Section 4 we define the second-order Dini derivative and formulate and prove second-order optimality conditions for C^{1,1} functions. We note that our results improve the ones obtained in Liu, Neittaanmäki, Křížek [20]. In Section 5 we discuss the possibility to revert the obtained sufficient conditions under suitable constraint qualifications.

2 Preliminaries

For the norm and the dual pairing in the considered finite-dimensional spaces we write ‖·‖ and ⟨·,·⟩. From the context it should be clear to exactly which spaces these notations are applied. For a cone M ⊂ R^k its positive polar cone M′ is defined by M′ = {ζ ∈ R^k : ⟨ζ, φ⟩ ≥ 0 for all φ ∈ M}. The cone M′ is closed and convex. It is known that M″ := (M′)′ = cl co M, see Rockafellar [23, Theorem 14.1, page 121]. In particular, for a closed convex cone M we have M′ = {ζ ∈ R^k : ⟨ζ, φ⟩ ≥ 0 for all φ ∈ M} and M = M″ = {φ ∈ R^k : ⟨ζ, φ⟩ ≥ 0 for all ζ ∈ M′}. If φ ∈ −cl conv M, then ⟨ζ, φ⟩ ≤ 0 for all ζ ∈ M′. We set M′[φ] = {ζ ∈ M′ : ⟨ζ, φ⟩ = 0}. Then M′[φ] is a closed convex cone and M′[φ] ⊂ M′. Consequently its positive polar cone M[φ] := (M′[φ])′ is a closed convex cone, M ⊂ M[φ], and its positive polar cone satisfies (M[φ])′ = M′[φ]. In this paper we apply this notation for M = K and φ = g(x^0). Then we deal with the cones K′[g(x^0)] (we call this cone the index set of problem (1) at x^0) and K[g(x^0)]. For a closed convex cone M we apply in the sequel the notation Γ_{M′} = {ζ ∈ M′ : ‖ζ‖ = 1}. The set Γ_{M′} is compact, since it is closed and bounded.
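The polar-cone calculus above is easy to check numerically. The sketch below (the function names and the sample cone are ours, not from the paper) tests membership in the positive polar M′ of a finitely generated cone M, and in the face M′[φ] = {ζ ∈ M′ : ⟨ζ, φ⟩ = 0}, for M = R²₊, which is self-polar.

```python
import numpy as np

def in_positive_polar(zeta, generators):
    """zeta ∈ M' for M generated by the given vectors:
    <zeta, phi> >= 0 must hold for every generator phi (hence on all of M)."""
    return all(np.dot(zeta, phi) >= -1e-12 for phi in generators)

def in_polar_face(zeta, generators, phi):
    """zeta ∈ M'[phi] = {ζ ∈ M' : <ζ, phi> = 0}."""
    return in_positive_polar(zeta, generators) and abs(np.dot(zeta, phi)) <= 1e-12

# M = R^2_+, generated by the coordinate axes; (R^2_+)' = R^2_+.
gens = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

print(in_positive_polar(np.array([2.0, 3.0]), gens))   # True: inside the polar
print(in_positive_polar(np.array([1.0, -1.0]), gens))  # False: violates the axis (0,1)

# With phi = (0,-1) ∈ -M (the role played by g(x^0) in the paper),
# M'[phi] is the ray {(z1, 0) : z1 >= 0}.
phi = np.array([0.0, -1.0])
print(in_polar_face(np.array([1.0, 0.0]), gens, phi))  # True
print(in_polar_face(np.array([0.0, 1.0]), gens, phi))  # False
```

This mirrors the paper's use of K′[g(x^0)] as the index set: the face keeps exactly the polar vectors that are "active" at φ.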
Now we recall the concept of the so-called oriented distance from a point to a set, applied to give a scalar characterization of some types of solutions of vector optimization problems. Given a set A ⊂ R^k, the distance from y ∈ R^k to A is d(y, A) = inf{‖a − y‖ : a ∈ A}. The oriented distance from y to A is defined by D(y, A) = d(y, A) − d(y, R^k \ A). The function D is introduced in Hiriart-Urruty [11, 12] and since then many authors have shown its usefulness in optimization. Ginchev, Hoffmann [8] apply the oriented distance to study the approximation of set-valued functions by single-valued ones and, in the case of a convex set A, show the representation D(y, A) = sup_{‖ξ‖=1} (⟨ξ, y⟩ − sup_{a∈A} ⟨ξ, a⟩) (this formula resembles the conjugate of the support function of A, see [23]). From here, when A = −C and C is a closed convex cone, taking into account that sup_{a∈−C} ⟨ξ, a⟩ = 0 if ξ ∈ C′ and = +∞ if ξ ∉ C′, we get easily D(y, −C) = sup_{‖ξ‖=1, ξ∈C′} ⟨ξ, y⟩. In terms of the distance function we have −K[g(x^0)] = {w ∈ R^p : limsup_{t→0+} (1/t) d(g(x^0) + tw, −K) = 0}, that is, −K[g(x^0)] is the contingent cone [3] of −K at g(x^0). We call the solutions of problem (1) minimizers. The solutions are understood in a local sense. In any case a solution is a feasible point x^0, that is a point satisfying the constraint g(x^0) ∈ −K. The feasible point x^0 is said to be a w-minimizer (weakly efficient point) for (1) if there exists a neighbourhood U of x^0 such that f(x) ∉ f(x^0) − int C for all x ∈ U ∩ g^{−1}(−K). The feasible point x^0 is said to be an e-minimizer (efficient point) for (1) if there exists a neighbourhood U of x^0 such that
f(x) ∉ f(x^0) − (C \ {0}) for all x ∈ U ∩ g^{−1}(−K). We say that the feasible point x^0 is an s-minimizer (strong minimizer) for (1) if there exists a neighbourhood U of x^0 such that f(x) ∉ f(x^0) − C for all x ∈ (U \ {x^0}) ∩ g^{−1}(−K). In [7], through the oriented distance, the following characterization is derived. The feasible point x^0 is a w-minimizer (s-minimizer) for the vector problem (1) if and only if x^0 is a minimizer (strong minimizer) for the scalar problem
min D(f(x) − f(x^0), −C), g(x) ∈ −K. (2)
Here we have an example of a scalarization of the vector optimization problem. By scalarization we mean a reduction of the vector problem to an equivalent scalar optimization problem. We introduce another concept of optimality. We say that the feasible point x^0 is an i-minimizer (isolated minimizer) of order k, k > 0, for (1) if there exist a neighbourhood U of x^0 and a constant A > 0 such that D(f(x) − f(x^0), −C) ≥ A ‖x − x^0‖^k for x ∈ U ∩ g^{−1}(−K). Although the notion of isolated minimizer is defined through norms, the following reasoning shows that in fact it is norm-independent. From the nonnegativeness of the right-hand side, the definition of the i-minimizer of order k can equivalently be given by the inequality d(f(x) − f(x^0), −C) ≥ A ‖x − x^0‖^k for x ∈ U ∩ g^{−1}(−K). Assume that another pair of norms, denoted ‖·‖_1, is introduced in the original and the image space. As is known, any two norms in a finite-dimensional space over the reals are equivalent. Therefore there exist positive constants α_1, α_2, β_1, β_2 such that α_1 ‖x‖ ≤ ‖x‖_1 ≤ α_2 ‖x‖ for all x ∈ X := R^n, and β_1 ‖y‖ ≤ ‖y‖_1 ≤ β_2 ‖y‖ for all y ∈ Y := R^m. Denote by d_1 the distance associated to the norm ‖·‖_1. For A ⊂ Y we have d_1(y, A) = inf{‖y − a‖_1 : a ∈ A} ≥ inf{β_1 ‖y − a‖ : a ∈ A} = β_1 d(y, A), whence d_1(f(x) − f(x^0), −C) ≥ β_1 d(f(x) − f(x^0), −C) ≥ β_1 A ‖x − x^0‖^k ≥ (β_1 A / α_2^k) ‖x − x^0‖_1^k for x ∈ U ∩ g^{−1}(−K). Therefore, if x^0 is an i-minimizer of order k for (1) with respect to the pair of norms ‖·‖, it is also an i-minimizer with respect to the pair of norms ‖·‖_1; only the constant A should be changed to β_1 A / α_2^k. Obviously, each i-minimizer is an s-minimizer.
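As a numerical sanity check of the representation D(y, −C) = sup{⟨ξ, y⟩ : ξ ∈ C′, ‖ξ‖ = 1} for a closed convex cone C, the sketch below (our illustration, not from the paper, with C = R²₊ so that C′ = C) compares that formula, approximated over a grid of unit directions, with the oriented distance computed directly from its definition D(y, A) = d(y, A) − d(y, complement of A).

```python
import numpy as np

def oriented_distance_formula(y, n_dirs=20000):
    """D(y, -C) for C = R^2_+ via the max formula over unit vectors of C' = R^2_+;
    these unit vectors sweep the first-quadrant arc of the circle."""
    thetas = np.linspace(0.0, np.pi / 2, n_dirs)
    xis = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    return float(np.max(xis @ y))

def oriented_distance_direct(y):
    """D(y, A) = d(y, A) - d(y, complement of A) for A = -C = R^2_-."""
    # distance to R^2_-: clip away the negative parts and take the norm of the rest
    d_in = float(np.linalg.norm(np.maximum(y, 0.0)))
    # distance to the complement of R^2_-: zero unless y is interior to R^2_-,
    # in which case it is the distance to the nearest coordinate axis
    d_out = min(-y[0], -y[1]) if (y[0] < 0 and y[1] < 0) else 0.0
    return d_in - d_out

for y in [np.array([1.0, 1.0]), np.array([-1.0, -2.0]), np.array([3.0, -1.0])]:
    assert abs(oriented_distance_formula(y) - oriented_distance_direct(y)) < 1e-3
```

Note how the sign behaves: D is positive outside −C (e.g. y = (1, 1)) and negative in its interior (e.g. y = (−1, −2)), which is exactly what makes the scalarization (2) detect w-minimizers.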
Further, each s-minimizer is an e-minimizer and each e-minimizer is a w-minimizer (claiming this we assume that C ≠ R^m). The concept of an isolated minimizer for scalar problems is introduced in Auslender [2]. For vector problems it has been extended in Ginchev [6], Ginchev, Guerraggio, Rocca [7], and, under the name of strict minimizers, in Jiménez [14] and Jiménez, Novo [15]. We prefer the name isolated minimizer, given originally by A. Auslender.

3 First-order conditions

In this section, after [7], we recall optimality conditions for problem (1) with C^{0,1} data in terms of the first-order Dini directional derivative. The proofs can be found in [7]. Given a C^{0,1} function Φ : R^n → R^k, we define the Dini directional derivative Φ′_u(x^0) of Φ at x^0 in direction u ∈ R^n as the set of the cluster points of (1/t)(Φ(x^0 + tu) − Φ(x^0)) as t → 0+, that is, as the upper limit
Φ′_u(x^0) = Limsup_{t→0+} (1/t)(Φ(x^0 + tu) − Φ(x^0)).
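The set-valued nature of this derivative can be observed numerically. In the sketch below (our standard example, not from the paper) Φ(x) = x sin(log|x|), Φ(0) = 0, is locally Lipschitz, and the difference quotients (1/t)(Φ(tu) − Φ(0)) at x^0 = 0 with u = 1 equal sin(log t), whose cluster points as t → 0+ fill the whole interval [−1, 1].

```python
import math

def phi(x):
    """Locally Lipschitz (C^{0,1}) but not differentiable at 0."""
    return x * math.sin(math.log(abs(x))) if x != 0 else 0.0

def dini_quotients(u, n=4000):
    """Difference quotients (1/t)(phi(0 + t*u) - phi(0)) along t_k -> 0+."""
    qs, t = [], 1.0
    for _ in range(n):
        qs.append(phi(t * u) / t)
        t *= 0.99  # geometric sequence t_k -> 0+
    return qs

qs = dini_quotients(1.0)
print(round(min(qs), 2), round(max(qs), 2))  # approaches -1.0 and 1.0
```

The Dini derivative at 0 in direction u = 1 is thus the whole segment [−1, 1], a genuinely set-valued object, which is why the conditions below quantify over all (y^0, z^0) in it.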
If Φ is Fréchet differentiable at x^0 then the Dini derivative is a singleton, coincides with the usual directional derivative, and can be expressed in terms of the Fréchet derivative Φ′(x^0) by Φ′_u(x^0) = Φ′(x^0)u. In connection with problem (1) we deal with the Dini derivative of the function Φ : R^n → R^{m+p}, Φ(x) = (f(x), g(x)). Then we use the notation Φ′_u(x^0) = (f, g)′_u(x^0). If at least one of the derivatives f′_u(x^0) and g′_u(x^0) is a singleton, then (f, g)′_u(x^0) = f′_u(x^0) × g′_u(x^0). Let us call attention to the fact that always (f, g)′_u(x^0) ⊂ f′_u(x^0) × g′_u(x^0), but in general these two sets do not coincide. The following constraint qualification appears in the Sufficient Conditions part of Theorem 1.

Q_{0,1}(x^0): If g(x^0) ∈ −K and (1/t_k)(g(x^0 + t_k u^0) − g(x^0)) → z^0 ∈ −K[g(x^0)], then there exist u_k → u^0 and k_0 ∈ N such that for all k > k_0 it holds g(x^0 + t_k u_k) ∈ −K.

The constraint qualification Q_{0,1}(x^0) is of Kuhn-Tucker type. In 1951 Kuhn, Tucker [17] published the classical variant for differentiable functions and since then it is often cited in optimization theory.

Theorem 1 (First-order conditions) Consider problem (1) with f, g being C^{0,1} functions and C and K closed convex cones.
(Necessary Conditions) Let x^0 be a w-minimizer of problem (1). Then for each u ∈ S (here S denotes the unit sphere in R^n) the following condition is satisfied:
N_{0,1}: ∀ (y^0, z^0) ∈ (f, g)′_u(x^0) : ∃ (ξ^0, η^0) ∈ C′ × K′ : (ξ^0, η^0) ≠ (0, 0), ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ ≥ 0.
(Sufficient Conditions) Let x^0 ∈ R^n and suppose that for each u ∈ S the following condition is satisfied:
S_{0,1}: ∀ (y^0, z^0) ∈ (f, g)′_u(x^0) : ∃ (ξ^0, η^0) ∈ C′ × K′ : (ξ^0, η^0) ≠ (0, 0), ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ > 0.
Then x^0 is an i-minimizer of order one for problem (1). Conversely, if x^0 is an i-minimizer of order one for problem (1) and the constraint qualification Q_{0,1}(x^0) holds, then condition S_{0,1} is satisfied.

If g is Fréchet differentiable at x^0, then instead of the constraint qualification Q_{0,1}(x^0) we may consider the constraint qualification Q_1(x^0) given below.
Q_1(x^0): If g(x^0) ∈ −K and g′(x^0)u^0 = z^0 ∈ −K[g(x^0)], then there exist δ > 0 and a differentiable injective function ϕ : [0, δ] → −K such that ϕ(0) = g(x^0) and ϕ′(0) = g′(x^0)u^0.

In the case of a polyhedral cone K, in Q_1(x^0) the requirement ϕ : [0, δ] → −K can be replaced by ϕ : [0, δ] → −K[g(x^0)]. This condition coincides with the classical Kuhn-Tucker constraint qualification (compare with Mangasarian [22, p. 102]). The next theorem is a reformulation of Theorem 1 for C^1 problems, that is, problems with f and g being C^1 functions.

Theorem 2 Consider problem (1) with f, g being C^1 functions and C and K closed convex cones.
(Necessary Conditions) Let x^0 be a w-minimizer of problem (1). Then for each u ∈ S the following condition is satisfied:
N_1: ∃ (ξ^0, η^0) ∈ (C′ × K′) \ {(0, 0)} : ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, f′(x^0)u⟩ + ⟨η^0, g′(x^0)u⟩ ≥ 0.
(Sufficient Conditions) Let x^0 ∈ R^n. Suppose that for each u ∈ S the following condition is satisfied:
S_1: ∃ (ξ^0, η^0) ∈ (C′ × K′) \ {(0, 0)} : ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, f′(x^0)u⟩ + ⟨η^0, g′(x^0)u⟩ > 0.
Then x^0 is an i-minimizer of first order for problem (1). Conversely, if x^0 is an i-minimizer of first order for problem (1) and the constraint qualification Q_1(x^0) holds, then condition S_1 is satisfied.

The pairs of vectors (ξ^0, η^0) are usually referred to as the Lagrange multipliers. Here we have different Lagrange multipliers for different u ∈ S and different (y^0, z^0) ∈ (f, g)′_u(x^0). The natural question arises whether a common pair (ξ^0, η^0) can be chosen for all directions. The next example shows that the answer is negative even for C^1 problems.

Example 1 Let f : R^2 → R^2, f(x_1, x_2) = (x_1, x_1^2 + x_2^2), and g : R^2 → R^2, g(x_1, x_2) = (x_1, x_2). Define C = {y = (y_1, y_2) ∈ R^2 : y_2 = 0}, K = R^2. Then f and g are C^1 functions and the point x^0 = (0, 0) is a w-minimizer of problem (1) (in fact x^0 is also an i-minimizer of order two, but not an i-minimizer of order one). At the same time the only pair (ξ^0, η^0) ∈ C′ × K′ for which ⟨ξ^0, f′(x^0)u⟩ + ⟨η^0, g′(x^0)u⟩ ≥ 0 for all u ∈ S is ξ^0 = (0, 0) and η^0 = (0, 0).

Theorem 3 below guarantees that in the case of cones with nonempty interiors a nonzero pair (ξ^0, η^0) exists which satisfies the Necessary Conditions of Theorem 1 and which is common for all directions. This is the reason why the second-order conditions in the next section are derived under the assumption that C and K are closed convex cones with nonempty interiors.

Theorem 3 (Necessary Conditions) Consider problem (1) with f, g being C^1 functions and C and K closed convex cones with nonempty interiors. Let x^0 be a w-minimizer of problem (1). Then there exists a pair (ξ^0, η^0) ∈ (C′ × K′) \ {(0, 0)} such that ⟨η^0, g(x^0)⟩ = 0 and ⟨ξ^0, f′(x^0)u⟩ + ⟨η^0, g′(x^0)u⟩ = 0 for all u ∈ R^n. The latter equality can also be written as ξ^0 f′(x^0) + η^0 g′(x^0) = 0.

4 Second-order conditions

In this section we derive optimality conditions for problem (1) with C^{1,1} data in terms of the second-order Dini directional derivative.
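A standard example separating the class C^{1,1} from C^2 (our illustration, not from the paper) is Φ(x) = x|x|: its derivative Φ′(x) = 2|x| is globally Lipschitz with constant 2, yet the classical second derivative Φ″(0) does not exist, since the one-sided difference quotients of Φ′ at 0 are −2 and +2. The sketch below checks both facts numerically.

```python
import itertools

def phi(x):
    """phi(x) = x*|x| is C^{1,1} but not C^2 at the origin."""
    return x * abs(x)

def dphi(x):
    """phi'(x) = 2|x|: Lipschitz with constant 2, not differentiable at 0."""
    return 2.0 * abs(x)

# Lipschitz check for phi' on a grid: |dphi(a) - dphi(b)| <= 2|a - b|
grid = [i / 50.0 for i in range(-100, 101)]
assert all(abs(dphi(a) - dphi(b)) <= 2.0 * abs(a - b) + 1e-12
           for a, b in itertools.combinations(grid, 2))

# The classical second derivative fails at 0: one-sided quotients of phi' differ.
h = 1e-6
left = (dphi(-h) - dphi(0.0)) / (-h)   # tends to -2
right = (dphi(h) - dphi(0.0)) / h      # tends to +2
print(left, right)
```

It is exactly for such functions that the second-order Dini derivative defined next remains available (and here it is even a singleton in every direction), although the Hessian does not exist.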
Given a C^{1,1} function Φ : R^n → R^k, we define the second-order Dini directional derivative Φ″_u(x^0) of Φ at x^0 in direction u ∈ R^n as the set of the cluster points of (2/t^2)(Φ(x^0 + tu) − Φ(x^0) − t Φ′(x^0)u) as t → 0+, that is, as the upper limit
Φ″_u(x^0) = Limsup_{t→0+} (2/t^2)(Φ(x^0 + tu) − Φ(x^0) − t Φ′(x^0)u).
If Φ is twice Fréchet differentiable at x^0 then the second-order Dini derivative is a singleton and can be expressed in terms of the Hessian: Φ″_u(x^0) = Φ″(x^0)(u, u). In connection with problem (1) we deal with the Dini derivative of the function Φ : R^n → R^{m+p}, Φ(x) = (f(x), g(x)). Then we use the notation Φ″_u(x^0) = (f, g)″_u(x^0). Let us call attention to the fact that always (f, g)″_u(x^0) ⊂ f″_u(x^0) × g″_u(x^0), but in general these two sets do not coincide. We need the following lemma.

Lemma 1 Let Φ : R^n → R^k be a C^{1,1} function and let Φ′ be Lipschitz with constant L on the ball {x : ‖x − x^0‖ ≤ r}, where x^0 ∈ R^n and r > 0. Then, for u, v ∈ R^n and 0 < t < r / max(‖u‖, ‖v‖) we have
‖(2/t^2)(Φ(x^0 + tv) − Φ(x^0) − tΦ′(x^0)v) − (2/t^2)(Φ(x^0 + tu) − Φ(x^0) − tΦ′(x^0)u)‖
≤ L(‖u‖ + ‖v‖)‖v − u‖.
In particular, for v = 0 we get ‖(2/t^2)(Φ(x^0 + tu) − Φ(x^0) − tΦ′(x^0)u)‖ ≤ L‖u‖^2.

Proof. For 0 < t < r / max(‖u‖, ‖v‖) we have
‖(2/t^2)(Φ(x^0 + tv) − Φ(x^0) − tΦ′(x^0)v) − (2/t^2)(Φ(x^0 + tu) − Φ(x^0) − tΦ′(x^0)u)‖
= ‖(2/t^2)(Φ(x^0 + tv) − Φ(x^0 + tu)) − (2/t)Φ′(x^0)(v − u)‖
= ‖2 ∫_0^1 (1/t)(Φ′(x^0 + tu + st(v − u)) − Φ′(x^0))(v − u) ds‖
≤ 2L ∫_0^1 ‖u + s(v − u)‖ ds ‖v − u‖ ≤ 2L ∫_0^1 ((1 − s)‖u‖ + s‖v‖) ds ‖v − u‖ = L(‖u‖ + ‖v‖)‖v − u‖.

The next theorem states second-order necessary conditions in primal form, that is, in terms of directional derivatives.

Theorem 4 (Second-order necessary conditions, primal form) Consider problem (1) with f, g being C^{1,1} functions, and C and K closed convex cones. Let x^0 be a w-minimizer for (1). Then for each u ∈ R^n the following two conditions hold.
N′_p: (f′(x^0)u, g′(x^0)u) ∉ −(int C × int K[g(x^0)]),
N″_p: if (f′(x^0)u, g′(x^0)u) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])) and (y^0, z^0) ∈ (f, g)″_u(x^0), then it holds
conv{(y^0, z^0), im(f′(x^0), g′(x^0))} ∩ (−(int C × int K[g(x^0)])) = ∅.

Proof. Condition N′_p. Assume to the contrary that
f′(x^0)u = lim_{t→0+} (1/t)(f(x^0 + tu) − f(x^0)) ∈ −int C and (3)
g′(x^0)u = lim_{t→0+} (1/t)(g(x^0 + tu) − g(x^0)) ∈ −int K[g(x^0)]. (4)
We show first that there exists δ > 0 such that x^0 + tu is feasible for 0 < t < δ. For each η ∈ Γ_{K′} there exist δ_η > 0 and a neighbourhood V_η of η such that ⟨η̃, g(x^0 + tu)⟩ ≤ 0 for η̃ ∈ V_η and 0 < t < δ_η. To show this we consider the two possibilities. If η ∈ Γ_{K′[g(x^0)]} the claim follows from
(1/t)⟨η, g(x^0 + tu)⟩ = ⟨η, (1/t)(g(x^0 + tu) − g(x^0))⟩ → ⟨η, g′(x^0)u⟩ < 0.
For η ∈ Γ_{K′} \ Γ_{K′[g(x^0)]} we have ⟨η, g(x^0)⟩ < 0, whence from the continuity of g we get ⟨η̃, g(x^0 + tu)⟩ < 0 for (η̃, t) sufficiently close to (η, 0). From the compactness of Γ_{K′} it holds Γ_{K′} ⊂ V_{η^1} ∪ … ∪ V_{η^k}. Then ⟨η, g(x^0 + tu)⟩ ≤ 0 holds for all η ∈ Γ_{K′} with 0 < t < δ := min_{1≤i≤k} δ_{η^i}, whence g(x^0 + tu) ∈ −K. Now (3) gives that f(x^0 + tu) ∈ f(x^0) − int C for all sufficiently small t, which contradicts the assumption that x^0 is a w-minimizer for (1).
Condition N″_p. Assume to the contrary that for some u ≠ 0 condition N″_p fails (for u = 0 obviously N″_p is satisfied). Therefore we have (f′(x^0)u, g′(x^0)u) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])) and there exists (y(u), z(u)) ∈ (f, g)″_u(x^0) such that for some λ ∈ (0, 1) and some w ∈ R^n it holds
(1 − λ) y(u) + λ f′(x^0)w ∈ −int C, (1 − λ) z(u) + λ g′(x^0)w ∈ −int K[g(x^0)].
According to the definition of the Dini derivative there exists a sequence t_k → 0+ such that lim y_k(u) = y(u) and lim z_k(u) = z(u), where for v ∈ R^n we put
y_k(v) = (2/t_k^2)(f(x^0 + t_k v) − f(x^0) − t_k f′(x^0)v), z_k(v) = (2/t_k^2)(g(x^0 + t_k v) − g(x^0) − t_k g′(x^0)v).
Let v_k → u and let f′ and g′ be Lipschitz with constant L in a neighbourhood of x^0. For k large enough we have, as in Lemma 1,
‖y_k(v_k) − y_k(u)‖ ≤ L(‖u‖ + ‖v_k‖)‖v_k − u‖, ‖z_k(v_k) − z_k(u)‖ ≤ L(‖u‖ + ‖v_k‖)‖v_k − u‖.
Now we have y_k(v_k) → y(u) and z_k(v_k) → z(u), which follows from the estimates
‖y_k(v_k) − y(u)‖ ≤ ‖y_k(v_k) − y_k(u)‖ + ‖y_k(u) − y(u)‖ ≤ L(‖u‖ + ‖v_k‖)‖v_k − u‖ + ‖y_k(u) − y(u)‖,
‖z_k(v_k) − z(u)‖ ≤ ‖z_k(v_k) − z_k(u)‖ + ‖z_k(u) − z(u)‖ ≤ L(‖u‖ + ‖v_k‖)‖v_k − u‖ + ‖z_k(u) − z(u)‖.
For k = 1, 2, …, let v_k be such that w = (2(1 − λ))/(t_k λ) (v_k − u), i.e. v_k = u + (t_k λ)/(2(1 − λ)) w, and hence v_k → u. For every k we have
g(x^0 + t_k v_k) − g(x^0) = t_k g′(x^0)u + t_k g′(x^0)(v_k − u) + (t_k^2/2) z(u) + o(t_k^2)
= t_k g′(x^0)u + (t_k^2/(2(1 − λ)))((1 − λ) z(u) + λ g′(x^0)w) + o(t_k^2)
∈ −K[g(x^0)] − int K[g(x^0)] + o(t_k^2) ⊂ −int K[g(x^0)],
the last inclusion being satisfied for k large enough. We get from here, repeating in part the reasoning from the proof of N′_p, that g(x^0 + t_k v_k) ∈ −K for all sufficiently large k. Passing to a subsequence, we may assume that all x^0 + t_k v_k are feasible. The above chain of equalities and inclusions can be repeated with g, z(u) and K[g(x^0)] replaced respectively by f, y(u) and C. We obtain in this way f(x^0 + t_k v_k) ∈ f(x^0) − int C, which contradicts x^0 being a w-minimizer for (1).

Now we establish second-order necessary and sufficient optimality conditions in dual form. We assume as usual that f : R^n → R^m and g : R^n → R^p are C^{1,1} functions and C ⊂ R^m, K ⊂ R^p are closed convex cones. Then for x^0 ∈ R^n we put
Δ(x^0) = {(ξ, η) ∈ C′ × K′ : (ξ, η) ≠ (0, 0), ⟨η, g(x^0)⟩ = 0, ξ f′(x^0) + η g′(x^0) = 0}.
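The multiplier set just introduced (written Δ(x^0) here) can be computed by hand for a toy scalar problem (our example, not from the paper): minimize f(x) = x² subject to g(x) = −x ∈ −K, with C = K = R₊ and x^0 = 0. Then f′(0) = 0 and g′(0) = −1, so the condition ξ f′(0) + η g′(0) = 0 together with ξ ∈ C′ = R₊, η ∈ K′ = R₊ forces η = 0 and ξ > 0, i.e. Δ(0) = {(ξ, 0) : ξ > 0}. The sketch below evaluates the second-order difference quotients numerically and checks the strict dual inequality of the sufficient condition, together with the isolated-minimizer property of order two.

```python
def f(x):   # objective, m = 1, ordering cone C = R_+
    return x * x

def g(x):   # constraint g(x) ∈ -K with K = R_+, i.e. feasible set {x >= 0}
    return -x

x0, df0, dg0 = 0.0, 0.0, -1.0   # f'(0) = 0, g'(0) = -1

# A multiplier pair from Delta(0) = {(xi, 0) : xi > 0}:
xi0, eta0 = 1.0, 0.0

def second_order_quotients(u, t):
    """(2/t^2)(F(x0 + t u) - F(x0) - t F'(x0) u) for F = f and F = g."""
    y = 2.0 / t ** 2 * (f(x0 + t * u) - f(x0) - t * df0 * u)
    z = 2.0 / t ** 2 * (g(x0 + t * u) - g(x0) - t * dg0 * u)
    return y, z

for u in (0.5, 1.0, 2.0):
    y0, z0 = second_order_quotients(u, 1e-5)
    assert xi0 * y0 + eta0 * z0 > 0          # strict dual inequality holds
    assert abs(y0 - 2 * u * u) < 1e-6        # here y0 = 2u^2 and z0 = 0

# and x0 = 0 is indeed an isolated minimizer of order two on the feasible set,
# with constant A = 1: f(x) - f(x0) >= A (x - x0)^2 for feasible x
assert all(f(x) - f(x0) >= 1.0 * (x - x0) ** 2 - 1e-12
           for x in [k / 100.0 for k in range(0, 200)])
```

The example is of course trivial (f is C^2), but it shows how the quotients (y^0, z^0), the multipliers, and the order-two growth fit together in the sufficient condition that follows.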
Theorem 5 (Second-order conditions, dual form) Consider problem (1) with f and g being C^{1,1} functions, and C and K closed convex cones with nonempty interiors.
(Necessary Conditions) Let x^0 be a w-minimizer for (1). Then for each u ∈ R^n the following two conditions hold:
N′_p: (f′(x^0)u, g′(x^0)u) ∉ −(int C × int K[g(x^0)]),
N″_d: if (f′(x^0)u, g′(x^0)u) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])), then ∀ (y^0, z^0) ∈ (f, g)″_u(x^0) : ∃ (ξ^0, η^0) ∈ Δ(x^0) : ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ ≥ 0.
(Sufficient Conditions) Let x^0 be a feasible point for problem (1). Suppose that for each u ∈ R^n \ {0} one of the following two conditions is satisfied:
S′_p: (f′(x^0)u, g′(x^0)u) ∉ −(C × K[g(x^0)]),
S″_d: (f′(x^0)u, g′(x^0)u) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])) and ∀ (y^0, z^0) ∈ (f, g)″_u(x^0) : ∃ (ξ^0, η^0) ∈ Δ(x^0) : ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ > 0.
Then x^0 is an i-minimizer of order two for problem (1).

Proof. Necessary Conditions. The necessity of condition N′_p is proved in Theorem 4. To complete the proof we show that condition N″_p implies N″_d. Thus, we assume that the convex sets F^+ = conv{(y^0, z^0), im(f′(x^0), g′(x^0))} and F^− = −(int C × int K[g(x^0)]) do not intersect. Since K ⊂ K[g(x^0)] and both C and K have nonempty interiors, we see that both F^+ and F^− are nonempty. According to the Separation Theorem there exist a pair (ξ^0, η^0) ≠ (0, 0) and a real number α such that
⟨ξ^0, y⟩ + ⟨η^0, z⟩ ≤ α for all (y, z) ∈ F^−, ⟨ξ^0, y⟩ + ⟨η^0, z⟩ ≥ α for all (y, z) ∈ F^+. (5)
Since F^− is a cone and F^+ contains the cone im(f′(x^0), g′(x^0)), we get α = 0. Since, moreover, im(f′(x^0), g′(x^0)) is a linear space, we see that ⟨ξ^0, y⟩ + ⟨η^0, z⟩ = 0 for all (y, z) ∈ im(f′(x^0), g′(x^0)). The first inequality in (5) now gives (ξ^0, η^0) ∈ C′ × K′[g(x^0)], and hence (ξ^0, η^0) ∈ Δ(x^0). With regard to (y^0, z^0) ∈ F^+, the second inequality gives ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ ≥ 0.
Sufficient Conditions. We prove that if x^0 is not an i-minimizer of order two for (1), then there exists u^0 ∈ R^n, ‖u^0‖ = 1, for which neither of the conditions S′_p and S″_d is satisfied. Choose a monotone decreasing sequence ε_k → 0+.
Since x^0 is not an i-minimizer of order two, there exist sequences t_k → 0+ and u_k ∈ R^n, ‖u_k‖ = 1, such that g(x^0 + t_k u_k) ∈ −K and
D(f(x^0 + t_k u_k) − f(x^0), −C) = max_{ξ∈Γ_{C′}} ⟨ξ, f(x^0 + t_k u_k) − f(x^0)⟩ < ε_k t_k^2. (6)
Passing to a subsequence, we may assume u_k → u^0. We may assume also y_{0,k} → y^0 and z_{0,k} → z^0, where we have put
y_{0,k} = (2/t_k^2)(f(x^0 + t_k u^0) − f(x^0) − t_k f′(x^0)u^0), y_k = (2/t_k^2)(f(x^0 + t_k u_k) − f(x^0) − t_k f′(x^0)u_k),
z_{0,k} = (2/t_k^2)(g(x^0 + t_k u^0) − g(x^0) − t_k g′(x^0)u^0), z_k = (2/t_k^2)(g(x^0 + t_k u_k) − g(x^0) − t_k g′(x^0)u_k).
The possibility to choose convergent subsequences y_{0,k} → y^0, z_{0,k} → z^0 follows from the boundedness of the difference quotients proved in Lemma 1. Now we have (y^0, z^0) ∈ (f, g)″_{u^0}(x^0). We may assume also that 0 < t_k < r and that both f′ and g′ are Lipschitz with constant L on {x : ‖x − x^0‖ ≤ r}. Now we have z_k → z^0, which follows from the estimates obtained on the basis of Lemma 1:
‖z_k − z^0‖ ≤ ‖z_k − z_{0,k}‖ + ‖z_{0,k} − z^0‖ ≤ L(‖u_k‖ + ‖u^0‖)‖u_k − u^0‖ + ‖z_{0,k} − z^0‖.
Similar estimates for y_k show that y_k → y^0.
We prove that S′_p is not satisfied at u^0. For this purpose we must prove that f′(x^0)u^0 ∈ −C and g′(x^0)u^0 ∈ −K[g(x^0)]. Let ε > 0. We claim that there exists k_0 such that for all ξ ∈ Γ_{C′} and all k > k_0 the following inequalities hold:
⟨ξ, (1/t_k)(f(x^0 + t_k u_k) − f(x^0))⟩ < (1/3)ε, (7)
⟨ξ, f′(x^0)u_k − (1/t_k)(f(x^0 + t_k u_k) − f(x^0))⟩ < (1/3)ε, (8)
⟨ξ, f′(x^0)(u^0 − u_k)⟩ < (1/3)ε. (9)
Inequality (7) follows from (6). Inequality (8) follows from the Fréchet differentiability of f:
⟨ξ, f′(x^0)u_k − (1/t_k)(f(x^0 + t_k u_k) − f(x^0))⟩ ≤ ‖(1/t_k)(f(x^0 + t_k u_k) − f(x^0)) − f′(x^0)u_k‖ < (1/3)ε,
which is true for all sufficiently small t_k. Inequality (9) follows from
⟨ξ, f′(x^0)(u^0 − u_k)⟩ ≤ ‖f′(x^0)(u^0 − u_k)‖ < (1/3)ε,
which is true for ‖u_k − u^0‖ small enough. Now we see that for arbitrary ξ ∈ Γ_{C′} and k > k_0 we have
⟨ξ, f′(x^0)u^0⟩ = ⟨ξ, (1/t_k)(f(x^0 + t_k u_k) − f(x^0))⟩ + ⟨ξ, f′(x^0)u_k − (1/t_k)(f(x^0 + t_k u_k) − f(x^0))⟩ + ⟨ξ, f′(x^0)(u^0 − u_k)⟩ < (1/3)ε + (1/3)ε + (1/3)ε = ε,
and D(f′(x^0)u^0, −C) = max_{ξ∈Γ_{C′}} ⟨ξ, f′(x^0)u^0⟩ < ε. Since ε > 0 is arbitrary, we see D(f′(x^0)u^0, −C) ≤ 0, whence f′(x^0)u^0 ∈ −C. Similar estimates can be repeated with f and ξ ∈ Γ_{C′} replaced respectively by g and η ∈ Γ_{K′[g(x^0)]}, whence we get g′(x^0)u^0 ∈ −K[g(x^0)]. The only difference occurs with estimate (7). There we have
⟨η, (1/t_k)(g(x^0 + t_k u_k) − g(x^0))⟩ = (1/t_k)⟨η, g(x^0 + t_k u_k)⟩ ≤ 0,
since g(x^0 + t_k u_k) ∈ −K and ⟨η, g(x^0)⟩ = 0.
Now we prove that S″_d is not satisfied at u^0. We assume that (f′(x^0)u^0, g′(x^0)u^0) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])), since otherwise the first assertion in condition S″_d would not be satisfied. It is obvious that to complete the proof we must show that ∃ (y^0, z^0) ∈ (f, g)″_{u^0}(x^0) : ∀ (ξ^0, η^0) ∈ Δ(x^0) : ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ ≤ 0. Obviously, it is enough to show this inequality for (ξ^0, η^0) ∈ Δ(x^0) such that max(‖ξ^0‖, ‖η^0‖) = 1. Then
⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ = lim_k (⟨ξ^0, y_k⟩ + ⟨η^0, z_k⟩)
= lim_k ((2/t_k^2)⟨ξ^0, f(x^0 + t_k u_k) − f(x^0)⟩ + (2/t_k^2)⟨η^0, g(x^0 + t_k u_k) − g(x^0)⟩ − (2/t_k)(⟨ξ^0, f′(x^0)u_k⟩ + ⟨η^0, g′(x^0)u_k⟩))
≤ limsup_k (2/t_k^2) D(f(x^0 + t_k u_k) − f(x^0), −C) + limsup_k (2/t_k^2)⟨η^0, g(x^0 + t_k u_k)⟩ ≤ limsup_k (2/t_k^2) ε_k t_k^2 = 0.

In fact, in Theorem 5 only conditions N″_d and S″_d are given in a dual form. For convenience of the proof, conditions N′_p and S′_p are given in the same primal form as they appear in Theorem 4. However, these conditions can be represented in a dual form too. For instance, condition S′_p can be written in the equivalent dual form ∃ (ξ, η) ∈ C′ × K′[g(x^0)] : ⟨ξ, f′(x^0)u⟩ + ⟨η, g′(x^0)u⟩ > 0.

Theorems 3.1 and 4.2 in Liu, Neittaanmäki, Křížek [20] are of the same type as Theorem 5. The latter is however more general in the following aspects. Theorem 5, in contrast to [20], concerns arbitrary, and not only polyhedral, cones C. Moreover, in Theorem 5 the conclusion in the sufficient conditions part is that x^0 is an i-minimizer of order two, while in [20] the weaker conclusion is proved, namely that the reference point is only an e-minimizer.

5 Reversal of the sufficient conditions

The isolated minimizers, as a more qualified notion of efficiency, should be more essential solutions for the vector optimization problem than the e-minimizers. As a result of using i-minimizers, we have seen in Theorem 1 that under suitable constraint qualifications the sufficient conditions admit reversal. Now we discuss the similar possibility to revert the sufficient conditions part of Theorem 5 under suitable second-order constraint qualifications of Kuhn-Tucker type.
Unfortunately, the second-order case is more complicated than the first-order one. The proposed second-order constraint qualification, called Q_{1,1}(x^0), looks more complicated than the first-order ones applied in Theorem 1. In spite of this, we succeeded in proving that the conditions from the sufficient part of Theorem 5 are satisfied under the stronger hypothesis that x^0 is an i-minimizer of order two for the following constrained problem:
min_Ĉ f(x), g(x) ∈ −K, (10)
where Ĉ = conv(C ∪ im f′(x^0)).
Obviously, each i-minimizer of order two for (10) is also an i-minimizer of order two for (1). It remains an open question whether, under the hypothesis that x^0 is an i-minimizer of order two for (1), the conclusion of Theorem 6 remains true. In the sequel we consider the following constraint qualification.

Q_{1,1}(x^0): The following conditions are satisfied:
1°. g(x^0) ∈ −K,
2°. g′(x^0)u^0 ∈ −K[g(x^0)] \ (−int K[g(x^0)]),
3°. (2/t_k^2)(g(x^0 + t_k u^0) − g(x^0) − t_k g′(x^0)u^0) → z^0,
4°. for all η ∈ K′[g(x^0)] such that ⟨η, z⟩ = 0 for all z ∈ im g′(x^0), it holds ⟨η, z^0⟩ ≤ 0,
and these imply that there exist u_k → u^0 and k_0 ∈ N such that for all k > k_0 it holds g(x^0 + t_k u_k) ∈ −K.

Theorem 6 (Reversal of the second-order sufficient conditions) Let x^0 be an i-minimizer of order two for the constrained problem (10) and suppose that the constraint qualification Q_{1,1}(x^0) holds. Then one of the conditions S′_p or S″_d from Theorem 5 is satisfied.

Proof. Let x^0 be an i-minimizer of order two for problem (10), which means that g(x^0) ∈ −K and there exist r > 0 and A > 0 such that g(x) ∈ −K and ‖x − x^0‖ ≤ r imply
D(f(x) − f(x^0), −Ĉ) = max_{ξ∈Γ_{Ĉ′}} ⟨ξ, f(x) − f(x^0)⟩ ≥ A ‖x − x^0‖^2. (11)
The point x^0, being an i-minimizer of order two for problem (10), is also a w-minimizer for (10) and hence also for (1). Therefore it satisfies condition N′_p. Now it becomes obvious that for each u = u^0 one and only one of the first-order conditions in S′_p and the first part of S″_d is satisfied. Suppose that S′_p is not satisfied. Then the first part of condition S″_d holds, that is (f′(x^0)u^0, g′(x^0)u^0) ∈ −(C × K[g(x^0)]) \ (−(int C × int K[g(x^0)])). We prove that also the second part of condition S″_d holds. One of the following two cases may arise:
1°. There exists η^0 ∈ K′[g(x^0)] such that ⟨η^0, z⟩ = 0 for all z ∈ im g′(x^0) and ⟨η^0, z^0⟩ > 0. We put now ξ^0 = 0. Then we have obviously (ξ^0, η^0) ∈ C′ × K′[g(x^0)] and ξ^0 f′(x^0) + η^0 g′(x^0) = 0. Thus (ξ^0, η^0) ∈ Δ(x^0) and ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ > 0. Therefore condition S″_d is satisfied.
2°. For all η ∈ K′[g(x^0)] such that ⟨η, z⟩ = 0 for all z ∈ im g′(x^0), it holds ⟨η, z^0⟩ ≤ 0. This condition coincides with condition 4° in the constraint qualification Q_{1,1}(x^0).
Now we see that all the conditions in the constraint qualification Q_{1,1}(x^0) are satisfied. Therefore there exist u_k → u^0 and a positive integer k_0 such that for all k > k_0 it holds g(x^0 + t_k u_k) ∈ −K. Passing to a subsequence, we may assume that this inclusion holds for all k. From (11) it follows that there exists ξ_k ∈ Γ_{Ĉ′} such that
D(f(x^0 + t_k u_k) − f(x^0), −Ĉ) = ⟨ξ_k, f(x^0 + t_k u_k) − f(x^0)⟩ ≥ A t_k^2 ‖u_k‖^2.
Passing to a subsequence, we may assume that ξ_k → ξ^0 ∈ Γ_{Ĉ′}. Let us underline that each ξ ∈ Ĉ′ satisfies ξ f′(x^0) = 0. This follows from ⟨ξ, z⟩ ≥ 0 for all z ∈ im f′(x^0) and the fact that im f′(x^0) is a linear space. Now we have
⟨ξ_k, (2/t_k^2)(f(x^0 + t_k u_k) − f(x^0))⟩ = ⟨ξ_k, (2/t_k^2)(f(x^0 + t_k u_k) − f(x^0) − t_k f′(x^0)u_k)⟩ ≥ (2/t_k^2) A t_k^2 ‖u_k‖^2 = 2A ‖u_k‖^2.
Passing to the limit gives ⟨ξ^0, y^0⟩ ≥ 2A ‖u^0‖^2 = 2A. Putting η^0 = 0, we have obviously (ξ^0, η^0) ∈ Δ(x^0) and ⟨ξ^0, y^0⟩ + ⟨η^0, z^0⟩ > 0.

References

[1] B. Aghezzaf: Second-order necessary conditions of the Kuhn-Tucker type in multiobjective programming problems. Control Cybernet., no. 2.
[2] A. Auslender: Stability in mathematical programming with nondifferentiable data. SIAM J. Control Optim.
[3] J.-P. Aubin, H. Frankowska: Set-valued analysis. Birkhäuser, Boston.
[4] S. Bolintenéanu, M. El Maghri: Second-order efficiency conditions and sensitivity of efficient points. J. Optim. Theory Appl. 98.
[5] P. G. Georgiev, N. Zlateva: Second-order subdifferentials of C^{1,1} functions and optimality conditions. Set-Valued Anal., no. 2.
[6] I. Ginchev: Higher order optimality conditions in nonsmooth vector optimization. In: A. Cambini, B. K. Dass, L. Martein (Eds.), Generalized Convexity, Generalized Monotonicity, Optimality Conditions and Duality in Scalar and Vector Optimization, J. Stat. Manag. Syst., no. 1-3.
[7] I. Ginchev, A. Guerraggio, M. Rocca: First-order conditions for C^{0,1} constrained vector optimization. In: F. Giannessi, A. Maugeri (Eds.), Variational Analysis and Applications, Proc. Erice, June 20 - July 1, 2003, Kluwer Acad. Publ., Dordrecht, 2004, to appear.
[8] I. Ginchev, A. Hoffmann: Approximation of set-valued functions by single-valued ones. Discuss. Math. Differ. Incl. Control Optim.
[9] A. Guerraggio, D. T. Luc: Optimality conditions for C^{1,1} vector optimization problems. J. Optim. Theory Appl., no. 3.
[10] A. Guerraggio, D. T. Luc: Optimality conditions for C^{1,1} constrained multiobjective problems. J. Optim. Theory Appl., no. 1.
[11] J.-B. Hiriart-Urruty: New concepts in nondifferentiable programming. Analyse non convexe, Bull. Soc. Math. France.
[12] J.-B. Hiriart-Urruty: Tangent cones, generalized gradients and mathematical programming in Banach spaces. Math. Oper. Res.
[13] J.-B. Hiriart-Urruty, J.-J. Strodiot, V. Hien Nguyen: Generalized Hessian matrix and second order optimality conditions for problems with C^{1,1} data. Appl. Math. Optim.
[14] B. Jiménez: Strict efficiency in vector optimization. J. Math. Anal. Appl.
[15] B. Jiménez, V. Novo: First and second order sufficient conditions for strict minimality in nonsmooth vector optimization. J. Math. Anal. Appl.
[16] D. Klatte, K. Tammer: On the second order sufficient conditions to perturbed C^{1,1} optimization problems. Optimization.
SIAM REVIEW Vol. 45,No. 3,pp. 385 482 c 2003 Society for Industrial and Applied Mathematics Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods Tamara G. Kolda Robert Michael
More informationGeneralized compact knapsacks, cyclic lattices, and efficient oneway functions
Generalized compact knapsacks, cyclic lattices, and efficient oneway functions Daniele Micciancio University of California, San Diego 9500 Gilman Drive La Jolla, CA 920930404, USA daniele@cs.ucsd.edu
More informationWHICH SCORING RULE MAXIMIZES CONDORCET EFFICIENCY? 1. Introduction
WHICH SCORING RULE MAXIMIZES CONDORCET EFFICIENCY? DAVIDE P. CERVONE, WILLIAM V. GEHRLEIN, AND WILLIAM S. ZWICKER Abstract. Consider an election in which each of the n voters casts a vote consisting of
More informationProperties of BMO functions whose reciprocals are also BMO
Properties of BMO functions whose reciprocals are also BMO R. L. Johnson and C. J. Neugebauer The main result says that a nonnegative BMOfunction w, whose reciprocal is also in BMO, belongs to p> A p,and
More informationCOSAMP: ITERATIVE SIGNAL RECOVERY FROM INCOMPLETE AND INACCURATE SAMPLES
COSAMP: ITERATIVE SIGNAL RECOVERY FROM INCOMPLETE AND INACCURATE SAMPLES D NEEDELL AND J A TROPP Abstract Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect
More informationTHE PROBLEM OF finding localized energy solutions
600 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 45, NO. 3, MARCH 1997 Sparse Signal Reconstruction from Limited Data Using FOCUSS: A Reweighted Minimum Norm Algorithm Irina F. Gorodnitsky, Member, IEEE,
More informationRegular Languages are Testable with a Constant Number of Queries
Regular Languages are Testable with a Constant Number of Queries Noga Alon Michael Krivelevich Ilan Newman Mario Szegedy Abstract We continue the study of combinatorial property testing, initiated by Goldreich,
More informationHow many numbers there are?
How many numbers there are? RADEK HONZIK Radek Honzik: Charles University, Department of Logic, Celetná 20, Praha 1, 116 42, Czech Republic radek.honzik@ff.cuni.cz Contents 1 What are numbers 2 1.1 Natural
More informationLaw Invariant Risk Measures have the Fatou Property
Law Invariant Risk Measures have the Fatou Property E. Jouini W. Schachermayer N. Touzi Abstract S. Kusuoka [K 1, Theorem 4] gave an interesting dual characterization of law invariant coherent risk measures,
More informationAn Elementary Introduction to Modern Convex Geometry
Flavors of Geometry MSRI Publications Volume 3, 997 An Elementary Introduction to Modern Convex Geometry KEITH BALL Contents Preface Lecture. Basic Notions 2 Lecture 2. Spherical Sections of the Cube 8
More informationCOMPUTING EQUILIBRIA FOR TWOPERSON GAMES
COMPUTING EQUILIBRIA FOR TWOPERSON GAMES Appeared as Chapter 45, Handbook of Game Theory with Economic Applications, Vol. 3 (2002), eds. R. J. Aumann and S. Hart, Elsevier, Amsterdam, pages 1723 1759.
More informationHow Bad is Forming Your Own Opinion?
How Bad is Forming Your Own Opinion? David Bindel Jon Kleinberg Sigal Oren August, 0 Abstract A longstanding line of work in economic theory has studied models by which a group of people in a social network,
More informationFrom Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
SIAM REVIEW Vol. 51,No. 1,pp. 34 81 c 2009 Society for Industrial and Applied Mathematics From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images Alfred M. Bruckstein David
More informationWhen Is There a Representer Theorem? Vector Versus Matrix Regularizers
Journal of Machine Learning Research 10 (2009) 25072529 Submitted 9/08; Revised 3/09; Published 11/09 When Is There a Representer Theorem? Vector Versus Matrix Regularizers Andreas Argyriou Department
More informationOrthogonal Bases and the QR Algorithm
Orthogonal Bases and the QR Algorithm Orthogonal Bases by Peter J Olver University of Minnesota Throughout, we work in the Euclidean vector space V = R n, the space of column vectors with n real entries
More informationA SELFGUIDE TO OMINIMALITY
A SELFGUIDE TO OMINIMALITY CAMERINO TUTORIAL JUNE 2007 Y. PETERZIL, U. OF HAIFA 1. How to read these notes? These notes were written for the tutorial in the Camerino Modnet Summer school. The main source
More informationWHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT?
WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? introduction Many students seem to have trouble with the notion of a mathematical proof. People that come to a course like Math 216, who certainly
More informationMUSTHAVE MATH TOOLS FOR GRADUATE STUDY IN ECONOMICS
MUSTHAVE MATH TOOLS FOR GRADUATE STUDY IN ECONOMICS William Neilson Department of Economics University of Tennessee Knoxville September 29 289 by William Neilson web.utk.edu/~wneilson/mathbook.pdf Acknowledgments
More informationErgodicity and Energy Distributions for Some Boundary Driven Integrable Hamiltonian Chains
Ergodicity and Energy Distributions for Some Boundary Driven Integrable Hamiltonian Chains Peter Balint 1, Kevin K. Lin 2, and LaiSang Young 3 Abstract. We consider systems of moving particles in 1dimensional
More informationONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK
ONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK Definition 1. A random walk on the integers with step distribution F and initial state x is a sequence S n of random variables whose increments are independent,
More informationMatthias Beck Gerald Marchesi Dennis Pixton Lucas Sabalka
Matthias Beck Gerald Marchesi Dennis Pixton Lucas Sabalka Version.5 Matthias Beck A First Course in Complex Analysis Version.5 Gerald Marchesi Department of Mathematics Department of Mathematical Sciences
More information