Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1



J. Zhang
Institute of Applied Mathematics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China

X. X. Huang
School of Management, Fudan University, Shanghai 200433, China

Abstract. In this paper, we apply the quadratic penalization technique to derive a strong Lagrangian duality property for an inequality constrained invex program. Our results extend and improve the corresponding results in the literature.

Key words: Penalty function, Lagrangian duality, coercivity of a function, level-boundedness of a function, invex function.

This work is supported by the National Science Foundation of China and the Shanghai Pujiang Program.

1 Introduction

It is known that Lagrangian duality theory is an important issue in optimization theory and methodology. Of special interest in Lagrangian duality theory is the so-called strong duality property, i.e., the absence of a duality gap between the primal problem and its Lagrangian dual problem; more specifically, the optimal value of the primal problem is equal to that of its Lagrangian dual problem. For a constrained convex program, a number of conditions have been obtained for the strong duality property; see, e.g., [5, 1, 6] and the references therein. It is also well known that the penalty method is a very popular method in constrained nonlinear programming [3]. In [4], a quadratic penalization technique was applied to establish the strong Lagrangian duality property for an invex program under the assumption that the objective function is coercive. In this paper, we derive the same results under weaker conditions, so our results improve those of [4].

Consider the following inequality constrained optimization problem:

(P)  min f(x)  s.t. x ∈ R^n, g_j(x) ≤ 0, j = 1, …, m,

where f, g_j (j = 1, …, m) : R^n → R^1 are continuously differentiable.

The Lagrangian function for (P) is

L(x, µ) = f(x) + Σ_{j=1}^m µ_j g_j(x),  x ∈ R^n, µ = (µ_1, …, µ_m) ∈ R^m_+.

The Lagrangian dual function for (P) is

h(µ) = inf_{x ∈ R^n} L(x, µ),  µ ∈ R^m_+.

The Lagrangian dual problem for (P) is

(D)  sup_{µ ∈ R^m_+} h(µ).

Denote by M_P and M_D the optimal values of (P) and (D), respectively. It is known that weak duality holds: M_P ≥ M_D. However, there is usually a duality gap, i.e., M_P > M_D. If M_P = M_D, we say that the strong Lagrangian duality property holds (or the zero duality gap property holds).

Recall that a differentiable function u : R^n → R^1 is invex if there exists a vector-valued function η : R^n × R^n → R^n such that

u(x) − u(y) ≥ η^T(x, y) ∇u(y),  ∀ x, y ∈ R^n.

Clearly, a differentiable convex function u is invex with η(x, y) = x − y.
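For a small convex (hence invex) instance, the dual function can be written in closed form, which makes the zero duality gap property easy to observe. The following is a minimal numerical sketch on a toy instance of our own choosing (min x² s.t. 1 − x ≤ 0; the instance and the grid maximization are illustrative, not taken from the paper):

```python
# Toy instance of (P): f(x) = x^2, g(x) = 1 - x <= 0, so the feasible set is x >= 1.
# Primal optimum: x* = 1, so M_P = f(1) = 1.
# Dual function: h(mu) = inf_x [x^2 + mu*(1 - x)] = mu - mu^2/4 (inner minimum at x = mu/2).

def h(mu):
    # Lagrangian dual function, with the inner minimization done analytically.
    return mu - mu ** 2 / 4.0

M_P = 1.0
# Maximize h over a grid on mu in [0, 4]; the maximizer is mu* = 2 with h(2) = 1.
M_D = max(h(i / 100.0) for i in range(401))
gap = M_P - M_D  # zero duality gap for this instance
```

Here strong duality holds because the instance is convex; the point of the paper is that the same conclusion can be reached for invex programs under suitable penalization conditions.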
It is known from [2] that a differentiable function u is invex if and only if each stationary point of u is a global minimizer of u on R^n.
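This characterization gives a practical way to recognize invex functions that are not convex. A minimal sketch, with u(x) = log(1 + x²) as our own illustrative example (not from the paper): u is non-convex on R, yet its only stationary point, x = 0, is the global minimizer, so by the characterization it is invex.

```python
import math

def u(x):
    # u(x) = log(1 + x^2): non-convex on R, but invex.
    return math.log(1.0 + x * x)

def du(x):
    # u'(x) = 2x / (1 + x^2); its only zero is x = 0.
    return 2.0 * x / (1.0 + x * x)

grid = [i / 10.0 for i in range(-50, 51)]
stationary = [x for x in grid if abs(du(x)) < 1e-12]   # only x = 0
x_min = min(grid, key=u)                               # global minimizer on the grid

# A negative second difference witnesses non-convexity near x = 2.
second_diff = u(1.9) + u(2.1) - 2.0 * u(2.0)
```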

Let X ⊆ R^n be nonempty. A function u : R^n → R^1 is said to be level-bounded on X if, for any real number t, the set {x ∈ X : u(x) ≤ t} is bounded. It is easily checked that u is level-bounded on X if and only if either X is bounded, or X is unbounded and u is coercive on X (i.e., lim_{x ∈ X, ‖x‖ → ∞} u(x) = +∞).

2 Main Results

In this section, we present the main results of this paper.

Consider the following quadratic penalty function and the corresponding penalty problem for (P):

P_k(x) = f(x) + k Σ_{j=1}^m g_j^+(x)^2,  x ∈ R^n,

(P_k)  min_{x ∈ R^n} P_k(x),

where g_j^+(x) = max{g_j(x), 0} and the integer k > 0 is the penalty parameter.

For any t ∈ R^1, denote X(t) = {x ∈ R^n : g_j(x) ≤ t, j = 1, …, m}. It is obvious that X(0) is the feasible set of (P). In the sequel, we always assume that X(0) ≠ ∅.

We need the following lemma.

Lemma 2.1. Suppose that there exists t_0 > 0 such that f is level-bounded on X(t_0), and that there exist k* > 0 and m_0 ∈ R^1 such that P_{k*}(x) ≥ m_0 for all x ∈ R^n. Then

(i) the optimal set of (P) is nonempty and compact;

(ii) there exists k̄ > 0 such that for each k ≥ k̄, the penalty problem (P_k) has an optimal solution x_k; the sequence {x_k} is bounded and all of its limit points are optimal solutions of (P).

Proof. (i) Since X(0) ⊆ X(t_0) is nonempty and f is level-bounded on X(t_0), we see that f is level-bounded on X(0). By the standard existence theory in optimization, we conclude that the solution set of (P) is nonempty and compact.

(ii) Let x_0 ∈ X(0) and let k̄ ≥ k* + 1 satisfy

[(f(x_0) + 1 − m_0) / (k̄ − k*)]^{1/2} ≤ t_0.

Note that when k ≥ k̄,

P_k(x) = f(x) + k* Σ_{j=1}^m g_j^+(x)^2 + (k − k*) Σ_{j=1}^m g_j^+(x)^2 ≥ m_0 + (k − k*) Σ_{j=1}^m g_j^+(x)^2.

Consequently, P_k(x) is bounded below by m_0 on R^n.

For any fixed k ≥ k̄, suppose that {y_l} satisfies P_k(y_l) → inf_{x ∈ R^n} P_k(x). Then, when l is sufficiently large,

f(x_0) + 1 = P_k(x_0) + 1 ≥ P_k(y_l) = f(y_l) + k Σ_{j=1}^m g_j^+(y_l)^2 ≥ m_0 + (k − k*) Σ_{j=1}^m g_j^+(y_l)^2.  (1)

Thus,

g_j^+(y_l)^2 ≤ (f(x_0) + 1 − m_0) / (k − k*),  j = 1, …, m.

It follows that

g_j^+(y_l) ≤ [(f(x_0) + 1 − m_0) / (k − k*)]^{1/2} ≤ [(f(x_0) + 1 − m_0) / (k̄ − k*)]^{1/2} ≤ t_0,  j = 1, …, m.

That is, y_l ∈ X(t_0) when l is sufficiently large. From (1), we have f(y_l) ≤ f(x_0) + 1 when l is sufficiently large. By the level-boundedness of f on X(t_0), we see that {y_l} is bounded. We assume without loss of generality that y_l → x_k as l → ∞. Then P_k(y_l) → P_k(x_k) = inf_{x ∈ R^n} P_k(x). Moreover, x_k ∈ X(t_0). Thus, {x_k} is bounded. Let {x_{k_i}} be a subsequence which converges to x̄. Then, for any feasible solution x of (P),

f(x_{k_i}) + k_i Σ_{j=1}^m g_j^+(x_{k_i})^2 ≤ f(x).  (2)

That is,

m_0 + (k_i − k*) Σ_{j=1}^m g_j^+(x_{k_i})^2 ≤ f(x_{k_i}) + k* Σ_{j=1}^m g_j^+(x_{k_i})^2 + (k_i − k*) Σ_{j=1}^m g_j^+(x_{k_i})^2 ≤ f(x),

namely,

Σ_{j=1}^m g_j^+(x_{k_i})^2 ≤ (f(x) − m_0) / (k_i − k*).

Passing to the limit as i → ∞, we have g_j^+(x̄) = 0, j = 1, …, m. It follows that

g_j(x̄) ≤ 0,  j = 1, …, m.

Consequently, x̄ ∈ X(0). Moreover, from (2), we have f(x_{k_i}) ≤ f(x). Passing to the limit as i → ∞, we obtain f(x̄) ≤ f(x). By the arbitrariness of x ∈ X(0), we conclude that x̄ is an optimal solution of (P).

Remark 2.1. If f(x) is bounded below on R^n, then for any k > 0, P_k(x) is bounded below on R^n.

The next proposition presents sufficient conditions that guarantee all the conditions of Lemma 2.1.

Proposition 2.1. Either of the following conditions ensures the validity of the conditions of Lemma 2.1:

(i) f(x) is coercive on R^n;

(ii) the function max{f(x), g_j(x), j = 1, …, m} is coercive on R^n, and there exist k* > 0 and m_0 ∈ R^1 such that P_{k*}(x) ≥ m_0 for all x ∈ R^n.

Proof. We need only show that if (ii) holds, then the conditions of Lemma 2.1 hold, since condition (i) is stronger than condition (ii). Let t_0 > 0. We need only show that f is coercive on X(t_0). Otherwise, there exist σ > 0 and {y_k} ⊆ X(t_0) with ‖y_k‖ → ∞ satisfying

f(y_k) ≤ σ.  (3)

From {y_k} ⊆ X(t_0), we deduce

g_j(y_k) ≤ t_0,  j = 1, …, m.  (4)

It follows from (3) and (4) that

max{f(y_k), g_j(y_k), j = 1, …, m} ≤ max{σ, t_0},

contradicting the coercivity of max{f(x), g_j(x), j = 1, …, m}, since ‖y_k‖ → ∞ as k → ∞.
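The convergence behavior described in Lemma 2.1(ii) can be observed numerically. Below is a minimal sketch on a toy instance of our own choosing (min x² s.t. 1 − x ≤ 0, with a crude grid search standing in for a proper unconstrained solver): the minimizers x_k of the penalty problems (P_k) stay bounded and approach the constrained optimum x* = 1 as k grows.

```python
# Toy instance of (P): f(x) = x^2, g(x) = 1 - x <= 0, optimum x* = 1.
# Quadratic penalty: P_k(x) = f(x) + k * max(g(x), 0)^2.

def P(x, k):
    return x * x + k * max(1.0 - x, 0.0) ** 2

def argmin_Pk(k, lo=-2.0, hi=2.0, n=40001):
    # Crude grid search; P_k is smooth, so any 1-D solver would also do.
    step = (hi - lo) / (n - 1)
    return min((lo + i * step for i in range(n)), key=lambda x: P(x, k))

# Closed form for this instance: x_k = k / (k + 1), which tends to 1 from below.
xs = [argmin_Pk(k) for k in (1, 10, 100)]
```

Each x_k is infeasible by a shrinking margin, exactly the pattern controlled by the estimate on Σ g_j^+(x_k)² in the proof of Lemma 2.1.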

The next proposition follows immediately from Lemma 2.1 and Proposition 2.1.

Proposition 2.2. If one of the two conditions (i) and (ii) of Proposition 2.1 holds, then the conclusions of Lemma 2.1 hold.

The following theorem can be established similarly to [4, Theorem 4] by using Lemma 2.1.

Theorem 2.1. Suppose that f, g_j (j = 1, …, m) are all invex with the same η and that the conditions of Lemma 2.1 hold. Then M_P = M_D.

Corollary 2.1. Suppose that f, g_j (j = 1, …, m) are all invex with the same η and that one of the conditions (i) and (ii) of Proposition 2.1 holds. Then M_P = M_D.

References

[1] Bazaraa, M. S. and Shetty, C. M., Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, New York, 1979.

[2] Ben-Israel, A. and Mond, B., What is invexity?, J. Aust. Math. Soc., Ser. B, Vol. 28, pp. 1-9, 1986.

[3] Bertsekas, D., Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, 1982.

[4] Nahak, C., Application of the penalty function method to generalized convex programs, Applied Mathematics Letters, Vol. 20, pp. 479-483, 2007.

[5] Rockafellar, R. T., Convex Analysis, Princeton University Press, Princeton, 1970.

[6] Tseng, P., Some convex programs without a duality gap, Mathematical Programming, Vol. 116, pp. 553-578, 2009.