Advanced Optimization

1 Advanced Optimization (10-801: CMU), Lecture 22: Fixed-point theory; nonlinear conic optimization. 07 Apr 2014. Suvrit Sra

3 Fixed-point theory. Many optimization problems reduce to fixed-point problems:
h(x) = 0 ⟺ x − h(x) = x, i.e., (I − h)(x) = x,
and in the set-valued case,
0 ∈ h(x) ⟺ x ∈ x − h(x), i.e., x ∈ (I − h)(x).

7 Fixed-point theory. Fixed point: any x that solves x = f(x).
Three types of results:
1 Geometric: Banach contraction and relatives
2 Order-theoretic: Knaster-Tarski
3 Topological: Brouwer, Schauder-Leray, etc.

8 Fixed-point theory: main concerns
existence of a solution
uniqueness of a solution
stability under small perturbations of parameters
structure of the solution set (failing uniqueness)
algorithms / approximation methods to obtain solutions
rate-of-convergence analyses

9 Fixed-point theory: Banach contraction. Some conditions under which the nonlinear equation x = Tx, x ∈ M ⊆ X, can be solved by iterating x_{k+1} = T x_k, x_0 ∈ M, k = 0, 1, 2, ...

10 Fixed-point theory: Banach contraction. Theorem (Banach 1922). Suppose (i) T : M ⊆ X → M; (ii) M is a closed, nonempty set in a complete metric space (X, d); (iii) T is q-contractive, i.e., d(Tx, Ty) ≤ q d(x, y) for all x, y ∈ M, with a constant 0 ≤ q < 1. Then we have the following:
(i) Tx = x has exactly one solution x* (T has a unique fixed point in M)
(ii) the sequence {x_k} converges to the solution x* for any x_0 ∈ M
(iii) a priori error estimate: d(x_k, x*) ≤ q^k (1 − q)^{-1} d(x_0, x_1)
(iv) a posteriori error estimate: d(x_{k+1}, x*) ≤ q (1 − q)^{-1} d(x_k, x_{k+1})
(v) (global) linear rate of convergence: d(x_{k+1}, x*) ≤ q d(x_k, x*)
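
As an illustration (not part of the original slides), here is a minimal Python sketch of the iteration for the contraction T(x) = cos x on M = [0, 1], which is q-contractive with q = sin(1) < 1; the helper name banach_iterate and the stopping rule are my own, and the a posteriori estimate (iv) is used as the stopping test while the a priori estimate (iii) is printed for comparison.

import math

def banach_iterate(T, x0, q, tol=1e-10, max_iter=10_000):
    """Iterate x_{k+1} = T(x_k); stop when q/(1-q) * d(x_k, x_{k+1}) <= tol (estimate (iv))."""
    x = x0
    d01 = None                            # d(x_0, x_1), needed for the a priori bound (iii)
    for k in range(max_iter):
        x_next = T(x)
        step = abs(x_next - x)
        if d01 is None:
            d01 = step
        if q / (1 - q) * step <= tol:     # a posteriori error estimate (iv)
            return x_next, k + 1, d01
        x = x_next
    return x, max_iter, d01

# T(x) = cos x maps [0, 1] into itself and is q-contractive there with q = sin(1).
q = math.sin(1.0)
x_star, iters, d01 = banach_iterate(math.cos, 0.5, q)
print("fixed point ~", x_star, "after", iters, "iterations")
# a priori estimate (iii): d(x_k, x*) <= q^k (1 - q)^{-1} d(x_0, x_1)
print("a priori bound at k =", iters, ":", q**iters / (1 - q) * d01)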

14 Fixed-point theory: Banach contraction. If X is a Banach space with distance d(x, y) = ‖x − y‖, the contraction condition reads ‖Tx − Ty‖ ≤ q ‖x − y‖, 0 ≤ q < 1 (contraction).
If the inequality only holds with q = 1, we call the map nonexpansive: d(Tx, Ty) ≤ d(x, y). Example: x ↦ x + 1 is nonexpansive, but has no fixed points!
A map is called contractive or weakly contractive if d(Tx, Ty) < d(x, y) for all x ≠ y in M.
Several other variations of such maps have been studied for Banach spaces (see, e.g., the book by Bauschke and Combettes (2012)).
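
A tiny numeric illustration of the distinction (added here, not in the slides): x ↦ x/2 is a contraction with q = 1/2 and its iterates converge to the unique fixed point 0, while the slide's example x ↦ x + 1 preserves distances exactly (nonexpansive) yet its iterates just drift off to infinity.

# Contraction: T(x) = x/2, q = 1/2, iterates converge to the fixed point 0.
x = 1.0
for _ in range(30):
    x = x / 2
print("contraction iterate:", x)           # ~1e-9, near the fixed point 0

# Nonexpansive: S(x) = x + 1 preserves distances, |S(x) - S(y)| = |x - y|,
# but has no fixed point; the iteration escapes to infinity.
y, z = 0.0, 5.0
for _ in range(30):
    y, z = y + 1, z + 1
print("distance preserved:", abs(y - z))    # still 5.0
print("iterate after 30 steps:", y)         # 30.0, no convergence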

16 Banach contraction proof: blackboard. Summary:
d must be positive-definite, i.e., d(x, y) = 0 iff x = y
(X, d) must be complete (every Cauchy sequence in X converges in X)
T : M → M, and M must be closed; but M need not be compact!
Contraction is often a rare luxury; nonexpansive maps are more common (we've already seen several)

23 More general fixed-point theorems.
Theorem (Brouwer FP). Every continuous function from a convex compact subset M ⊆ R^d to M itself has a fixed point.
Note: this proves existence only! It generalizes the intermediate-value theorem.
Theorem (Schauder FP). Every continuous function from a convex compact subset M of a Banach space to M itself has a fixed point.
Remarks: Computing Brouwer FPs is very hard; exponential lower bounds for finding Brouwer fixed points were given by Hirsch, Papadimitriou, Vavasis (1988): any algorithm for computing a Brouwer FP based on function evaluations only must, in the worst case, perform a number of function evaluations exponential in both the number of digits of accuracy and the dimension. Contrast with n = 1, where bisection attains accuracy 2^{-δ} in O(δ) function evaluations.
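
For the n = 1 contrast, a minimal bisection sketch (an illustration added here; the helper name and parameters are mine): for a continuous f : [a, b] → [a, b], the function g(x) = f(x) − x satisfies g(a) ≥ 0 and g(b) ≤ 0, so δ bisection steps bracket a fixed point to within (b − a) 2^{-δ} using O(δ) function evaluations.

import math

def bisect_fixed_point(f, a, b, digits=50):
    """Locate a fixed point of a continuous f: [a, b] -> [a, b] by bisecting g(x) = f(x) - x."""
    g = lambda x: f(x) - x
    # Invariant: g(a) >= 0 and g(b) <= 0, since f maps [a, b] into itself.
    for _ in range(digits):              # O(digits) function evaluations
        m = 0.5 * (a + b)
        if g(m) >= 0:
            a = m                        # a fixed point lies in [m, b]
        else:
            b = m                        # a fixed point lies in [a, m]
    return 0.5 * (a + b)                 # error <= (b0 - a0) * 2**(-digits)

print(bisect_fixed_point(math.cos, 0.0, 1.0))   # ~0.7390851, the fixed point of cos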

26 Kakutani fixed-point theorem. FP theorem for set-valued mappings (recall x ∈ (I − f)(x)).
Set-valued map F : M → 2^M: each x ∈ M is mapped to F(x) ∈ 2^M, i.e., F(x) ⊆ M.
Closed graph: {(x, y) : y ∈ F(x)} is a closed subset of X × X (i.e., x_k → x, y_k → y and y_k ∈ F(x_k) imply y ∈ F(x)).
Theorem (S. Kakutani 1941). Let M ⊆ R^n be nonempty, convex, compact. Let F : M → 2^M be a set-valued map with a closed graph; also, for all x ∈ M, let F(x) be nonempty and convex. Then F has a fixed point.
Application: see the proof of Nash equilibrium on Wikipedia.

34 Brouwer FP example. Consider a Markov transition matrix A ∈ R^{n×n}_+. Column stochastic: a_ij ≥ 0 and Σ_i a_ij = 1 for 1 ≤ j ≤ n.
Claim. There is a probability vector x that is an eigenvector of A.
Prove: there exists x ≥ 0 with x^T 1 = 1 such that Ax = x.
Let Δ_n be the probability simplex (a compact, convex subset of R^n).
Verify that if x ∈ Δ_n then Ax ∈ Δ_n.
Thus A : Δ_n → Δ_n; A is obviously continuous.
Hence, by Brouwer FP, there is an x ∈ Δ_n such that Ax = x.
How to compute such an x?
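
Brouwer only asserts that such an x exists. A natural candidate, whose convergence is justified later via Hilbert-metric contraction, is the power iteration sketched below; the helper name stationary_vector and the 3-state example matrix are made up for illustration, and the matrix has strictly positive entries so that the iteration does converge.

import numpy as np

def stationary_vector(A, tol=1e-12, max_iter=10_000):
    """Power iteration on a column-stochastic matrix A; returns x in the simplex with Ax ~ x."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)              # start at the centre of the simplex
    for _ in range(max_iter):
        x_new = A @ x                    # A maps the simplex into itself (columns sum to 1)
        if np.linalg.norm(x_new - x, 1) <= tol:
            return x_new
        x = x_new
    return x

# Hypothetical 3-state Markov chain: columns sum to 1 and all entries are positive.
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.4]])
x = stationary_vector(A)
print(x, "residual:", np.linalg.norm(A @ x - x))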

35 Conic optimization

40 Some definitions. Let K be a cone in a real vector space V.
Domination: let y ∈ K and x ∈ V. We say y dominates x if αy ≤_K x ≤_K βy for some α, β ∈ R.
Max-min gauges:
M_K(x/y) := inf {β ∈ R : x ≤_K βy}
m_K(x/y) := sup {α ∈ R : αy ≤_K x}
Shorthand: ≤ for ≤_K.
Parts: we have an equivalence relation x ∼_K y on K, where x ∼_K y if x dominates y and vice versa. The equivalence classes are called the parts of the cone.
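
For concreteness (an addition, not from the slides), take K = R^n_+ with the componentwise order and restrict to vectors with strictly positive entries: then x ≤ βy iff β ≥ max_i x_i/y_i and αy ≤ x iff α ≤ min_i x_i/y_i, so the gauges reduce to simple ratios. A minimal sketch under that assumption, with helper names of my own choosing:

import numpy as np

def M_gauge(x, y):
    """M_K(x/y) = inf{beta : x <= beta*y componentwise}, for strictly positive x, y in R^n_+."""
    return np.max(x / y)

def m_gauge(x, y):
    """m_K(x/y) = sup{alpha : alpha*y <= x componentwise}, for strictly positive x, y."""
    return np.min(x / y)

x = np.array([2.0, 1.0, 4.0])
y = np.array([1.0, 1.0, 2.0])
print(M_gauge(x, y), m_gauge(x, y))    # 2.0 and 1.0: indeed 1.0*y <= x <= 2.0*y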

43 Hilbert projective metric. If x ∼_K y with y ≠ 0, then there exist α, β > 0 s.t. αy ≤ x ≤ βy.
Def. (Hilbert metric.) Let x ∼_K y and y ≠ 0. Then d_H(x, y) := log ( M(x/y) / m(x/y) ).
Proposition. Let K be a cone in V; (K, d_H) satisfies:
d_H(x, y) ≥ 0 and d_H(x, y) = d_H(y, x) for all x, y ∈ K
d_H(x, z) ≤ d_H(x, y) + d_H(y, z) for all x ∼_K y ∼_K z
d_H(αx, βy) = d_H(x, y) for all α, β > 0 and x, y ∈ K
If K is closed, then d_H(x, y) = 0 iff x = λy for some λ > 0.
In this case, if X ⊆ K is such that for each x ∈ K \ {0} there is a unique λ > 0 with λx ∈ X, and P is a part of K, then (P ∩ X, d_H) is a genuine metric space.
Proof: on blackboard.
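
A quick numeric check of the definition and of the projective property d_H(αx, βy) = d_H(x, y), again on the positive orthant with strictly positive vectors (an illustration added here; it uses the same ratio form of the gauges as the sketch above):

import numpy as np

def hilbert_metric(x, y):
    """d_H(x, y) = log(M(x/y) / m(x/y)) for strictly positive x, y in R^n_+."""
    ratios = x / y
    return np.log(np.max(ratios) / np.min(ratios))

x = np.array([2.0, 1.0, 4.0])
y = np.array([1.0, 1.0, 2.0])
print(hilbert_metric(x, y))                  # log(2/1) ~ 0.693
print(hilbert_metric(3.0 * x, 0.5 * y))      # identical: invariance under positive scaling
print(hilbert_metric(x, 7.0 * x))            # 0: d_H(x, lambda*x) = 0, the metric is projective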

46 Nonexpansive maps with d_H.
Def. (OPSH maps.) Let K ⊆ V and K' ⊆ V' be closed cones. A map f : K → K' is called order preserving if x ≤_K y implies f(x) ≤_{K'} f(y). It is homogeneous of degree r if f(λx) = λ^r f(x) for all x ∈ K and λ > 0. It is subhomogeneous if λ f(x) ≤ f(λx) for all x ∈ K and 0 < λ < 1.
Exercise: prove that if f : K → K' is OPH (order preserving and homogeneous) of degree r > 0, then d_H(f(x), f(y)) ≤ r d_H(x, y). In particular, if r = 1, then f is nonexpansive (in d_H).
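
A sanity check of the exercise (illustration only, not from the slides): on K = K' = R^n_+, the coordinatewise power map f(x) = x^r (applied entrywise) is order preserving and homogeneous of degree r, and for this particular map the bound holds with equality, d_H(f(x), f(y)) = r d_H(x, y).

import numpy as np

def hilbert_metric(x, y):
    ratios = x / y                          # strictly positive vectors assumed
    return np.log(np.max(ratios) / np.min(ratios))

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 5.0, size=3)
y = rng.uniform(0.1, 5.0, size=3)
for r in (0.5, 1.0, 2.0, 3.0):
    lhs = hilbert_metric(x**r, y**r)        # f(x) = x^r coordinatewise, OPH of degree r
    print(r, lhs, r * hilbert_metric(x, y)) # the two values agree for every r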

50 Birkhoff's theorem. Let L be a linear operator on a cone K (L : K → K).
Contraction ratio: κ(L) := inf {λ ≥ 0 : d_H(Lx, Ly) ≤ λ d_H(x, y) for all x ∼_K y in K}.
Theorem (Birkhoff.) Let Δ(L) := sup {d_H(Lx, Ly) : Lx ∼_K Ly} be the projective diameter of L. Then κ(L) = tanh( (1/4) Δ(L) ).
If Δ(L) < ∞, then we have a strict contraction!
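
A numeric illustration of Birkhoff's theorem, added here for K = R^n_+ and an entrywise-positive matrix: for such a matrix the projective diameter can be computed from pairs of columns, Δ(A) = max_{i,j} d_H(Ae_i, Ae_j) (consistent with the lemma a few slides below with J = [n]), and random positive vectors never violate d_H(Ax, Ay) ≤ tanh(Δ(A)/4) d_H(x, y). The helper names and example matrix are mine.

import numpy as np

def d_H(x, y):
    ratios = x / y                          # strictly positive vectors in R^n_+
    return np.log(np.max(ratios) / np.min(ratios))

def birkhoff_ratio(A):
    """kappa(A) = tanh(Delta(A)/4), with Delta(A) taken over pairs of columns of a positive A."""
    cols = [A[:, j] for j in range(A.shape[1])]
    diam = max(d_H(ci, cj) for ci in cols for cj in cols)
    return np.tanh(diam / 4.0), diam

A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.4]])
kappa, diam = birkhoff_ratio(A)
print("Delta(A) =", diam, "kappa(A) =", kappa)

rng = np.random.default_rng(1)
for _ in range(5):
    x, y = rng.uniform(0.1, 2.0, 3), rng.uniform(0.1, 2.0, 3)
    print(d_H(A @ x, A @ y) <= kappa * d_H(x, y) + 1e-12)   # True every time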

56 Application to Pagerank eigenvector. Markov transition matrix A ∈ R^{n×n}_+; column stochastic: a_ij ≥ 0 and Σ_i a_ij = 1 for 1 ≤ j ≤ n.
Consider the cone K = R^n_+.
Suppose Δ(A) < ∞ (next slide).
Then d_H(Ax, Ay) ≤ κ(A) d_H(x, y): a strict contraction.
Need to argue that (Δ_n, d_H) is a complete metric space.
Invoke the Banach contraction theorem.
Linear rate of convergence.
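
Putting the pieces together, here is a sketch (using the same made-up positive column-stochastic matrix as in the earlier examples): the power iteration from the Brouwer example is a strict contraction in d_H with ratio κ(A), so the successive gaps d_H(x_{k+1}, x_k) shrink by at least the factor κ(A) per step, which is exactly the linear rate promised by the Banach estimates.

import numpy as np

def d_H(x, y):
    ratios = x / y
    return np.log(np.max(ratios) / np.min(ratios))

A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.4]])
cols = [A[:, j] for j in range(3)]
kappa = np.tanh(max(d_H(u, v) for u in cols for v in cols) / 4.0)

x = np.array([0.9, 0.05, 0.05])              # some starting point in the simplex
gaps = []
for _ in range(8):
    x_new = A @ x
    gaps.append(d_H(x_new, x))               # gap d_H(x_{k+1}, x_k) in the Hilbert metric
    x = x_new

# Each gap is at most kappa(A) < 1 times the previous one (linear rate).
print("kappa(A) =", kappa)
print(["%.2e" % g for g in gaps])
print([gaps[k + 1] / gaps[k] <= kappa + 1e-12 for k in range(len(gaps) - 1)])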

61 Application to Pagerank eigenvector. Let K = R^n_+ and K' = R^m_+, and A ∈ R^{m×n}.
A(K) ⊆ K' iff a_ij ≥ 0.
x ∼_K y is equivalent to I_x := {i : x_i > 0} = {i : y_i > 0}.
In this case we obtain d_H(x, y) = log ( max_{i,j ∈ I_x} (x_i y_j) / (x_j y_i) ).
Lemma. Let A ∈ R^{m×n}_+. If there exists J ⊆ [n] s.t. Ae_i ∼_{K'} Ae_j for all i, j ∈ J, and Ae_i = 0 for all i ∉ J, then the projective diameter Δ(A) = max_{i,j ∈ J} d_H(Ae_i, Ae_j) < ∞.

62 More applications. Geometric optimization on the psd cone: Sra, Hosseini (2013), Conic geometric optimisation on the manifold of positive definite matrices, arXiv preprint. Also MDPs, stochastic games, nonlinear eigenvalue problems, etc.

63 References.
E. Zeidler. Nonlinear Functional Analysis, Vol. 1 (Fixed-Point Theorems).
Lemmens, Nussbaum (2013). Nonlinear Perron-Frobenius Theory.
