Computing a Nearest Correlation Matrix with Factor Structure


1 Computing a Nearest Correlation Matrix with Factor Structure. Nick Higham, School of Mathematics, The University of Manchester. higham@ma.man.ac.uk. Joint work with Rüdiger Borsdorf and Marcos Raydan. NAG Quant Event, London, October 2009.

2 Outline. Properties of structured correlation matrices. Nearness problem for factor-structured correlation matrices. Selection of optimization method. Numerical analysis issues.

3 Correlation Matrix. An $n \times n$ symmetric positive semidefinite matrix $A$ with $a_{ii} \equiv 1$. Equivalently: symmetric, 1s on the diagonal, and all eigenvalues nonnegative (or all principal minors nonnegative). Properties: off-diagonal elements lie between $-1$ and $1$; the correlation matrices form a convex set.
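
The defining properties can be checked numerically in a few lines. This is a minimal sketch, not from the talk; the tolerance tol and the test matrix are assumptions.

    % Minimal check of the correlation matrix properties (sketch).
    tol = 1e-12;                            % assumed tolerance
    A = [1 0.8; 0.8 1];                     % example matrix (assumption)
    is_corr = isequal(A, A') ...            % symmetric
        && all(abs(diag(A) - 1) < tol) ...  % unit diagonal
        && min(eig(A)) > -tol;              % eigenvalues nonnegative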

4 Quiz. Is this a correlation matrix? [Example matrix and its spectrum not captured in the transcription.] For what $w$ is
\[ \begin{pmatrix} 1 & w & w \\ w & 1 & w \\ w & w & 1 \end{pmatrix} \]
a correlation matrix? Answer: $-1/(n-1) \le w \le 1$ for the $n \times n$ version.

8 Structured Correlation Matrices. Nonnegative. Low rank. Factor structure:
\[ \begin{pmatrix} 1 & x_1x_2 & x_1x_3 \\ x_1x_2 & 1 & x_2x_3 \\ x_1x_3 & x_2x_3 & 1 \end{pmatrix}. \]

11 Approximate Correlation Matrices. Empirical correlation matrices are often not true correlation matrices, due to asynchronous data, missing data, limited precision, or stress testing.

12 Nearest Correlation Matrix. Find $X$ achieving
\[ \min\{\, \|A - X\|_F : X \text{ is a correlation matrix} \,\}, \quad \text{where } \|A\|_F^2 = \textstyle\sum_{i,j} a_{ij}^2. \]
The constraint set is a closed, convex set, so there is a unique minimizer. Nonlinear optimization problem.

13 Optimization Problems: Issues. Existence and uniqueness of solution. Convexity? Explicit, closed-form solution? Choice of algorithm. Availability of derivatives. Starting matrix and convergence criterion. Practical behaviour.

14 Quick and Dirty Differentiation. Analytic function $f : \mathbb{R} \to \mathbb{R}$, $i = \sqrt{-1}$. Complex step approximation, for small $h$:
\[ f'(x) \approx \frac{\operatorname{Im} f(x + ih)}{h}. \]
More precisely,
\[ f'(x) = \frac{\operatorname{Im} f(x + ih)}{h} + O(h^2), \qquad f(x) = \operatorname{Re} f(x + ih) + O(h^2). \]
See the accompanying MIMS EPrint (April 2009).
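
A one-line illustration of the complex step approximation; the test function and step size below are assumptions, not from the talk. Because there is no subtraction of nearby quantities, h can be taken extremely small.

    % Complex step derivative sketch.
    f = @(x) exp(x) ./ (sin(x).^3 + cos(x).^3);  % sample analytic function (assumption)
    x = pi/4;
    h = 1e-100;                                  % no subtractive cancellation
    fprime = imag(f(x + 1i*h)) / h;              % approximates f'(x) to O(h^2)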

16 Alternating Projections Method. H (2002): repeatedly project onto the positive semidefinite matrices ($S_1$) and then the unit-diagonal matrices ($S_2$). Easy to implement. Guaranteed convergence, at a linear rate. Can add further constraints/projections, e.g., fixed elements (Lucas, 2001).
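
A minimal sketch of the iteration with Dykstra's correction, written as a standalone function; the iteration cap and convergence test are assumptions, and A is assumed symmetric.

    % Alternating projections for the nearest correlation matrix (sketch).
    function Y = ncm_altproj(A, tol)
    n = size(A,1); dS = zeros(n); Y = A;
    for it = 1:1000                      % assumed iteration cap
        R = Y - dS;                      % Dykstra correction before PSD projection
        [V,D] = eig((R + R')/2);
        X = V*max(D,0)*V';               % project onto PSD matrices (S1)
        dS = X - R;
        Y = X; Y(1:n+1:end) = 1;         % project onto unit-diagonal matrices (S2)
        if norm(Y - X,'fro') <= tol*norm(Y,'fro'), break, end
    end
    end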

17 Newton Method. Qi & Sun (2006): Newton method based on the theory of strongly semismooth matrix functions. Applies Newton to the (unconstrained) dual of the $\min \frac12\|A - X\|_F^2$ problem. Globally and quadratically convergent. H & Borsdorf (2007) improve efficiency and reliability by using an appropriate iterative method and eigensolver, preconditioning the Newton direction solve. The basis of G02AAF (nearest correlation matrix) in NAG Library Mark 22.

18 Factor Model.
\[ \xi = X\eta + \operatorname{diag}(f_{ii})\,\varepsilon, \qquad X \in \mathbb{R}^{n \times k},\ \eta \in \mathbb{R}^{k},\ \varepsilon \in \mathbb{R}^{n}, \]
with the entries of $\eta$ and $\varepsilon$ distributed $N(0,1)$. Since $E(\xi) = 0$,
\[ \operatorname{cov}(\xi) = E(\xi\xi^T) = XX^T + F^2, \quad F = \operatorname{diag}(f_{ii}). \]
Assume $\operatorname{var}(\xi_i) \equiv 1$. Then $\sum_{j=1}^k x_{ij}^2 + f_{ii}^2 = 1$, so $\sum_{j=1}^k x_{ij}^2 \le 1$, $i = 1\colon n$. Applications: collateralized debt obligations (CDOs), multivariate time series.
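
A small numerical illustration of the implied covariance; the dimensions and random factor loadings are assumptions.

    % Factor model covariance sketch: unit variances force sum_j x_ij^2 <= 1.
    n = 5; k = 2;                          % assumed sizes
    X = rand(n,k)/sqrt(k);                 % loadings with sum_j x_ij^2 <= 1
    F = diag(sqrt(1 - sum(X.^2,2)));       % chosen so that var(xi_i) = 1
    C = X*X' + F^2;                        % cov(xi); diag(C) is all ones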

19 Structured Correlation Matrix. Yields a correlation matrix of the form
\[ C(X) = D + XX^T = D + \sum_{j=1}^k x_jx_j^T, \quad D = \operatorname{diag}(I - XX^T), \quad X = [x_1,\dots,x_k]. \]
$C(X)$ has $k$ factor correlation matrix structure. Writing $y_i^T$ for the rows of $X$,
\[ C(X) = \begin{pmatrix} 1 & y_1^Ty_2 & \dots & y_1^Ty_n \\ y_1^Ty_2 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & y_{n-1}^Ty_n \\ y_1^Ty_n & \dots & y_{n-1}^Ty_n & 1 \end{pmatrix}, \qquad y_i \in \mathbb{R}^k. \]
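
Forming C(X) directly from the definition is a one-liner; a sketch.

    % Build the k-factor correlation matrix C(X) = diag(I - X*X') + X*X' (sketch).
    C = @(X) eye(size(X,1)) - diag(diag(X*X')) + X*X';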

21 Aims. For $k$ factor correlation matrices, investigate the mathematical properties and the nearness problem.

22 1-Parameter Correlation Matrix.
\[ X(w) = \begin{pmatrix} 1 & w & \dots & w \\ w & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & w \\ w & \dots & w & 1 \end{pmatrix}, \qquad w \in \mathbb{R}. \]
Theorem. $\min\{\, \|A - X(w)\|_F : X(w) \text{ a corr. matrix} \,\}$ has the unique solution given by the projection onto $[-1/(n-1),\, 1]$ of
\[ w^* = \frac{e^TAe - \operatorname{trace}(A)}{n^2 - n}. \]
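
Since the theorem gives a closed form, the computation takes a few lines; a sketch assuming a symmetric matrix A is in scope.

    % Nearest one-parameter correlation matrix X(w) to A (sketch).
    n = size(A,1); e = ones(n,1);          % assumes symmetric A in scope
    w = (e'*A*e - trace(A)) / (n^2 - n);   % unconstrained minimizer w*
    w = min(max(w, -1/(n-1)), 1);          % project onto [-1/(n-1), 1]
    X = (1 - w)*eye(n) + w*ones(n);        % X(w): ones on diagonal, w elsewhere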

23 Block Structured Correlation Matrix.
\[ \begin{pmatrix} 1 & \gamma_{11} & \gamma_{12} & \gamma_{12} \\ \gamma_{11} & 1 & \gamma_{12} & \gamma_{12} \\ \gamma_{12} & \gamma_{12} & 1 & \gamma_{22} \\ \gamma_{12} & \gamma_{12} & \gamma_{22} & 1 \end{pmatrix}, \qquad C_{ij} = \begin{cases} C(\gamma_{ii}) \in \mathbb{R}^{n_i \times n_i}, & i = j, \\ \gamma_{ij}ee^T \in \mathbb{R}^{n_i \times n_j}, & i \ne j. \end{cases} \]
Objective function:
\[ f(\Gamma) = \|A - C(\Gamma)\|_F^2 = \sum_{i=1}^m \|A_{ii} - C(\gamma_{ii})\|_F^2 + \sum_{i \ne j} \|A_{ij} - \gamma_{ij}ee^T\|_F^2. \]
Convex constraint set, so unique minimizer. Alternating projections converges.

24 1-Factor Correlation Matrix.
\[ C(x) = \operatorname{diag}(1 - x_i^2) + xx^T, \qquad x \in \mathbb{R}^n, \]
i.e., $c_{ij} = x_ix_j$, $i \ne j$.
Lemma.
\[ \det(C(x)) = \prod_{i=1}^n (1 - x_i^2) + \sum_{i=1}^n x_i^2 \prod_{\substack{j=1 \\ j \ne i}}^n (1 - x_j^2). \]
Corollary. If $|x| \le e$ with $|x_i| = 1$ for at most one $i$, then $C(x)$ is nonsingular. $C(x)$ is singular if $|x_i| = |x_j| = 1$ for some $i \ne j$.
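
A quick numerical check of the determinant lemma; the test vector is an assumption, with all |x_i| < 1 so the product over j ~= i can be formed by division.

    % Verify the determinant formula for C(x) on a sample x (sketch).
    x = [0.9; -0.4; 0.7; 0.2];                 % assumed test vector, |x_i| < 1
    Cx = diag(1 - x.^2) + x*x';
    p = prod(1 - x.^2);
    rhs = p + sum(x.^2 .* (p ./ (1 - x.^2)));  % prod over j ~= i via division
    abs(det(Cx) - rhs)                         % should be of rounding-error size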

25 Rank Result. Theorem. Let $C(x) = \operatorname{diag}(1 - x_i^2) + xx^T$ with $x \in \mathbb{R}^n$, $|x| \le e$. Then $\operatorname{rank}(C(x)) = \min(p + 1, n)$, where $p$ is the number of $x_i$ for which $|x_i| < 1$. For example, $x = [1\ 1\ 1\ x_4\ x_5]^T$ gives
\[ C(x) = \begin{pmatrix} 1 & 1 & 1 & x_4 & x_5 \\ 1 & 1 & 1 & x_4 & x_5 \\ 1 & 1 & 1 & x_4 & x_5 \\ x_4 & x_4 & x_4 & 1 & x_4x_5 \\ x_5 & x_5 & x_5 & x_4x_5 & 1 \end{pmatrix}. \]

26 One-Factor Problem.
\[ \min_{x \in \mathbb{R}^n} f(x) := \|A - C(x)\|_F^2 \quad \text{subject to} \quad -e \le x \le e. \]
The objective function is nonconvex. The constraint implies $C(x)$ is a correlation matrix.

27 One-Factor Problem: Derivatives. Objective:
\[ f(x) = \langle A - I,\, A - I \rangle_F - 2x^T(A - I)x + (x^Tx)^2 - \sum_{i=1}^n x_i^4. \]
Gradient:
\[ \nabla f(x) = 4\big((x^Tx)x - (A - I)x - \operatorname{diag}(x_i^2)\,x\big). \]
Hessian:
\[ \nabla^2 f(x) = 4\big(2xx^T + (x^Tx + 1)I - A - 3\operatorname{diag}(x_i^2)\big). \]
$\nabla f(x)$ and $\nabla^2 f(x)$ are cheap to evaluate. $f$ has a saddle point at $x = 0$.
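
These formulas translate directly into code; a sketch assuming a symmetric A with unit diagonal and a vector x in scope.

    % One-factor objective and gradient from the formulas above (sketch).
    B = A - eye(size(A,1));                             % A - I
    fx   = norm(B,'fro')^2 - 2*(x'*B*x) + (x'*x)^2 - sum(x.^4);
    grad = 4*((x'*x)*x - B*x - x.^3);                   % diag(x_i^2)*x = x.^3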

28 Case 1: $f(x_*) = 0$. If $f(x_*) = 0$ then $\nabla^2 f(x_*) = 4H_n(x_*)$, where
\[ H_n(x) = (x^Tx)I + xx^T - 2D^2, \qquad D = \operatorname{diag}(x_i), \quad x \in \mathbb{R}^n. \]
For example, for $n = 4$:
\[ \begin{pmatrix} x_2^2 + x_3^2 + x_4^2 & x_1x_2 & x_1x_3 & x_1x_4 \\ x_2x_1 & x_1^2 + x_3^2 + x_4^2 & x_2x_3 & x_2x_4 \\ x_3x_1 & x_3x_2 & x_1^2 + x_2^2 + x_4^2 & x_3x_4 \\ x_4x_1 & x_4x_2 & x_4x_3 & x_1^2 + x_2^2 + x_3^2 \end{pmatrix}. \]
Theorem. $H_n$ is positive semidefinite. Moreover, $H_n$ is nonsingular iff at least three of $x_1, x_2, \dots, x_n$ are nonzero.

30 Case 2: $f(x_*) > 0$. Can write $\nabla^2 f(x) = 4\big(H_n(x) + E_n(x)\big)$, where $E_n$ has the form
\[ E_4 = \begin{pmatrix} 0 & x_1x_2 - a_{12} & x_1x_3 - a_{13} & x_1x_4 - a_{14} \\ x_2x_1 - a_{21} & 0 & x_2x_3 - a_{23} & x_2x_4 - a_{24} \\ x_3x_1 - a_{31} & x_3x_2 - a_{32} & 0 & x_3x_4 - a_{34} \\ x_4x_1 - a_{41} & x_4x_2 - a_{42} & x_4x_3 - a_{43} & 0 \end{pmatrix}. \]
Hence, if the $x_ix_j - a_{ij}$ are sufficiently small and $H_n$ is positive definite, a stationary point is a local minimizer.

31 k Factor Problem.
\[ C(X) := I - \operatorname{diag}(XX^T) + XX^T, \qquad X \in \mathbb{R}^{n \times k}. \]
The representation is not unique! $\sum_{j=1}^k x_{ij}^2 \le 1 \implies C(X)$ is a correlation matrix. The $k$ factor problem is
\[ \min_{X \in \mathbb{R}^{n \times k}} f(X) := \|A - C(X)\|_F^2 \quad \text{subject to} \quad \sum_{j=1}^k x_{ij}^2 \le 1, \quad i = 1\colon n. \]

32 k Factor Problem: Derivatives. Gradient:
\[ \nabla f(X) = 4\big(X(X^TX) - AX + X - \operatorname{diag}(XX^T)X\big). \]
The Hessian is given implicitly; it can be viewed as a matrix representation of the Fréchet derivative of $\nabla f$.
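
The gradient formula in code; a sketch assuming a symmetric A with unit diagonal and an n-by-k matrix X in scope.

    % k-factor objective and gradient (sketch).
    n  = size(A,1);
    CX = eye(n) - diag(diag(X*X')) + X*X';
    fX = norm(A - CX,'fro')^2;
    G  = 4*(X*(X'*X) - A*X + X - diag(diag(X*X'))*X);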

33 Choice of Optimization Method. Derivatives available. Ignore the constraints? Starting matrix, convergence test? Rich set of solvers in NAG Library, Mark 22: E04 (Minimizing or Maximizing a Function), E05 (Global Optimization of a Function). MATLAB Optimization Toolbox.

35 Alternating Directions. Minimize over one entry $x_{ij}$ at a time:
\[ f(x_{ij}) = \text{const.} + 2\sum_{q \ne i}\Big(a_{iq} - \sum_{s=1}^k x_{is}x_{qs}\Big)^2. \]
Hence $f'(x_{ij}) = 0$ if
\[ x_{ij} = \frac{\sum_{q \ne i} x_{qj}\big(a_{iq} - \sum_{s \ne j} x_{is}x_{qs}\big)}{\sum_{q \ne i} x_{qj}^2}. \]
Project $x_{ij}$ onto $[-1, 1]$. Convergence guaranteed. The limit may not be feasible for $k > 1$.
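
One full sweep of the update above; a sketch assuming A, X, n, k in scope, with the set q ~= i handled by indexing (and assuming the column X(q,j) is not identically zero).

    % One alternating-directions sweep over the entries of X (sketch).
    for i = 1:n
        for j = 1:k
            q = [1:i-1, i+1:n];                           % all rows except i
            r = A(i,q)' - X(q,:)*X(i,:)' + X(q,j)*X(i,j); % a_iq - sum_{s~=j} x_is*x_qs
            X(i,j) = (X(q,j)'*r) / (X(q,j)'*X(q,j));      % stationary point
            X(i,j) = min(max(X(i,j), -1), 1);             % project onto [-1,1]
        end
    end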

36 Principal Factors Method. Anderson, Sidenius & Basu (2003): with $F(X) = I - \operatorname{diag}(XX^T)$,
\[ X_i = \operatorname*{argmin}_{X \in \mathbb{R}^{n \times k}} \|A - F(X_{i-1}) - XX^T\|_F. \]
The minimum is obtained from an eigendecomposition of $A - F(X_{i-1})$. Equivalent to the alternating projections method for
\[ \mathcal{U} := \{W \in \mathbb{R}^{n \times n} : w_{ij} = a_{ij} \text{ for } i \ne j\} \ \text{(convex)}, \quad \mathcal{S} := \{W \in \mathbb{R}^{n \times n} : W = XX^T \text{ for } X \in \mathbb{R}^{n \times k}\} \ \text{(nonconvex!)}. \]
Alternating projections theory gives no guarantee of convergence! Constraints are ignored, so the final iterate is projected onto them.
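
A sketch of the iteration; the eigendecomposition step and final feasibility projection follow the description above, while the starting matrix, maxit, and the rank-k truncation details are assumptions.

    % Principal factors iteration (sketch); assumes A, n, k in scope.
    X = zeros(n,k); maxit = 100;                       % assumptions
    for it = 1:maxit
        M = A - eye(n) + diag(diag(X*X'));             % A - F(X), F(X) = I - diag(XX')
        [V,D] = eig((M + M')/2);
        [d,ix] = sort(diag(D),'descend');
        X = V(:,ix(1:k)) * diag(sqrt(max(d(1:k),0))); % best rank-k X with XX' ~ M
    end
    s = sqrt(sum(X.^2,2));                             % finally enforce sum_j x_ij^2 <= 1
    X = X ./ max(s,1);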

37 Spectral Projected Gradient Method. Birgin, Martínez & Raydan (2000). To minimize $f : \mathbb{R}^n \to \mathbb{R}$ over a convex set $\Omega$:
\[ x_{k+1} = x_k + \alpha_k d_k, \qquad d_k = P_\Omega\big(x_k - t_k \nabla f(x_k)\big) - x_k, \]
where $d_k$ is a descent direction, $t_k$ is a Barzilai-Borwein spectral steplength, and $\alpha_k \in (0, 1]$ is chosen through a nonmonotone line search strategy. Promising since $P_\Omega(\cdot)$ is cheap to compute.
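
For the k-factor problem the feasible set is the X whose rows have sum of squares at most 1, so the projection is a row-wise scaling; a sketch of one SPG step under that assumption, with X, the gradient G, and a steplength t > 0 assumed in scope.

    % One spectral projected gradient step for the k-factor problem (sketch).
    P = @(X) X ./ max(sqrt(sum(X.^2,2)), 1);   % project rows onto the unit ball
    D = P(X - t*G) - X;                        % descent direction, t: Barzilai-Borwein
    alpha = 1;                                 % reduced by nonmonotone line search
    X = X + alpha*D;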

38 SPG code is available in ACM TOMS (Algorithm 813).

39 Test Examples. corr: gallery('randcorr',n). nrand: $\frac12(B + B^T) + \operatorname{diag}(I - B)$ with $B \in [-1,1]^{n \times n}$ such that $\lambda_{\min}(B) < 0$. Results averaged over 10 instances. AD: alternating directions. PFM: principal factors method. Nwt: e04lb of the NAG Toolbox for MATLAB (modified Newton), bound constraints. SPGM: spectral projected gradient method.
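
How the two test families could be generated; a sketch, with the screening of instances (resampling until the eigenvalue condition holds) an assumption about the setup.

    % Test matrix generation (sketch).
    n = 100;                                        % assumed size
    Acorr  = gallery('randcorr', n);                % 'corr': random correlation matrix
    B      = 2*rand(n) - 1;                         % entries uniform in [-1,1]
    Anrand = (B + B')/2 + diag(diag(eye(n) - B));   % 'nrand': unit diagonal
    % assumption: resample B until the eigenvalue condition lambda_min < 0 holds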

40 Comparison: $k = 1$, $n = 2000$. For each of AD, PFM, Nwt, and SPGM, the slide tabulates $t$ (sec.), #its, and $f(x_*)$ at tolerances $10^{-3}$ and $10^{-6}$, on a corr problem with $f(x_0) = 26.0$ and an nrand problem. [Numerical table entries not captured in the transcription.]

41 Comparison: $k = 6$, $n = 1000$. Same format, on a corr problem with $f(x_0) = 18.5$ and an nrand problem with $f(x_0) = 415$. [Most table entries not captured in the transcription; the surviving fragment suggests AD required on the order of $1.28 \times 10^4$ iterations on nrand, ending near $f(x_*) = 414$.]

42 Conclusions. Performance of the methods depends on the problem type, the required tolerance, and the problem size. Alternating directions is good for $k = 1$ and low accuracy. The principal factors method is generally fast, but may not converge to a feasible point. The spectral projected gradient method wins overall. Incorporating the constraints need not hurt performance. Exploit convexity when present.

43 References I. L. Anderson, J. Sidenius, and S. Basu. All your hedges in one basket. Risk, pages 67-72, November 2003. J. Barzilai and J. M. Borwein. Two-point step size gradient methods. IMA J. Numer. Anal., 8:141-148, 1988. E. G. Birgin, J. M. Martínez, and M. Raydan. Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim., 10(4):1196-1211, 2000.

44 References II. E. G. Birgin, J. M. Martínez, and M. Raydan. Algorithm 813: SPG - software for convex-constrained optimization. ACM Trans. Math. Software, 27(3):340-349, 2001. E. G. Birgin, J. M. Martínez, and M. Raydan. Spectral projected gradient methods. In C. A. Floudas and P. M. Pardalos, editors, Encyclopedia of Optimization. Springer-Verlag, Berlin, second edition, 2009. P. Glasserman and S. Suchintabandid. Correlation expansions for CDO pricing. Journal of Banking & Finance, 31, 2007.

45 References III. N. J. Higham. Computing the nearest correlation matrix - a problem from finance. IMA J. Numer. Anal., 22(3):329-343, 2002. C. Lucas. Computing nearest covariance and correlation matrices. M.Sc. thesis, University of Manchester, Manchester, England, October 2001.

46 References IV. H.-D. Qi and D. Sun. A quadratically convergent Newton method for computing the nearest correlation matrix. SIAM J. Matrix Anal. Appl., 28(2):360-385, 2006. P. Sonneveld, J. J. I. M. van Kan, X. Huang, and C. W. Oosterlee. Nonnegative matrix factorization of a correlation matrix. Linear Algebra Appl., 431, 2009.

47 References V. A. Vandendorpe, N.-D. Ho, S. Vanduffel, and P. Van Dooren. On the parameterization of the CreditRisk+ model for estimating credit portfolio risk. Insurance: Mathematics and Economics, 42(2), 2008.
