Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering
1 Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering. Department of Industrial Engineering and Management Sciences, Northwestern University. September 15, 2014.
2 Outline. 1 Simulation Metamodeling: introduction and overview. 2 Multi-Level Monte Carlo Metamodeling, with Imry Rosenbaum, http://users.iems.northwestern.edu/~staum/mlmcm.pdf. 3 Generalized Integrated Brownian Fields for Simulation Metamodeling, with Peter Salemi and Barry L. Nelson.
3 MCQMC / IBC Application Domain. Industrial Engineering & Operations Research: using math to analyze systems and improve decisions. Stochastic simulation: production, logistics, financial, ... Integration: $\mu = E[Y] = \int_\Omega Y(\omega)\,d\omega$. Parametric integration: approximate $\mu$ defined by $\mu(x) = E[Y(x)]$. Optimization: $\min\{\mu(x) : x \in X\}$.
4 What is Stochastic Simulation Metamodeling? Stochastic simulation model example: a fuel injector production line. System performance measure $\mu(x) = E[Y(x)]$, where $x$ is the design of the production line and $Y(x)$ is the number of fuel injectors produced. Simulating each scenario (20 replications) takes 8 hours. Stochastic simulation metamodeling: given simulation output $\bar Y(x_i)$ at $x_i$, $i = 1, \ldots, n$, predict $\mu(x)$ by $\hat\mu(x)$, even without simulating at $x$; $\hat\mu(x)$ is usually a weighted average of $\bar Y(x_1), \ldots, \bar Y(x_n)$.
5 Overview of Multi-Level Monte Carlo (MLMC). Error in stochastic simulation metamodeling: the prediction is $\hat\mu(x) = \sum_{i=1}^{k} w_i(x)\,\bar Y(x_i)$; its variance $\mathrm{Var}[\hat\mu(x)]$ is caused by the variance of the simulation output, and its interpolation error (bias) is $E[\hat\mu(x)] - \mu(x)$.
6 Main Idea of Multi-Level Monte Carlo. Ordinary Monte Carlo: to reduce variance, a large number $n$ of replications per simulation run (design point); to reduce bias, a large number $k$ of design points (fine grid); hence very large computational effort $kn$. Multi-Level Monte Carlo: to reduce variance, coarser grids with many replications each; to reduce bias, finer grids with few replications each; hence less computational effort / a better convergence rate. The telescoping decomposition shown below makes this trade-off concrete.
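The telescoping sum that underlies this idea is not written out on the slide; in the standard form (following Heinrich and Giles) it is

$$\hat\mu_L(x) = \hat\mu_0(x) + \sum_{l=1}^{L} \big(\hat\mu_l(x) - \hat\mu_{l-1}(x)\big),$$

where $\hat\mu_l$ is the metamodel built on the level-$l$ grid: the cheap coarse term $\hat\mu_0$ carries most of the variance and can be replicated many times, while each refinement $\hat\mu_l - \hat\mu_{l-1}$ has small variance and needs only a few replications on the expensive fine grids.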
7 Our Contributions. Theoretical: mix and match or expand (from Heinrich's papers); derive desired conclusions under desired assumptions to suit IE goals and applications. Practical: algorithm design (based on Giles). Experimental: show how much MLMC speeds up realistic examples in IE.
8 Heinrich (2001): MLMC for Parametric Integration. Assumptions: approximate $\mu$ given by $\mu(x) = \int_\Omega Y(x, \omega)\,d\omega$ over $x \in X$; $X \subset \mathbb{R}^d$ and $\Omega \subset \mathbb{R}^{d_2}$ are bounded and open with Lipschitz boundary; with respect to $x$, $Y$ has weak derivatives up to $r$th order; $Y$ and its weak derivatives are $L_q$-integrable in $(x, \omega)$; Sobolev embedding condition: $r/d > 1/q$. Measure error as $\big(\int_\Omega \|\hat\mu - \mu\|_q^p\,d\omega\big)^{1/p}$, where $p = \min\{2, q\}$. Conclusion: there is an MLMC method with the optimal rate. MLMC attains the best rate of convergence in $C$, the number of evaluations of $Y$. The error bound is proportional to $C^{-r/d}$ if $r/d < 1 - 1/p$; $C^{1/p - 1}\log C$ if $r/d = 1 - 1/p$; $C^{1/p - 1}$ if $r/d > 1 - 1/p$.
9 Assumptions. Smoothness: assume $r = 1$. Stock option: $Y(x, \omega) = \max\{x R(\omega) - K, 0\}$. Queueing: waiting time $W_{n+1} = \max\{W_n + B_n - A_{n+1}, 0\}$. Inventory: $S_n = \min\{I_n + P_n, D_n\}$, $I_{n+1} = I_n - S_n$. Parameter domain: assume $X \subset \mathbb{R}^d$ is compact (not open); cf. Heinrich and Sindambiwe (1999), Daun and Heinrich (2014). If $X$ were open, we would have to extrapolate; there is no need to approximate unbounded $\mu$ near a boundary of $X$. Domain of integration: $\Omega \subset \mathbb{R}^{d_2}$ is not important; $d_2$ does not appear in the theorem.
10 Changing Perspective. Measure of error: use $p = q = 2$ to get the Root Mean Integrated Squared Error, $\big(\int_\Omega \int_X (\hat\mu(x) - \mu(x))^2\,dx\,d\omega\big)^{1/2}$.
11 Changing Perspective (cont.). Sobolev embedding criterion with $r = 1$, $q = 2$: $r/d > 1/q$ becomes $1/d > 1/2$, i.e. $d = 1$!??
12 Changing Perspective (cont.). Why we don't need the Sobolev embedding condition: assume the domain $X$ is compact; assume $Y(\cdot, \omega)$ is (almost surely) Lipschitz continuous; conclude $Y(\cdot, \omega)$ is (almost surely) bounded.
13 Our Assumptions. On the stochastic simulation metamodeling problem: $X \subset \mathbb{R}^d$ is compact; $Y(x)$ has finite variance for all $x \in X$; $|Y(x, \omega) - Y(x', \omega)| \le \kappa(\omega)\,\|x - x'\|$ a.s., with $E[\kappa^2] < \infty$. On the approximation method and MLMC design: $\hat\mu(x) = \sum_{i=1}^{N} w_i(x)\,\bar Y(x_i)$ where each $w_i(x) \ge 0$; total weight on points $x_i$ far from $x$ gets close to 0; total weight on points $x_i$ near $x$ gets close to 1; the thresholds for "far"/"near" and "close to" are $O(N^{-1/(2\phi)})$ as the number $N$ of points increases. Examples: piecewise linear interpolation on a grid; nearest neighbors, Shepard's method, kernel smoothing. A quick numerical check of the Lipschitz condition for the stock-option example is sketched below.
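As a concrete illustration (not from the talk), the stock-option payoff from slide 9 satisfies this Lipschitz condition with $\kappa(\omega) = R(\omega)$, since $|\max\{a,0\} - \max\{b,0\}| \le |a - b|$. A minimal sketch, assuming a hypothetical lognormal return distribution:

```python
import numpy as np

# Sketch: Y(x, w) = max(x*R(w) - K, 0) is Lipschitz in x with kappa(w) = R(w).
# The lognormal distribution for R is an assumption for illustration only.
rng = np.random.default_rng(0)
K = 1.0
R = rng.lognormal(mean=0.0, sigma=0.2, size=100_000)

def Y(x, R):
    return np.maximum(x * R - K, 0.0)

x1, x2 = 1.1, 1.3
# |Y(x1, w) - Y(x2, w)| <= R(w) * |x1 - x2| holds pathwise
assert np.all(np.abs(Y(x1, R) - Y(x2, R)) <= R * abs(x1 - x2) + 1e-12)
print("E[kappa^2] =", np.mean(R ** 2))  # finite, as the assumptions require
```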
14 Approximation Method Used in Examples. Kernel smoothing: $\hat\mu(x) = \sum_{i=1}^{N} w_i(x)\,\bar Y(x_i)$, where weight $w_i(x)$ is 0 if $x_i$ is outside the cell containing $x$, and otherwise proportional to $\exp(-\|x - x_i\|)$; the weights are normalized to sum to 1. A sketch of this predictor appears below.
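A minimal Python sketch of this kernel smoother, assuming axis-aligned grid cells of a given width and Euclidean distance (details the slide leaves open):

```python
import numpy as np

def kernel_smoother(x, design_pts, Ybar, cell_width):
    """muhat(x) = sum_i w_i(x) * Ybar(x_i) with the weights described above."""
    # w_i(x) = 0 unless x_i lies in the grid cell containing x ...
    same_cell = np.all(np.floor(x / cell_width) == np.floor(design_pts / cell_width),
                       axis=1)
    # ... otherwise proportional to exp(-||x - x_i||)
    dist = np.linalg.norm(design_pts - x, axis=1)
    w = np.where(same_cell, np.exp(-dist), 0.0)
    if w.sum() == 0.0:             # guard: no design point shares the cell with x
        w[np.argmin(dist)] = 1.0
    return (w / w.sum()) @ Ybar    # normalize the weights to sum to 1

# Hypothetical usage on three design points in [0, 1]^2
pts = np.array([[0.2, 0.3], [0.4, 0.1], [0.7, 0.9]])
print(kernel_smoother(np.array([0.3, 0.2]), pts, np.array([1.0, 2.0, 3.0]), 0.5))
```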
15 Our Conclusions. MLMC performance: as the number $N$ of points used in a level increases, the errors due to bias and refinement variance are like $O(N^{-1/\phi})$. Example: nearest-neighbor approximation on a grid, $\phi = d/2$. Computational complexity (based on Giles 2013): to attain RMISE $< \epsilon$, the required number of evaluations of $Y$ is $O(\epsilon^{-2(1+\phi)})$ for standard Monte Carlo, while for MLMC it is $O(\epsilon^{-2\phi})$ if $\phi > 1$; $O((\epsilon^{-1}\log \epsilon^{-1})^2)$ if $\phi = 1$; $O(\epsilon^{-2})$ if $\phi < 1$.
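To make the rates concrete (a worked comparison, not from the slides): for the nearest-neighbor example, $\phi = d/2$, so in $d = 2$ we get $\phi = 1$ and the costs are $O(\epsilon^{-4})$ for standard Monte Carlo versus $O((\epsilon^{-1}\log \epsilon^{-1})^2)$ for MLMC; in $d = 3$, $\phi = 3/2$ and the comparison is $O(\epsilon^{-5})$ versus $O(\epsilon^{-3})$, a saving of order $\epsilon^{-2}$.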
16 Sketch of Algorithm (based on Giles 2008). Goal: add levels until the target RMISE $< \epsilon$ is achieved.
1 INITIALIZE level $l = 0$.
2 SIMULATE at level $l$: run the level-$l$ simulation experiment with $M_0$ replications; observe the sample variance of the simulation output; choose the number of replications $M_l$ to control variance, running more replications if needed.
3 TEST CONVERGENCE: use Monte Carlo to estimate the size of the refinement $\Delta\hat\mu_l$, $\int_X (\Delta\hat\mu_l(x))^2\,dx$; if the refinements are too large compared to the target RMISE, increment $l$ and return to step 2.
4 CLEAN UP: finalize the numbers of replications $M_0, \ldots, M_l$ to control variance; run more replications at each level if needed.
A schematic rendering of this loop follows.
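A schematic Python rendering of these four steps; `simulate_level` and the variance-balancing rule are placeholders of my own, so treat this as a sketch of the control flow, not the authors' algorithm:

```python
import numpy as np

def mlmc_metamodel(simulate_level, eps, M0=100):
    """Schematic MLMC level-adding loop (after Giles 2008); a sketch only.

    simulate_level(l, M) -> (M, n) array: M replications of the level-l
    refinement Delta muhat_l evaluated on a fixed grid of n prediction points
    (level 0 returns the coarse metamodel itself).
    """
    l, levels = 0, []
    while True:
        Y = simulate_level(l, M0)                    # pilot replications at level l
        var_l = np.mean(np.var(Y, axis=0))           # integrated variance estimate
        M_l = max(M0, int(np.ceil(2 * var_l / eps ** 2)))  # crude variance control
        if M_l > M0:
            Y = np.vstack([Y, simulate_level(l, M_l - M0)])
        levels.append(np.mean(Y, axis=0))            # sample-mean refinement surface
        # TEST CONVERGENCE: Monte Carlo estimate of int_X (Delta muhat_l(x))^2 dx,
        # compared with the target RMISE
        if l > 0 and np.mean(levels[-1] ** 2) < eps ** 2 / 2:
            break
        l += 1
    # CLEAN UP (re-balancing M_0, ..., M_l across levels) is omitted in this sketch
    return sum(levels)                               # telescoping sum over levels
```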
17 Asian Option Example, d = 3 MLMC up to 150 times better than standard Monte Carlo
18 Inventory System Example, d = 2 MLMC was times better than standard Monte Carlo
19 Conclusion on Multi-Level Monte Carlo. Celebration: Multi-Level Monte Carlo works for typical IE stochastic simulation metamodeling too! Future research: handle discontinuities in simulation output; combine with good experiment designs (grids are not good in high dimension).
20 Introduction: Generalized Integrated Brownian Field. Kriging / interpolating splines: pretend $\mu$ is a realization of a Gaussian random field $M$ with mean function $m$ and covariance function $\sigma^2$. Kriging predictor: $\hat\mu(x) = m(x) + \sigma^2(x)^\top \Sigma^{-1}(\bar Y - m) = m(x) + \sum_i \beta_i\,\sigma^2(x, x_i)$, where $\sigma^2(x)$ is a vector with $i$th element $\sigma^2(x, x_i)$, $\Sigma$ is a matrix with $(i,j)$th element $\sigma^2(x_i, x_j)$, and $\bar Y - m$ is a vector with $i$th element $\bar Y(x_i) - m(x_i)$. Stochastic kriging / smoothing splines: $\hat\mu(x) = m(x) + \sigma^2(x)^\top(\Sigma + C)^{-1}(\bar Y - m) = m(x) + \sum_i \beta_i\,\sigma^2(x, x_i)$, where $C$ is the covariance matrix of the noise, estimated from replications. A sketch of the predictor follows.
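The two predictors differ only in whether the noise covariance $C$ is added; a minimal sketch, with the covariance function and mean left as caller-supplied placeholders (my rendering, not code from the talk):

```python
import numpy as np

def kriging_predict(x, X, Ybar, m, cov, C=None):
    """muhat(x) = m(x) + sigma2(x)' (Sigma + C)^{-1} (Ybar - m); C=None gives
    interpolating kriging, C = noise covariance gives stochastic kriging."""
    Sigma = cov(X, X)                            # Sigma[i, j] = sigma^2(x_i, x_j)
    if C is not None:
        Sigma = Sigma + C                        # add simulation-noise covariance
    s = cov(x[None, :], X).ravel()               # s[i] = sigma^2(x, x_i)
    beta = np.linalg.solve(Sigma, Ybar - m(X))   # beta = (Sigma + C)^{-1} (Ybar - m)
    return m(x[None, :])[0] + s @ beta

# Hypothetical usage with a Gaussian covariance and zero mean
cov = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
m = lambda A: np.zeros(len(A))
X = np.array([[0.1], [0.5], [0.9]]); Ybar = np.array([1.0, 2.0, 1.5])
print(kriging_predict(np.array([0.4]), X, Ybar, m, cov))
```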
21 Radial Basis Functions vs. Integrated Brownian Field. [Figures: radial basis functions; basis functions from the r-fold integrated Brownian field for r = 0, r = 1, r = 2.]
22 Response Surfaces in IE Stochastic Simulation. [Figures: credit risk and inventory response surfaces.]
23 r-integrated Brownian Field B r Covariance function / reproducing kernel σ 2 (x, y) = d i=1 1 (r!) (x i u i ) r +(y i u i ) r + du i Inner product f, g = (f ([r r]) (u))(g ([r r]) (u)) du (0,1) d Space Tensor product of Sobolev Hilbert space H r (0, 1) with boundary conditions f (j) (0) = 0 for j = 0,..., r What s missing? polynomials of degree r
24 Removing Boundary Conditions: d = 1. Generalized integrated Brownian motion: $X_r(x) = \sum_{k=0}^{r} \sqrt{\theta_k}\, Z_k\, \frac{x^k}{k!} + \sqrt{\theta_{r+1}}\, B_r(x)$. Covariance function / reproducing kernel: $\sigma^2(x, y) = \sum_{k=0}^{r} \theta_k\, \frac{x^k y^k}{(k!)^2} + \theta_{r+1} \int_0^1 \frac{(x - u)_+^r\,(y - u)_+^r}{(r!)^2}\,du$. Sobolev space $H^{r+1}(0, 1)$, no boundary conditions. Inner product: $\langle f, g\rangle = \sum_{k=0}^{r} \frac{1}{\theta_k}\, f^{(k)}(0)\, g^{(k)}(0) + \frac{1}{\theta_{r+1}} \int_0^1 f^{(r+1)}(u)\, g^{(r+1)}(u)\,du$. A numerical sketch of this kernel follows.
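A numerical sketch of this one-dimensional covariance, using quadrature in place of the closed form and assuming $x, y \in [0, 1]$ (my rendering, not the authors' code):

```python
from math import factorial
from scipy.integrate import quad

def gibm_cov(x, y, theta, r):
    """sigma^2(x, y) = sum_{k=0}^{r} theta[k] x^k y^k / (k!)^2
                       + theta[r+1] * int_0^1 (x-u)_+^r (y-u)_+^r / (r!)^2 du."""
    poly = sum(theta[k] * (x * y) ** k / factorial(k) ** 2 for k in range(r + 1))
    # (x - u)_+^r with an explicit indicator, so that r = 0 gives min(x, y)
    integrand = lambda u: ((x > u) * (x - u) ** r) * ((y > u) * (y - u) ** r) \
                          / factorial(r) ** 2
    integral, _ = quad(integrand, 0.0, 1.0, points=[min(x, y), max(x, y)])
    return poly + theta[r + 1] * integral
```

For example, `gibm_cov(0.3, 0.7, [1.0, 1.0], 0)` returns $1 + \min(0.3, 0.7) = 1.3$, the Brownian-motion kernel plus a constant term.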
25 Multidimensional, Without Boundary Conditions. Tensor-product RKHS with weights. Example of the reproducing kernel for $d = 2$, $r = 1$: $K(x, y) = \theta_{00} + \theta_{10}\, x_1 y_1 + \theta_{20}\, k(x_1, y_1) + \theta_{01}\, x_2 y_2 + \theta_{02}\, k(x_2, y_2) + \theta_{11}\, x_1 x_2 y_1 y_2 + \theta_{12}\, x_1 y_1\, k(x_2, y_2) + \theta_{21}\, k(x_1, y_1)\, x_2 y_2 + \theta_{22}\, k(x_1, y_1)\, k(x_2, y_2)$, where $k(s, t) = \int_0^1 (s - u)_+ (t - u)_+\,du$ is the integrated-Brownian part. In general, one weight for each of $\prod_{i=1}^{d}(r_i + 2)$ subspaces.
26 Generalized Integrated Brownian Field. Covariance function / reproducing kernel: $\sigma^2(x, y) = \prod_{i=1}^{d} \left( \sum_{k=0}^{r_i} \theta_{i,k}\, \frac{x_i^k y_i^k}{(k!)^2} + \theta_{i, r_i + 1} \int_0^1 \frac{(x_i - u_i)_+^{r_i}\,(y_i - u_i)_+^{r_i}}{(r_i!)^2}\,du_i \right)$. In general, the number of weights is $\prod_{i=1}^{d}(r_i + 2)$. This product form is sketched in code below.
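Continuing the sketch above, the $d$-dimensional kernel simply multiplies one-dimensional factors of exactly the `gibm_cov` form (reusing the helper from the previous block):

```python
from math import prod

def gibf_cov(x, y, thetas, rs):
    """sigma^2(x, y) = product over coordinates i of the 1-d GIBM covariance,
    with per-coordinate weights thetas[i] (length rs[i] + 2) and order rs[i]."""
    return prod(gibm_cov(xi, yi, th, r)
                for xi, yi, th, r in zip(x, y, thetas, rs))

# Hypothetical usage for d = 2, r = (1, 1): nine weights in total
print(gibf_cov([0.3, 0.6], [0.7, 0.2],
               thetas=[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], rs=[1, 1]))
```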
27 Our Contributions. A more parsimonious parametrization makes maximum likelihood estimation easier and enables an MLE search over $r_1, \ldots, r_d$. GIBF has the Markov property: proof for $d = 1$, conjecture for $d > 1$. IE simulation examples: stochastic and deterministic simulation, standard and nonstandard information.
28 Credit Risk Example, d = 2. Experiment design: 63 Sobol points, predictions in a smaller square. [Table: factor by which MISE decreased using the (1,1)-GIBF, by number of replications and noise level (none / low / medium), with and without gradient estimates.] [Figures: credit risk surface; predictions with a Gaussian covariance vs. the (1,1)-GIBF.]
29 Conclusion on Generalized Integrated Brownian Field. Emancipating simulation metamodeling from geostatistics: a new covariance function for kriging, designed for simulation metamodeling in engineering. Superior practical performance: times better than the Gaussian covariance function in 2-6 dimensional examples, with or without gradient information.
30 Thank You!