Template for parameter estimation with Matlab Optimization Toolbox; including dynamic systems




1. Curve fitting

A weighted least squares fit for a model which is less complicated than the system that generated the data (a case of so-called "undermodeling").

System:

    y = exp(-3x),    0 <= x <= 1
    y = (x-1)^2,     1 < x <= 2                      (1.1)

Data (noise model):

    ydata = y + e,    e ~ N(0, 0.05^2)               (1.2)

with additive zero-mean white noise (ZMWN) with a standard deviation of 0.05.

Model to fit the data:

    yfit = exp(a(x + b))                             (1.3)

with parameters to be fitted θ = [a, b]. The model can be forced to fit the second part of the data (1 < x < 2) by assigning different weights to the data.

Error function:

    e = w1*(ydata - yfit(x, θ)),    0 <= x <= 1
    e = w2*(ydata - yfit(x, θ)),    1 < x <= 2       (1.4)

with w1 = 1 and w2 = 10.

To implement and solve the weighted least squares fitting problem in Matlab, the function LSQNONLIN of the Optimization Toolbox is used. Optimization algorithms (in fact a minimization is performed) require the user to specify an initial guess θ0 for the parameters. Here we use θ0 = [0.1, -1]. The Matlab code in the box below can be copied and pasted into the Matlab editor and then saved (or push the Run button, which will save and automatically run the code).

function myfit
%myfit Weighted least squares fit

%% create the first half of the data
xdata1 = 0:.01:1;
ydata1 = exp(-3*xdata1) + randn(size(xdata1))*.05;
weight1 = ones(size(xdata1))*1;

%% create the second half of the data
% use a different function and with higher weights
xdata2 = 1:.01:2;
ydata2 = (xdata2-1).^2 + randn(size(xdata2))*.05;
weight2 = ones(size(xdata2))*10;

NvR 1

%% combine the two data sets
xdata  = [xdata1 xdata2];
ydata  = [ydata1 ydata2];
weight = [weight1 weight2];

%% call LSQNONLIN
parameter_hat = lsqnonlin(@mycurve,[.1 -1],[],[],[],xdata,ydata)

%% plot the original data and fitted function
plot(xdata,ydata,'b.')
hold on
fitted = exp(parameter_hat(1).*(parameter_hat(2)+xdata));
plot(xdata,fitted,'r')
xlabel('x'); ylabel('y')
legend('data','Fit')

%% function that reports the error
function err = mycurve(parameter, real_x, real_y)
    fit = exp(parameter(1).*(real_x + parameter(2)));
    err = fit - real_y;
    % weight the error according to the WEIGHT vector
    err_weighted = err.*weight;
    err = err_weighted;
end

end

Executing this Matlab program results in:

parameter_hat =
    3.1001   -1.9679

[Figure: data (blue dots) and fitted curve (red line) versus x on 0 <= x <= 2; legend: Data, Fit.]

So

    yfit = exp(3.1001(x - 1.9679))

However, here we do not obtain any information about the accuracy of the estimate of θ. For example, we do not know how critically the fit depends on all digits in the parameter values returned by Matlab.

2. Parameter estimation for a dynamic model

In the second example we consider a dynamical system. If blood plasma and a tissue or organ of interest can be considered as connected compartments, then the following model can be used to describe tissue perfusion:

    dCe/dt = (K/ve)*(Ca - Ce)
    Ct = ve*Ce

in which the arterial plasma concentration Ca(t) is the input and the tissue concentration Ct(t) is the output. This linear differential equation model can be cast into two descriptions often used in systems and control engineering.

a) State-space format:

    x'(t) = A*x(t) + B*u(t)
    y(t)  = C*x(t) + D*u(t)

    u = Ca,  x = Ce,  y = Ct,  A = -K/ve,  B = K/ve,  C = ve,  D = 0

b) Transfer function:

    H(s) = Ct(s)/Ca(s) = K*ve/(ve*s + K)

The Control Toolbox from Matlab can be used to implement and simulate this model. The parameters θ = [K, ve] need to be estimated from clinical data. Tissue perfusion can be measured with Dynamic Contrast Enhanced MRI (DCE-MRI). A contrast agent is injected into the bloodstream, circulates through the body and diffuses into tissues and organs. The kinetics of contrast enhancement is measured in a voxel in the tissue of interest, which can be translated into a concentration. The following noise (error) model is assumed/proposed:

    y = Ct + e                                       (1.5)

with e ZMWN, and therefore the sum of squared errors is used as estimator. In this case the purpose of the model is not just to be able to describe the data: differences in the estimated parameter values have a biological meaning and might be used for clinical diagnosis and decisions. Hence it is critically important to assess the accuracy of the estimates. For this problem the estimator is the Maximum Likelihood Estimator (MLE) and it is possible to calculate the covariance

matrix of the parameter estimates, which quantifies the accuracy of the estimate. (The inverse of the covariance matrix is known as the Fisher Information Matrix.) In the subsequent Matlab code it is shown how the covariance matrix can be calculated from the outputs provided by the LSQNONLIN function. As initial guesses θ0 = [2e-3, 0.1] will be used.

function compartment
%compartment Estimating pharmacokinetic parameters using a dynamic
%2-compartment model
%Loads datafile pmpat010.txt.

%% Load data
datafile = 'pmpat010.txt';
data = load(datafile);
t  = data(1,:); %[s]
Ca = data(2,:); %[mM] arterial input function
Ct = data(3,:); %[mM] tissue response
figure; plot(t,Ca,'.b',t,Ct,'.r'); hold on
xlabel('Time [s]'); ylabel('[mM]')
legend('AIF','Ct')
N = length(Ct);

%% Simulate model with initial parameter values
K = 2e-3; ve = 0.1;
p = [K, ve];
[y,t] = compart_lsim(t,Ca,p);
plot(t,y,'k');

%% Estimate parameters and variances
optim_options = optimset('Display', 'iter',...
    ...%'TolFun', 1e-6,... %default: 1e-4
    ...%'TolX', 1e-6,...   %default: 1e-4
    'LevenbergMarquardt', 'on'); %default: 'off'
%optim_options = [];
p0 = [K, ve];
[p,resnorm,residual,exitflag,output,lambda,jacobian] = ...
    lsqnonlin(@compart_error, p0, [],[],optim_options, t,Ca,Ct);
disp(' ')
p

%Estimated parameter variance
Jacobian = full(jacobian); %lsqnonlin returns the Jacobian as a sparse matrix
varp = resnorm*inv(Jacobian'*Jacobian)/N
stdp = sqrt(diag(varp)); %standard deviation is square root of variance
stdp = 100*stdp'./p;     %[%]
disp([' K: ', num2str(p(1)), ' +/- ', num2str(stdp(1)), '%'])
disp([' ve: ', num2str(p(2)), ' +/- ', num2str(stdp(2)), '%']);

%% Simulate estimated model
[y,t] = compart_lsim(t,Ca,p);
figure;
subplot(211); plot(t,Ct,'.r',t,y,'b');
xlabel('Time [s]'); ylabel('[mM]')
legend('data','model')

xi = Ct(:)-y(:); %same as residual from lsqnonlin
subplot(212); plot(t,xi)
xlabel('Time [s]'); legend('Residuals \xi')
assen = axis;

%% function to simulate compartment model
function [y,t] = compart_lsim(t,Ca,p)
    K = p(1); ve = p(2);
    num = K*ve;
    den = [ve, K];
    sys = tf(num,den);
    [y,t] = lsim(sys,Ca,t);
end

%% function to calculate MLE error
function xi = compart_error(p, t,Ca,Ct)
    [y,t] = compart_lsim(t,Ca,p);
    xi = Ct(:)-y(:); %make sure both vectors are columns
    %figure(1); plot(t,Ct,'.',t,y,'g'); hold off; drawnow %uncomment this line
    %to show the data fit for each optimization iteration (slows down execution)
end

end %main function

When the program is executed, first a figure is returned with the input data (concentration in blood plasma, AIF, in blue), the output data (tissue compartment, Ct, in red) and the model simulation given the initial parameter values θ0 (solid black line).

[Figure: AIF and Ct concentrations [mM] versus time [s] over 0-600 s, with the initial model simulation as a solid black line.]

Secondly, the results of the parameter estimation (vector of estimated parameters, estimated parameter covariance matrix, and the parameter estimates with their relative standard deviation, in %):

p =

    0.0097    0.2018

varp =

   1.0e-005 *

    0.0133   -0.0025
   -0.0025    0.2574

 K: 0.0096992 +/- 3.764%
 ve: 0.20176 +/- 0.79515%

Finally, a figure with a plot of the identified model and the data (top panel), and a plot of the difference between model and data (residuals, bottom).

[Figure: top panel: data and model output [mM] versus time [s]; bottom panel: residuals versus time [s].]
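The covariance computation varp = resnorm*inv(J'*J)/N is not specific to Matlab. Since the clinical data file pmpat010.txt is not reproduced here, the sketch below repeats the whole experiment on synthetic data in Python: a hypothetical exponential bolus serves as AIF, the compartment model is stepped exactly for a piecewise-constant input, and K and ve are estimated with scipy.optimize.least_squares. The AIF shape, noise level, and parameter bounds are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(t, Ca, K, ve):
    # Simulate dCe/dt = (K/ve)*(Ca - Ce), Ct = ve*Ce, using the exact
    # discrete step for a piecewise-constant input (stable for K, ve > 0)
    Ce = np.zeros_like(t)
    for i in range(1, t.size):
        a = np.exp(-(K / ve) * (t[i] - t[i - 1]))
        Ce[i] = a * Ce[i - 1] + (1 - a) * Ca[i - 1]
    return ve * Ce

# Hypothetical experiment in place of the clinical data
t = np.arange(0.0, 600.0, 1.0)           # [s]
Ca = 30.0 * np.exp(-t / 60.0)            # assumed exponential bolus AIF [mM]
rng = np.random.default_rng(1)
Ct = simulate(t, Ca, K=0.0097, ve=0.2) + 0.05 * rng.standard_normal(t.size)

def resid(p):
    K, ve = p                            # error model (1.5): e = y - Ct
    return simulate(t, Ca, K, ve) - Ct

sol = least_squares(resid, [2e-3, 0.1],  # same initial guess theta0
                    bounds=([1e-5, 1e-3], [0.1, 1.0]))
K_hat, ve_hat = sol.x

# Covariance of the estimates, as in the MATLAB code:
# varp = resnorm * inv(J'*J) / N
J = sol.jac
resnorm = np.sum(sol.fun ** 2)
varp = resnorm * np.linalg.inv(J.T @ J) / t.size
stdp = np.sqrt(np.diag(varp))            # absolute standard deviations
```

The diagonal of varp plays the same role as in the Matlab output above: its square roots give the standard deviations of K and ve, which can be reported relative to the estimates in percent.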

3. Template

Finally, a template is provided to estimate a subset of the parameters in a model (some parameters are assumed to be known and therefore are fixed), where the model is composed of a set of coupled first-order nonlinear differential equations (simulated with one of the Matlab ODE solvers).

Template for Nonlinear Least Squares estimation and Fisher Information Matrix:

%% Load data
%for example from Excel
data = xlsread(...
%obtain / define experimental time vector
texp =

%% Simulation time and input
tsim = texp; %not necessarily the same
%Input matrix: 1st column time (can be different from experimental time),
%2nd column etc. the corresponding input values
tu = [texp, u1, u2,...

%% Parameters and initial conditions of state variables
k1 = ...
%assign parameters to a vector with known (fixed) parameters and a vector
%of parameters to be estimated
pest = [...
pfix = [...
%total parameter vector: p = [pfix pest]
%the same for the initial conditions
x0est = [...
x0fix = [...
%total initial conditions: x0 = [x0fix x0est]
%total vector of quantities to be estimated: px0est = [pest x0est]

%% Optimization algorithm
%Settings lsqnonlin
lb = zeros(size(px0est));
ub = [];
options = optimset('TolFun', 1e-4,...          %default: 1e-4
    'TolX', 1e-5,...                           %default: 1e-4
    'LevenbergMarquardt', 'on',...             %default: 'off'
    'LargeScale', 'on');                       %default: 'on'
%LSQNONLIN: objective function should return the model error vector
[px0est,resnorm,residual,exitflag,output,lambda,jacobian] = ...
    lsqnonlin(@objfnc,px0est,lb,ub,options,...
    texp,data,tu,pfix,x0fix);

%% (estimated) Precision:
Jacobian = full(jacobian); %lsqnonlin returns the Jacobian as a sparse matrix
%Parameter covariance matrix (inverse of Fisher Information Matrix)
varp = resnorm*inv(Jacobian'*Jacobian)/length(texp);
%Standard deviation is the square root of the variance
stdp = sqrt(diag(varp));

%% Reconstruct parameter and initial condition vectors
pest = px0est(.:.)
p = [pfix pest];
x0est = px0est(.:.)
x0 = [x0fix x0est];

%% Simulation with estimated parameters
ode_options = [];
[t,x] = ode15s(@odefnc,tsim,x0,ode_options, tu,p);

--

%Objective / cost function for least squares
function e = objfnc(px0est, texp, data, tu, pfix, x0fix)
% e: vector of model residuals
%Reconstruct parameter and initial condition vectors
pest = px0est(.:.)
p = [pfix pest];
x0est = px0est(.:.)
x0 = [x0fix x0est];

%% Simulation
ode_options = [];
[t,x] = ode15s(@odefnc,texp,x0,ode_options, tu,p);
%Model output(s)
y = x(:,1);

%% Model residuals
e = data - y;

--

%Model in state-space format (system of coupled 1st-order ODEs)
function dx = odefnc(t, x, tu, p)
% t : current time
% x : state values at time t
% tu: matrix with time points (1st column) and corresponding input signal
%     values (remaining columns)
% p : parameters
% dx: new state derivatives (column vector)

%calculate input u(t) by interpolation of tu
u = interp1(tu(:,1), tu(:,2), t);

%ODEs
dx1 = ...

%collect derivatives in column vector
dx = [dx1; dx2; ...];