Multivariate Data Analysis and Modelling Basic Course: Contents





Multivariate Data Analysis and Modelling: Basic MVDA
1. Introduction
2. Principal Component Analysis (PCA)
3. Applications of PCA
4. Partial Least Squares Projections to Latent Structures (PLS)
5. Multivariate Characterisation
6. Multivariate Calibration
7. Process Data Analysis
8. Conclusions

Multivariate Data Analysis and Modelling: Additional Topics
11. Additional Topics I: Pre-processing Methods
12. Additional Topics II: Method Extensions and Special Cases
13. Additional Topics III: Process Applications
14. Exercises

Table of Contents

Basic MVDA
1. Introduction
2. Principal Component Analysis, PCA
3. Applications of PCA
4. Partial Least Squares Projections to Latent Structures, PLS
5. Multivariate Characterisation
6. Multivariate Calibration
7. Process Data Analysis
8. Conclusions

Additional Topics
11. Additional Topics I, Pre-processing Methods: Scaling and Centering; Transformation and Expansion of Data; Signal Correction and Compression
12. Additional Topics II, Method Extensions and Special Cases: PLS and One Response Variable; PLS Discriminant Analysis (PLS-DA)
13. Additional Topics III, Process Applications: Multivariate Statistical Process Control (MSPC); Batch Statistical Process Control (BSPC)

Exercises
Getting Started: FOODS, IRIS
Easy PCA: ARCHAEOLOGY, METABONOMICS
Easy PLS: LOWARP, USDVOLVO

Quality Control: THICKNESS, CUPRUM
Multivariate Characterisation: SURFACTANT, PULP
Multivariate Calibration: SUGAR, NIR_CHIP, CELLULOSE
Process Applications: SOVRING, PROC1A, Baker's Yeast
Flow-chart for MVDA in SIMCA
References

Multivariate Data Analysis and Modelling Basic Course
Chapter 1: Introduction

Contents: Three Types of Problems; Introduction; Difficulties with Complex R&D Problems; Data Tables and Notation; Example; Philosophy of Modelling; Principle of Projections

Three Types of Problems

Multivariate Data Analysis provides tools for:
- Overview & summary: structure, similarity/dissimilarity, outliers
- Classification: recognising groups
- Relationships between Y and X: not one x/y at a time, but all x's and y's simultaneously; finding how the x-variables affect the responses; finding how the x- and y-variables correlate with each other; how to set X to get the best profile of Y

Problem I: Overview & Summary

Example: FOODS. Problem: to investigate the food consumption pattern in Europe, the relative amounts of twenty common provisions were collected for 16 countries. We would like to examine the similarity/dissimilarity between the countries based on these data. (Data: table of relative consumption figures.)

Problem II: Classification

Example: IRIS (a classical data set in statistics). Data: the table contains petal and sepal lengths and widths of 50 specimens each of Iris setosa, Iris versicolor and Iris virginica. The data set was introduced by the great statistician Fisher as early as 1936 and is commonly known as "the Fisher Iris data". Goal: to find a multivariate model that classifies a new Iris specimen into the correct class according to its petal and sepal lengths and widths.

Problem III: Quantification & Prediction

Example: SUGAR, multivariate calibration at a sugar production plant. Problem: in sugar production, two important product quality properties are the impurity and the colour of the sugar, but measuring these quality measures is laborious and time-consuming. It was desired to replace the conventional wet-chemistry measurements with rapid on-line fluorescence measurements. Data: a series of time points in the process, 571 X-variables, and the Y-variables (the quality measures).

Multivariate Data Analysis

Extracting information from data with many variables, using them all simultaneously.
Deals mainly with HOW to get information out of existing multivariate data.
Deals less with: how to structure the problem, which variables to measure, and how to collect data (design of experiments, DOE), which needs process knowledge.

Data

Data are measurements of interest on a process or system. Data are not information; they are collected so that information can be extracted. Important features of data: variability and type.

Variability

Any measurement and experiment is influenced by noise. Under stable conditions, any process or system varies around its mean and stays within "control limits" of about ±3 standard deviations, which cover roughly 99.7% of the observations (see the numerical sketch below).

Types of Data

What types of data for modelling and analysis are there?
- Univariate data: K = 1
- Bivariate data: K = 2
- Few-variate data: K < 5
- Multivariate data: K > 6
- Quantitative / qualitative
- Processes (continuous/batch)
- Time series (stationary/dynamic)
- Controlled/uncontrolled
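As a quick numerical check of the ±3 standard deviation statement above, here is a minimal numpy sketch (not part of the original course material); the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=50.0, scale=2.0, size=100_000)  # a stable process varying around its mean

mean, sd = x.mean(), x.std(ddof=1)
lcl, ucl = mean - 3 * sd, mean + 3 * sd            # control limits at +/- 3 standard deviations

inside = np.mean((x > lcl) & (x < ucl))
print(f"limits [{lcl:.1f}, {ucl:.1f}], fraction inside: {inside:.4f}")  # ~0.997
```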

Data with Many Variables (Multivariate)

- We are interested in many variables, K > 6: controlled variables (input or x-variables) and/or characteristics (output or y-variables)
- Most systems and processes are characterised by a multitude of variables, giving large data matrices of N observations by K variables
- Such a matrix carries K variable averages, K variable variances, and K·(K-1)/2 covariances; the covariances are very many and dominate when K > 6

Difficulties with Complex R&D Problems

- Dimensionality: K (number of X-variables) large; M (number of Y-variables) large; N large, medium, or small; the data table is short and fat, or square and very large
- Collinearity: the X's are often not independent, and neither are the Y's
- Only a few variables affect the system! Which? Correlation vs causality!
- Noise: individual measurements are noisy, with large variability
- Missing data

Correlation and Causality

Correlation or causation? Although two variables are correlated, this does not imply that one causes the other! (Figure: inhabitants of Oldenburg, in thousands, plotted against the number of storks, 1930-1936.)

Methods of Analysis

- COST approach: plot and evaluate one variable, or a pair of variables, at a time. OK 25 years ago (few variables).
- Classical statistics: find a relationship between a few of the X's and one Y at a time. OK 25 years ago (few and essentially uncorrelated variables).
- Multivariate analysis: model all the variables together to find the relationships between all the X's and all the Y's.

Risks with Traditional Statistical Methods

Comparing one group against another, one variable at a time, typically at the 95% confidence level, incurs:
- Type I errors (false positives: spurious results)
- Type II errors (false negatives: missed opportunities, the risk of not seeing the information)

With K variables, the risk of at least one spurious result is Risk = 1 - 0.95^K:

K      5     10    30    100
Risk   23%   40%   79%   99.4%

All Data Are Needed

In Shewhart's days (the 1930s) process engineers were lucky to have one measurement of product quality. Today we may get a very large number of quality measurements on each sample. Most outliers remain undetected with the use of classical SPC techniques, because these carry no covariance information.
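The risk figures in the table above follow directly from the formula; a one-liner to reproduce them (an illustration added here, not from the course):

```python
# Risk of at least one spurious "significant" result when testing K
# independent variables at the 95% confidence level: risk = 1 - 0.95**K.
for K in (5, 10, 30, 100):
    print(f"K = {K:3d}: risk = {1 - 0.95**K:.1%}")
# K =   5: risk = 22.6%
# K =  10: risk = 40.1%
# K =  30: risk = 78.5%
# K = 100: risk = 99.4%
```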

Fast and Correct Decision Making

(Figures: two univariate time-series charts of Variable 1 and Variable 2, each with ±3 SD limits, and a scatter plot of Variable 2 against Variable 1 in which the outliers become visible.) The outliers are not detected until you look at the combination of the variables. The information is found in the correlation pattern, not in the individual variables!

Classical Methods of Statistics (~1930)

Multiple regression, canonical correlation, linear discriminant analysis, analysis of variance, and all of maximum likelihood. Tables are long and lean (N >> K).

Assumptions and limitations:
- Independent X's (rank(X) = K), hence many cases and few variables
- Precise X's ("errors" only in Y)
- Regression analysis one Y at a time (independent Y's)
- No missing data

Chemistry, Biology, Engineering... (~2000)

- Experimental costs (+ ethics, regulations)
- The instrumental & electronics revolution
- Few cases (observations), many variables: short & fat data tables (K >> N)
- Projection methods: PCA, PLS, PLS-DA, PCR, ... Each "classical" method has a projection correspondence: chemometrics

Problem I: Overview

Example: FOODS. Problem: to investigate the food consumption pattern in Europe, the relative amounts of twenty common products are given for 16 countries. Perform a multivariate analysis (PCA) to overview the data!

FOODS: PC Model, Components 1 and 2

(Figures: t1/t2 score plot of the 16 countries and p1/p2 loading plot of the 20 provisions, model FOODS.M1, PCA-X for overview.)

FOODS: Distance to Model (DModX) after Two Components

DModX represents the unexplained variation (the residuals). Ireland is farthest away from the model plane, i.e., it displays the largest portion of unexplained variation. (Figure: DModX per country after two components, with the critical distance D-Crit(0.05).)

Projection Methods

We have a large data table of dimensions N x K (N observations, K variables).

Observations might be: analytical samples, compounds, experimental runs (trials), reactions, process time points, individuals, ...
Variables might be: from spectra (NMR, IR, NIR, UV, MS, X-ray, ...); from separations (HPLC, GC, TLC, electrophoresis); process variables (T, pH, P, flows, ...); others (curve forms, structure descriptors, thermodynamics, quantum mechanics, elemental compositions, ...)

The Principles of Projections

Each row in a data table corresponds to a point in K-space (illustrated here in 3-dimensional space with axes x1, x2, x3).

The Principles of Projections

We are looking for "windows" in K-space that can be used to overview the data and interpret their meaning: converting data tables to plots. The projection may be onto a line, a plane, or a hyperplane in K-space (the principal components PC1, PC2, ...).

Philosophy of Modelling

WHAT? Models are approximations of reality. Analogies: a toy train, a map, a mathematical function.
WHY? A model simplifies the study of reality: faster, simpler, cheaper.
HOW? Mathematical models:
- empirical: y = a + b·x + e
- semi-empirical: y = a + b·log(x) + e
- fundamental: Hψ = Eψ + e; pV = nRT + e

Models are founded on analogy and projection principles. All models are approximations of reality, and always founded on some assumptions!

Semi-Empirical Modelling: Taylor Series

In a limited interval Δx, any continuous and differentiable function f(x) can be arbitrarily well approximated by a polynomial, a Taylor series:

y = P(x) = b0 + b1·x + b2·x² + ... + bp·x^p + E(p+1)

For a given degree p, the approximation is better the smaller the interval Δx; for a given interval Δx, the approximation is better the higher the degree p. The above generalises to functions of many variables:

y = P(x1, x2, ...) + E = b0 + b1·x1 + b2·x2 + ... + b11·x1² + b22·x2² + ... + b12·x1·x2 + b13·x1·x3 + ... + E(p+1)

Basic Conceptual Projection Model

Any set of variables (X or Y) measured on a system or process with limited variation (observations are similar) can be approximated by a bilinear model. For example, a fictitious process with 3 latent variables generating 6 measured variables:

X = T·P' + E
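To make the bilinear model concrete, here is a small numpy sketch (added here for illustration; the course itself uses SIMCA). It builds a fictitious process like the one above, 3 latent variables driving 6 measured variables, and recovers T and P by truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(1)
T_true = rng.normal(size=(100, 3))                 # 3 latent variables drive the process
P_true = rng.normal(size=(6, 3))                   # their footprint on 6 measured variables
X = T_true @ P_true.T + 0.1 * rng.normal(size=(100, 6))

Xc = X - X.mean(axis=0)                            # mean-centre the table
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

A = 3                                              # number of components kept
T = U[:, :A] * s[:A]                               # scores T
P = Vt[:A].T                                       # loadings P
E = Xc - T @ P.T                                   # residuals E, so Xc = T P' + E

print("unexplained fraction:", (E**2).sum() / (Xc**2).sum())  # small: 3 components suffice
```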

Summary: Multivariate Analysis by Projections (PCA & PLS)

- Deals with the dimensionality problem: handles many variables and few observations (short and wide data tables) as well as few variables and many observations (long and lean data tables)
- Handles correlation; robust to noise in both X and Y
- Copes with missing data
- Separates regularities from noise: models X, models Y, models the relation between X and Y, and expresses the noise
- Extracts information from all data simultaneously; data are not the same as information
- Results are displayed graphically

Summary: Three Types of Problems

Any question to a data table has a projection-method solution:
I. Overview and summary (X): PCA, Principal Components Analysis
II. Classification (X): SIMCA (one PC model per class), PLS-DA (PLS discriminant analysis), MSPC (one class + outliers)
III. Relation between blocks of variables, X & Y: PLS analysis, PLS-DA, multiple regression

Multivariate Data Analysis and Modelling Basic Course
Chapter 2: Principal Component Analysis (PCA), the Overview of Data Tables

Contents: Principal Components Analysis (PCA); Notation; Scaling of Variables; Geometric Interpretation; Algebraic Solution; Example; Diagnostics (Outliers; Residuals; Model Complexity & Predictive Power, Cross-validation); Conclusions

Notation

The data table holds N observations (rows) and K variables (columns).
Observations: analytical samples, chemical compounds, trials (experimental runs), process time points, chemical reactions, biological individuals, etc.
Variables: from spectra (NMR, IR, UV, MS, ESCA, X-ray, ...), from separations (HPLC, GC, TLC, electrophoresis, trace elements, ...), process variables (T, P, pH, flows, ...), others (curves, theory, ...)

Notation

- N = number of observations
- K = number of variables
- A = number of principal components
- index i = 1, 2, 3, ..., N is used for observations
- index k = 1, 2, 3, ..., K is used for variables
- index a = 1, 2, 3, ..., A is used for principal components
- ws = scaling weights
- T = score matrix, t1, ..., tA; score vectors (column vectors)
- P' = loading matrix, p1', ..., pA'; loading vectors (row vectors)

Pre-treatment of Data: Transformations

If the data are not approximately normally distributed, a transformation may be needed to get a good model (a short sketch follows below). (Figures: histograms of the variable kNi in the CUPRUM data set; the raw data have a skew distribution, while the log-transformed data are much closer to a normal distribution.)

Pre-treatment of Data: Scaling

Problem: variables can have substantially different numerical ranges. Example: LOWARP, polymers characterised with regard to warp and strength. The warp responses (wrp1-wrp8) are small numbers, while the strength responses (st1-st6) are in the hundreds, so without scaling the strength variables would dominate the analysis. (Data: table of 17 polymers x 14 responses.)
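A minimal illustration of the log-transformation idea, using synthetic skewed data (not from the course data sets):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # skewed, like trace-impurity levels

x_log = np.log10(x)                                # log-transform pulls in the long tail

def skew(v):                                       # simple moment-based skewness
    return ((v - v.mean())**3).mean() / v.std()**3

print("skewness before:", round(skew(x), 2), "after:", round(skew(x_log), 2))
```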

Scaling, Example: Body Height / Body Weight

Data for 23 individuals (22 players + the referee at a football game): body height (m) and body weight (kg). (Data table: heights roughly 1.6-2.0 m, weights roughly 73-99 kg.) (Figures: weight vs height plotted with the same spread on both axes, and with the same scale on both axes. Right: the outlier is not so easy to spot!)

Scaling of Variables

Scaling defines/selects the length of the variable axes. Recommended: set the variability along each axis equal to one (unit variance).

Unit Variance Scaling (UV-scaling)

PCA is scale dependent: the variance of a variable determines its "importance". Drawback or asset? In UV-scaling, each measured value is multiplied by a scaling weight,

x'_ik = x_ik · ws_k,  with ws_k = 1/s_k (the inverse standard deviation of variable k),

so that after scaling every variable has s²(x_k) = 1 (variance = 1); see the sketch below. The scaling can be modified (an advanced topic).

PCA -- Geometric Interpretation, 1

For the matrix X we construct a space with K dimensions (of which we can see only three). Each variable has one coordinate axis, with its length determined by the scaling, usually unit variance.
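A small numpy sketch of UV-scaling (an illustration added here; the height/weight values are made-up placeholders in the spirit of the football example):

```python
import numpy as np

def uv_scale(X):
    """Mean-centre and scale each column to unit variance (weight ws_k = 1/s_k)."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    return (X - mean) / sd, mean, sd

# Height (m) and weight (kg) live on very different scales; after UV-scaling
# both variables contribute equally to the analysis.
X = np.array([[1.80, 86.0], [1.61, 74.0], [1.68, 73.0], [1.75, 84.0], [1.98, 96.0]])
Z, mean, sd = uv_scale(X)
print(Z.var(axis=0, ddof=1))   # -> [1. 1.]
```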

PCA -- Geometric Interpretation, 2

Each observation is represented by one point in K-space; hence, the data matrix X is a swarm of points in this space.

PCA -- Geometric Interpretation, 3

First we calculate the average of each variable. The vector of averages is a point in K-space. The average is then subtracted from each observation (mean-centering).

PCA -- Geometric Interpretation, 4

The mean-centering procedure corresponds to moving the coordinate system to the centre of the point swarm.

PCA -- Geometric Interpretation, 5

The first principal component (PC1) is the line in X-space that best approximates the data in the least-squares sense. The line goes through the average point, and each observation is projected onto it.

PCA -- Geometric Interpretation, 6

The second PC is represented by a line in X-space orthogonal to the first PC, also passing through the average point. The second PC improves the approximation of X as much as possible.

PCA -- Geometric Interpretation, 7

The two principal components form a plane in X-space. This plane is a window into the multidimensional space, which can be visualised graphically.

PCA -- Geometric Interpretation, 8

The observed values (yellow points) are approximated by their projections onto the model (blue points). The projected locations on the model (line, plane, or hyperplane) are given by the scores (t).

PCA -- Geometric Interpretation, 9

The direction of, for example, PC1 (p1) is given by the cosines of the angles α1, α2 and α3 between the component and the coordinate axes. These values indicate how the variables x1, x2 and x3 "load" into PC1; hence they are called loadings.

PCA: Overview of a Data Table (Data Set)

X is modelled as

X = 1·x̄' + T·P' + E   (x̄' is the row vector of variable averages)

- Each PC (score vector) is associated with a loading vector
- Scores (t) are coordinates in the (hyper)plane (columns in T)
- Loadings (p) define the orientation of the (hyper)plane (rows in P')
- DModX is the distance between an observation and the model plane (residual row SD)

Scores & Loadings

- The scores t_ia are new variables that summarise the old ones; they are sorted in descending importance (t1, t2, t3, ...). Typically 2-5 principal components are sufficient to summarise a data table well.
- The loadings p_ak express how the old variables are linearly combined to form the scores: the scores are linear combinations of the initial variables.
- The loadings are used to interpret the scores. They unravel the magnitude (large/small correlation) and the manner (positive/negative correlation) in which the variables contribute to the scores (principal components).
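As an illustration outside the original material (the course itself works in SIMCA), a minimal scikit-learn sketch of extracting scores and loadings from a UV-scaled table; the data here are random placeholders with the FOODS dimensions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(16, 20))               # e.g. 16 countries x 20 provisions (dummy data)

Z = StandardScaler().fit_transform(X)       # mean-centering and UV-scaling
pca = PCA(n_components=2).fit(Z)

T = pca.transform(Z)                        # scores: one (t1, t2) point per observation
P = pca.components_.T                       # loadings: one (p1, p2) point per variable
print("explained variation per PC:", pca.explained_variance_ratio_)
```

Plotting T gives the score plot (map of observations) and P the loading plot (map of variables), the two displays used throughout this chapter.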

Example: Overview (FOODS)

Problem I: Overview. To investigate the food consumption pattern in Europe, the relative amounts of twenty common products are given for 16 countries. Perform a multivariate analysis (PCA) to overview the data!

(Figures: t1/t2 score plot of the 16 countries and p1/p2 loading plot of the 20 provisions, model FOODS.M1, PCA-X for overview.)

Example: Overview (FOODS)

(Figures: t1/t3 score plot and p1/p3 loading plot of the same model, showing the third component.)

DModX shows the distance to the model plane. Ireland, which stood out after two components, is modelled well by the third component. (Figures: DModX per country after two components, A = 2, and after three components, A = 3, each with its critical distance D-Crit(0.05).)

How to Use PCA; Example: LOWARP

Problem: to develop a new polymer with a given profile of warp (low warp) and strength (high strength). The polymers consist of four constituents (ingredients): glas, crtp, mica, amtp. 17 polymers were made according to a mixture design, and 14 responses (wrp1-wrp8, st1-st6) were determined for each polymer.

Application: PCA on the LOWARP Responses

17 polymers, 14 responses; warp and strength measured. The 14 responses are summarised by two principal components (projected onto a plane): the first PC accounts for 49% of the sum of squares and the second for 27%. Conclusion: PC1 is interpreted as mainly warp, PC2 as mainly strength. (Figure: p1/p2 loading plot of the 14 responses, model lowarp.M1, PCA-Y.)

PCA - Diagnostics

- Observation diagnostics: strong and moderate outliers, groups, trends
- Variable diagnostics: correlation, contribution, which variables are well explained
- Model diagnostics: fit (R²), predictive ability (Q², cross-validation)

Observation Diagnostics

PCA can be used to unravel strong outliers and moderate outliers (as well as groupings and trends):
- Strong outliers are found in the scores. Detection tool: Hotelling's T², a method to establish the normal area in the score plot.
- Moderate outliers are found in the observation residuals. Detection tool: DModX.

Observation Diagnostics; Example

THICKNESS: a quality control scheme for polymer disk manufacturing. The objective was to produce disks of uniform thickness, within a given specification. Problems stemmed from small but expensive increases in the number of disks discarded. Nine thickness measurements (positions 1-9 on the disk) were taken on each disk produced.

Observation Diagnostics: Strong Outliers

Outliers are serious and interesting, and easy to find: strong outliers show up in score plots. (Figure: t1/t2 score plot of thicknes.M1, PCA for overview; observations 111 and 155 lie far outside the main cluster.)

Strong Outliers: Detection Tool, Hotelling's T²

Hotelling's T² is a multivariate generalisation of Student's t-test. It provides a tolerance region for the data in a two-dimensional score plot, e.g., t1/t2 (see the code sketch below). (Figures: t1/t2 score plots with tolerance ellipses for FOODS and THICKNESS.)

Observation Diagnostics: Moderate Outliers

Moderate outliers can be detected by inspecting the residual of each observation, i.e., its distance to the model plane:
- Residual observation variation: SSOX_i = Σ_k e_ik²
- Residual observation variance: SOX_i = SSOX_i / DF
- DModX, absolute distance: [SOX_i]^(1/2)
- DModX, normalised distance: [SOX_i / s²(E)]^(1/2), where s²(E) is the pooled residual variance of the model
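A sketch of the T² computation from a score matrix (added for illustration; note that the exact form of the critical limit varies between texts and implementations, and the one used here is only one common F-distribution-based convention):

```python
import numpy as np
from scipy import stats

def hotelling_t2(T):
    """T2_i = sum_a t_ia^2 / var(t_a), from a score matrix T of shape (N, A)."""
    return ((T / T.std(axis=0, ddof=1))**2).sum(axis=1)

def t2_limit(N, A, alpha=0.05):
    # One common F-distribution-based form of the critical limit.
    return A * (N - 1) / (N - A) * stats.f.ppf(1 - alpha, A, N - A)

rng = np.random.default_rng(4)
T = rng.normal(size=(100, 2))                       # dummy scores for 100 observations
n_out = int((hotelling_t2(T) > t2_limit(100, 2)).sum())
print(n_out, "observations outside the 95% tolerance ellipse")
```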

Moderate Outliers: Detection Tool, DModX

(Figures: DModX plots with the critical distance D-Crit(0.05); FOODS shows no moderate outliers, while THICKNESS shows one to four, e.g. observations 111 and 155.) The critical distance (DCritX) is derived from the approximate F-distribution of DModX. A sketch of the DModX computation follows below.

Interpretation of Variables

Variable correlations and contributions to the model can be seen in loading plots. (Figures: p1/p2 and p1/p3 loading plots for FOODS.)
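A numpy sketch of DModX as defined two slides above (an illustration, not SIMCA's exact code; degrees-of-freedom corrections vary between implementations):

```python
import numpy as np

def dmodx(E, A, normalised=True):
    """DModX from the residual matrix E = Xc - T @ P.T of shape (N, K),
    for a PCA model with A components; one common DF convention is used."""
    N, K = E.shape
    s_obs = np.sqrt((E**2).sum(axis=1) / (K - A))          # absolute distance per row
    if not normalised:
        return s_obs
    s0 = np.sqrt((E**2).sum() / ((N - A - 1) * (K - A)))   # pooled residual SD of the model
    return s_obs / s0                                      # compare with the critical distance
```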

Residuals of Variables

The residuals tell us the extent to which each variable is modelled (ranging from 0 to 1). The residual matrix E is pooled column-wise:
- SSVX, residual variable variation: Σ_i e_ik²
- SVX, residual variable variance: Σ_i e_ik² / DF
- R²VX(cum), explained variation: 1 - SSVX[A]/SSVX[0]
- R²VX,adj(cum), explained variance: 1 - SVX[A]/SVX[0]

Variable Diagnostics: Well-Explained Variables

R²VX and R²VX,adj can be used to assess which variables are well explained and which are not. R² increases with the number of principal components. (Figure: cumulative R²VX per variable after three components for FOODS.)

Model Diagnostics: Validity vs Model Complexity

There is a trade-off between fit and predictive ability. Question: how can we determine the appropriate number of principal components to include in a model? Method: cross-validation (CV), which simulates the predictive power of a PC model. R² estimates goodness of fit; Q² estimates goodness of prediction. (Figure: R² grows steadily with the number of components A, while Q² levels off and then declines.)

Cross-Validation, PCA

- The data are divided into G groups (the SIMCA default is 7) and a model is estimated for the data devoid of one group.
- The deleted group is predicted by the model, giving a partial PRESS (Predictive Residual Sum of Squares, or prediction error SS).
- This is repeated G times; all partial PRESS values are then summed to PRESS.
- If a new component PC_a enhances the predictive power compared with PC_(a-1) (i.e., PRESS < SS), the new PC_a is kept in the model.
- Cross-validation is done in two phases and several deletion rounds: first removal of observations (rows), then removal of variables (columns).

Model Diagnostics: Evaluation of R² and Q²

PRESS is the sum of squared differences between predicted and observed x-elements:

PRESS = Σ_i Σ_k (x_ik - x̂_ik)²

PRESS can be transferred into a dimensionless quantity, Q², which resembles R²:

Q² = 1 - PRESS/SSX_total
R² = 1 - SSX_resid/SSX_total

Q² > 0.5: good; Q² > 0.9: excellent (both depending on the application).

Important:
1. R² is always larger than Q².
2. High R² and high Q² is good.
3. The difference between R² and Q² should not be too large.

(A code sketch of this computation follows after the example.)

Model Diagnostics: Example LOWARP

(Figures: model overview, cumulative R²X and Q² per component, and variable overview, cumulative R²VX and Q²VX per response after three components.)
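A deliberately simplified sketch of cross-validated Q² for PCA (an illustration only: whole rows are held out, projected, and reconstructed, whereas SIMCA's two-phase procedure also deletes variables and matrix elements). The synthetic data have rank-2 structure plus noise:

```python
import numpy as np

def press_rowwise(Z, A, G=7):
    """Row-wise cross-validated PRESS for a PCA model with A components:
    each of G groups of rows is left out, the model is refitted on the rest,
    and the left-out rows are projected onto it and reconstructed."""
    groups = np.arange(Z.shape[0]) % G
    press = 0.0
    for g in range(G):
        train, test = Z[groups != g], Z[groups == g]
        mu = train.mean(axis=0)
        _, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
        P = Vt[:A].T
        resid = (test - mu) - ((test - mu) @ P) @ P.T   # x - x_hat for held-out rows
        press += (resid**2).sum()
    return press

rng = np.random.default_rng(5)
Z = rng.normal(size=(35, 2)) @ rng.normal(size=(2, 8))  # rank-2 structure...
Z += 0.2 * rng.normal(size=(35, 8))                     # ...plus noise

ssx = ((Z - Z.mean(axis=0))**2).sum()
for A in (1, 2, 3):
    print(f"A = {A}: Q2 = {1 - press_rowwise(Z, A) / ssx:.3f}")  # Q2 = 1 - PRESS/SSX
```

Q² rises up to the true rank (A = 2) and stops improving beyond it, which is exactly the behaviour used to pick the number of components.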

PCA Summary

Principal Component Analysis modelling: the data table X is approximated by a least-squares (hyper)plane plus residuals,

X = 1·x̄' + T·P' + E

Calculations: one PC at a time (NIPALS), or all PCs together (SVD, which often cannot handle missing data).

Conclusions

Principal Component Analysis models the correlation structure of a data set and is used for:
- Overview of a data set (data table): dominating variables, trends, outliers, groups, clusters, similarities/dissimilarities; observations are seen in the scores, variables in the loadings
- Summary of a data set: scores as latent variables / principal properties; loadings as the influence of the variables
- Classification: a new observation is similar to the training observations if it is found within the tolerance volume of the model
- In processes, PCA is used for overview of data and for monitoring

Multivariate Data Analysis and Modelling Basic Course
Chapter 3: Applications of PCA

Contents: Overview (quality control; assessment of drug exposure); Classification (3 different types of IRIS; 4 classes of wood chips from the particleboard industry)

Example: Quality Control (CUPRUM)

Electrolytic production of copper. Boliden AB produces approximately 300 tonnes of extremely pure (99.998%) copper every day. Impurity testing is carried out twice a day to ensure quality (TAI, Total Analysis Index). TAI is a weighted sum of 8 different impurities (at the ppm level).

Example CUPRUM: The Data

9 variables, 730 observations: 8 measured variables (Ag-Se) and 1 calculated variable (TAI). Data were sampled twice a day over one year, giving 730 observations; all variables were log-transformed. The copper industry uses only the TAI value to determine the quality, and thereby the price; copper products with TAI over 8.0 are discarded. Question: can we do better with projection methods?

CUPRUM: Time Series Plot of TAI

(Figure: TAI over the sampled observations, with the quality control limit corresponding to TAI = 8.) Two samples, 111 (TAI = 8.1) and a second one (TAI = 7.8), have approximately the same TAI value.

CUPRUM: Scores and Loadings of the PC Model

A PC projection of the table was made: the 8-dimensional table (the TAI variable excluded) was projected onto a two-dimensional plane, showing 67% of the variability in the data. The two samples with near-identical TAI values are situated far apart in the score plot! The corresponding loading plot revealed two types of impurities. (Figures: t1/t2 score plot and p1/p2 loading plot of cuprum.M1.)

CUPRUM: Contribution Plots

Contribution plots zoom in on a single sample: they show how much each variable contributes to that sample's deviation from the average (see the sketch below). Here, the variable profiles of the two samples with near-identical TAI values are shown; each is dominated by a different set of impurity variables. (Figures: score contribution plots, weight p1p2, for the two samples.)

Statistical Process Control (SPC) Charts

SPC uses quality and/or process data to monitor the process. Walter Shewhart (the father of SPC) introduced the concept of defining the "normal" region of variation for one variable. Today: multivariate SPC, MSPC, illustrated by Hotelling's T² and DModX. (Figures: Hotelling's T² chart with 95% and 99% critical limits, and a DModX chart with D-Crit(0.05), each flagging a handful of samples.)
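A minimal sketch of the idea behind score contributions (added for illustration; SIMCA's weighted p1p2 variant combines two components, while this shows the exact decomposition for a single component): since t = z · p, the score difference between an observation and a reference splits exactly into per-variable terms.

```python
import numpy as np

def score_contributions(z_obs, z_ref, p):
    """Per-variable contributions to a score difference along one component:
    t = z @ p, so t_obs - t_ref = sum((z_obs - z_ref) * p), term by term."""
    return (z_obs - z_ref) * p

# Example with made-up values: which (scaled) variables pull a flagged
# sample away from the class average along the component of interest?
z_obs = np.array([1.2, -0.5, 2.0])    # scaled measurements of the flagged sample
z_ref = np.array([0.1, 0.0, -0.2])    # scaled class average
p1 = np.array([0.58, 0.11, -0.80])    # loading vector of the component
print(score_contributions(z_obs, z_ref, p1))
```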

CUPRUM - Summary

The PC analysis gave an overview of the copper data:
- The score plot showed that the two samples are not similar, even though their TAI values indicate that they are.
- Using the loading plot, the two samples could be ascribed to two different types of impurities.

Summary: the overview of the data provided by the PC analysis is much more powerful in describing the impurities than the TAI variable. The TAI scale does not distinguish between different types of impurity combinations, and therefore leads to loss of information.

Example: Assessment of Drug Exposure (Metabonomics)

Metabonomics: monitoring of complex, time-related metabolite profiles that are present in biofluids, e.g., urine, plasma, saliva, etc. (Figure: proton-NMR spectra of the urinary profiles of drug-exposed rats.)

Metabonomics: The Data

Rats were exposed to chloroquine (an antimalarial) or amiodarone (an antiarrhythmic).
Observations: N = 57 rats. Variables: K = 194 (¹H-NMR shift regions).
Six groups ("classes"):
- Control Sprague-Dawley, 10 rats (s)
- Sprague-Dawley treated with amiodarone, 8 rats (sa)
- Sprague-Dawley treated with chloroquine, 10 rats (sc)
- Control Fisher, 10 rats (f)
- Fisher treated with amiodarone, 10 rats (fa)
- Fisher treated with chloroquine, 9 rats (fc)

Metabonomics: PCA to Overview

The first two components give R²X = 0.48, Q²X = 0.38. (Figure: t1/t2 score plot of the Pareto-scaled model, with the strain and drug effects running in roughly perpendicular directions; one outlying sc rat is encircled.) Possible causes for the outlier: measurement error? handling or environmental differences? a slow responder?

Metabonomics: Contribution Plot to Reveal Differences

How is the outlying rat different from a normal sc rat? The contribution plot comparing it with a normal sc rat points to a handful of specific chemical shift regions. (Figure: score contribution plot, weight p1p2, between the two observations.)

Metabonomics: Conclusions

Multivariate analysis of NMR data creates one or several maps (i.e., score plots and loading plots) that show the trajectories of biochemical changes in biofluids induced by toxin exposure or disease. Through this technology it is possible (i) to detect target organs or pathways of dysfunction, (ii) to uncover likely chemical mechanisms of toxicity, and (iii) to identify useful biomarkers indicative of the onset, development, and decay of abnormal animal health conditions.

Leading reference: Nicholson, J.K., Connelly, J., Lindon, J.C., and Holmes, E., Metabonomics: A Platform for Studying Drug Toxicity and Gene Function, Nature Reviews Drug Discovery, 2002; 1:153-161.

Classification (IRIS)

IRIS, a classical data set in statistics. Data: the table contains petal (Swedish: kronblad) and sepal (Swedish: foderblad) lengths and widths of 50 specimens each of Iris setosa, Iris versicolor and Iris virginica. The data set was introduced by the great statistician Fisher as early as 1936 and is commonly known as "the Fisher Iris data".

Training data: N = 75, K = 4; 25 Iris setosa (1-25), 25 Iris versicolor (26-50), 25 Iris virginica (51-75).
Prediction data: N = 75, K = 4; 25 Iris setosa (76-100), 25 Iris versicolor (101-125), 25 Iris virginica (126-150).

Objective: a multivariate model that classifies a new Iris specimen into the correct group according to its petal and sepal lengths and widths.

IRIS: Step 1, Overview of Training Data (PCA)

The PCA score plot shows Setosa well separated from Versicolor and Virginica; the latter two classes are only partly separated. R² = 0.96, Q² = 0.75 (A = 2). (Figure: t1/t2 score plot of the entire training set.)

IRIS: Step 1, Overview of Data

The loading plot shows that Setosa specimens are smaller (shorter and slimmer) than Virginica and Versicolor samples. The variable Se_Wi (sepal width) is mainly responsible for the within-class separation of samples. (Figure: p1/p2 loading plot.)

IRIS: Step 1, A Look at the Raw Data

                 Sepal Length   Sepal Width   Petal Length   Petal Width
Setosa     Min       4.3            2.3           1.0            0.1
           Max       5.8            4.4           1.9            0.6
Versicolor Min       4.9            2.0           3.0            1.0
           Max       7.0            3.4           5.1            1.8
Virginica  Min       4.9            2.2           4.5            1.4
           Max       7.9            3.8           6.9            2.5

Conclusion: Setosa is easy to separate from Virginica and Versicolor.

IRIS: Step, PC modelling of Ve/Vi Question How do we separate Virginica and Versicolor? First attempt A PC model with only Virginica and Versicolor? Conclusion Some but not complete separation t[] 1-1 - IRIS training.m (PCA-X), Setosa excluded t[comp. 1]/t[Comp. ] Ve Ve Ve Ve Ve Ve Ve Ve Ve Vi Ve Ve Ve Ve Ve Ve Vi Ve Vi Vi Ve Vi Ve Ve Vi Vi Vi Vi Vi Ve Ve Vi Vi Vi Vi Ve Ve Vi Vi Vi Ve Vi Vi Vi Vi Vi -4-3 - -1 1 3 4 t[1] 16/4/3 19 IRIS: Step 3, SIMCA A separate PC model is made for each class (Se/Ve/Vi) Then all prediction data (75 obs) are subjected to each model Assignment of class membership is based on a comparison of DModX and Dcrit D crit 16/4/3

IRIS: Prediction (Classification) by the Setosa Model

24 of 25 Setosa specimens are correctly classified; the Ve/Vi observations are fundamentally different. (Figure: DModX of the prediction set, observations 76-150, against the Setosa class model, with D-Crit(0.05).)

IRIS: Prediction (Classification) by the Versicolor Model

The Ve prediction-set observations are correctly classified; however, some Vi samples are false positives. (Figure: DModX of the prediction set against the Versicolor class model.)

IRIS: Prediction (Classification) by the Virginica Model

The Vi prediction-set observations are correctly classified; however, most Ve samples also fall inside the Dcrit of the Vi model. (Figure: DModX of the prediction set against the Virginica class model.)

IRIS: Coomans Plot Derived from the Vi/Ve Models

Named after the Belgian chemometrician Danny Coomans (~1980). The DModX values from two models are plotted against each other in a scatter plot, creating four zones of diagnostic interest: only Versicolor, only Virginica, both Vi and Ve, and neither Vi nor Ve (the Setosa specimens fall in this last zone). (Figure: Coomans plot with the two critical distances.)

IRIS - Summary

The conclusions that may be drawn are thus the following:
(i) Setosa specimens are quite different from Versicolor and Virginica observations.
(ii) There is an overlap between the Versicolor and Virginica classes, and they cannot be completely separated. However, using the SIMCA methodology it is possible to predict whether an unknown sample is (a) definitely Versicolor, (b) definitely Virginica, (c) definitely neither, or (d) either Virginica or Versicolor.

Industrial Example: NIR_Chip

NIR measurements from the particleboard industry. NIR spectra in the range 400-2498 nm (1050 spectral variables) were measured on four types of wood chips differing in particle size and moisture content; there are 140 (4 x 35) training-set observations and 78 prediction-set samples. Objective: to study whether the wood chips can be distinguished from each other using NIR and SIMCA.

NIR_Chip: Step 1, PCA for Overview

The first two components explain 94% of the spectral variation; some separation among the four classes is seen, with moisture and particle size as the main directions. (Figures: t1/t2 score plot and DModX plot of the training set.)

NIR_Chip: Step 2, SIMCA

Four class-specific PCA models were computed, each with complexity A = 3 and R²X in the range 0.86 to 0.94.

NIR_Chip: Classification of Prediction-Set Samples, Class 1 Model

Observations 141-161 are classified as close to class 1. (Figure: DModX of the prediction set, observations 141-218, against the class 1 model, with D-Crit(0.05).)

NIR_Chip: Classification of Prediction-Set Samples, Class 2 Model

Observations 162-166 are classified as close to class 2. (Figure: DModX of the prediction set against the class 2 model.)

NIR_Chip: Classification of Prediction-Set Samples, Class 3 Model

Observations 167-175, 177, and 192-201 are classified as close to class 3. Samples 176, 178-191, and 202-218 are also classified as rather similar to class 3. (Figure: DModX of the prediction set against the class 3 model.)

NIR_Chip: Classification of Prediction-Set Samples, Class 4 Model

Observations 203-209 are classified as close to class 4. Samples 202 and 210-218 are classified as rather close to class 4. (Figure: DModX of the prediction set against the class 4 model.)

Conclusions of the NIR_Chip Investigation

NIR characterisation coupled with multivariate data analysis is useful for on-line discrimination of four types of starting material in the particleboard industry. The correct class memberships are as follows:
- Observations 141-161: class 1
- Observations 162-166: class 2
- Observations 167-201: class 3
- Observations 202-218: class 4

Classification results ranged from good to excellent for the prediction data; the worst classification was for the class 3 samples. This study hints at how multivariate characterisation for classification of raw materials can be carried out when the same starting material is delivered in different batches, or supplied by different manufacturers. This approach is common practice in the pharmaceutical and particleboard industries.

Multivariate Data Analysis and Modelling Basic Course
Chapter 4: Partial Least Squares Projections to Latent Structures (PLS), Relating X to Y

Contents: Partial Least Squares Projection to Latent Structures (PLS); Notation; Scaling of Variables; Geometric Interpretation; Algebraic Solution; Example; Diagnostics (Outliers, Residuals, Cross-validation, Predictions); Conclusions

Quantitative Modelling, PLS

Find relationships between sets of multivariate data, X and Y, and predict one set from the other for new observations x_i.

Applications:
- Process modelling and optimisation: chemical composition and physical measurements => quality, biological activity
- Structure-activity modelling: chemical structure => reactivity, properties, biological activity
- Multivariate calibration: signals (spectra) => concentrations, energy content, age, taste, ...

Example: Whisky

Correlation of Sensory & Analytical Data in Flavour Studies into Scotch Malt Whisky; Swan & Howie, 1984. (Figures: PCA of the sensory data and PCA of the GC data of 14 Scotch malt whiskies.)

Example: Whisky, PLS Predictions for the Test Set

(Figures: predicted vs actual values of the sensory attributes Feinty and Estery for the test-set whiskies.)

Notation

- K = number of X-variables
- M = number of Y-variables
- N = number of observations
- A = number of PLS components
- T = matrix of X-scores, with columns t1, ..., tA (vectors)
- P = matrix of X-loadings, with columns p1, ..., pA (vectors)
- W = matrix of X-weights, with columns w1, ..., wA (vectors)
- U = matrix of Y-scores, with columns u1, ..., uA (vectors)
- C = matrix of Y-weights, with columns c1, ..., cA (vectors)

Scaling of Variables

Scaling defines the length of the variable axes (in both the X- and Y-spaces). Usually, unit variance scaling is initially used to set each axis length to one.

PLS -- Geometric Interpretation, 1

For each matrix, X (inputs, K variables) and Y (outputs, M variables), we construct a space with K and M dimensions, respectively. Each X- and Y-variable has one coordinate axis, with its length defined by the scaling, typically unit variance.

PLS -- Geometric Interpretation, 2

Each observation is represented by one point in the X-space and one in the Y-space. As in PCA, the initial step is to calculate and subtract the variable averages; this corresponds to moving the coordinate systems.

PLS -- Geometric Interpretation, 3

The mean-centering procedure implies that the origin of each coordinate system is repositioned at the centre of its point swarm.

PLS -- Geometric Interpretation, 4

The first PLS component is a line in the X-space and a line in the Y-space, calculated to (a) approximate the point swarms well in X and Y, and also (b) provide a good correlation between the projections (t1 and u1). The directions are w1 and c1, and the coordinates along these vectors are t1 and u1, respectively.

PLS -- Geometric Interpretation, 5

The projection coordinates t1 and u1 in the two spaces, X and Y, are connected and correlated through the inner relation

u_i1 = t_i1 + h_i   (h_i is a residual),

i.e., in a plot of u1 against t1 the points scatter around a line with slope 1.0. A code sketch of these quantities follows below.
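As an illustration outside the original course material, a minimal scikit-learn sketch that fits a PLS model to synthetic data and checks the inner relation between the t and u scores:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 10))                              # dummy X block
Y = X[:, :2] @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(50, 3))

pls = PLSRegression(n_components=2, scale=True).fit(X, Y)  # UV-scaled PLS
T, U = pls.x_scores_, pls.y_scores_        # t_a and u_a score vectors
W, C = pls.x_weights_, pls.y_weights_      # w_a and c_a weight vectors

for a in range(2):
    r = np.corrcoef(T[:, a], U[:, a])[0, 1]
    print(f"component {a + 1}: corr(t, u) = {r:.2f}")      # the inner relation u ~ t
```

Plotting U[:, 0] against T[:, 0] reproduces the u1/t1 plot described on the slide above.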

PLS -- Geometric Interpretation, 6

The second PLS component is represented by lines in the X- and Y-spaces, with directions w2 and c2 and projection coordinates t2 and u2. The X-lines are orthogonal; the Y-lines may or may not be orthogonal. These lines improve the approximation and the correlation as much as possible.

PLS -- Geometric Interpretation, 7

The second pair of projection coordinates (t2 and u2) also correlate, but usually less well than the first pair of latent variables. When the correlation is better between t2 and u2 than between t1 and u1, this indicates a strong structure in X that is not present in (related to) Y.

PLS -- Geometric Interpretation, 8

The PLS components form planes in the X- and Y-spaces. The variability around the X-plane is used to calculate a tolerance interval within which new observations similar to the training set will be located. This is of interest in classification and prediction.

PLS -- Geometric Interpretation, 9

Repeated plotting of successive pairs of latent variables (t1/u1, t2/u2, t3/u3, ..., tA/uA) gives a good appreciation of the correlation structure.

PLS, Overview

X = 1·x̄' + T·P' + E
Y = 1·ȳ' + U·C' + F = 1·ȳ' + T·C' + G   (because U = T + H, the inner relation)

Difference from PCA: PLS computes a projection of X that both approximates X well and correlates well with Y, whereas PCA computes the projection of X that is the optimal approximation of X itself (a least-squares fit).

PLS, Parameter Properties

For each component:
1) t is a linear combination of the X-variables with weights w; t is a summary of the X-variables that are correlated with Y.
2) u is a linear combination of the Y-variables with weights c; u is a summary of the Y-variables.
3) w contains the covariances between the x's and u; columns of X highly correlated with Y get high weights.
4) At convergence, for orthogonality: p is computed so that t·p' is the best approximation of X, and t·p' is removed from X before the next component is computed.

PLS, Overview X Y = 1*x' + T*P' + E = 1*y' + U*C' + F = 1* y' + T* C' + G (because U = T + H) (inner relation) PLS differences to PCA Projection of X that both approximates X well, and correlates well with Y Projection of X that is an optimal approximation of X (least squares fit) 1/1/3 17 PLS, Parameter properties For each component: 1) t are linear combinations of X with weights w - t is a summary of the X-variables that are correlated with Y ) u are linear combinations of Y with weight c - u is a summary of the Y-variables 3) w are the covariances between the x's and u - Columns of X highly correlated with Y have high weights 4) At Convergence for the Orthogonality: - p is computed so that t*p' is the best approximation of X" - t*p' is removed from X for the next component 1/1/3 18