Linear Discriminant Analysis




Practical session with the R software: course5
Linear Discriminant Analysis
A.B. Dufour

Contents
1 Fisher's iris dataset
2 The principle
  2.1 Linking one variable and a factor
  2.2 Linking a set of p variables and a factor
3 Illustration with the Iris dataset
  3.1 Using discrimin of the ade4 package
  3.2 Using lda of the MASS package
  3.3 Comparison between the two functions
4 More information: tests and allocation
  4.1 Testing the eigenvalues
  4.2 Individual allocations
5 Your turn!
6 Conclusion
References

1 Fisher's iris dataset

The data were collected by Anderson [1] and used by Fisher [2] to formulate linear discriminant analysis (LDA or DA). The dataset gives the measurements in centimeters of the following variables: 1- sepal length, 2- sepal width, 3- petal length, and 4- petal width, for 50 flowers from each of the 3 species of iris considered: Iris setosa, Iris versicolor, and Iris virginica.

data(iris)
names(iris)

[1] "Sepal.Length" "Sepal.Width"  "Petal.Length" "Petal.Width"  "Species"

dim(iris)

[1] 150   5

One can build the histograms by species and by variable:

par(mfcol = c(3, 4))
for (k in 1:4) {
    j0 <- names(iris)[k]
    # common breaks for all species, and a grid of x values for the density curve
    br0 <- seq(min(iris[, k]), max(iris[, k]), le = 11)
    x0 <- seq(min(iris[, k]), max(iris[, k]), le = 50)
    for (i in 1:3) {
        i0 <- levels(iris$Species)[i]
        x <- iris[iris$Species == i0, j0]
        hist(x, br = br0, proba = TRUE, col = grey(0.8), main = i0, xlab = j0)
        lines(x0, dnorm(x0, mean(x), sd(x)), col = "red", lwd = 2)
    }
}

[Figure: histograms of the four variables by species, each with a fitted normal density curve overlaid in red.]

One can display the bivariate scatterplots.

library(ade4)
par(mar = c(0, 0, 0, 0))
pan1 <- function(x, y, ...) {
    xy <- cbind.data.frame(x, y)
    s.class(xy, iris$Species, include.ori = FALSE, add.p = TRUE, clab = 1.5,
        col = c("blue", "black", "red"), cpoi = 2, csta = 0.5)
}
pairs(iris[, 1:4], panel = pan1)

[Figure: pairwise scatterplots of the four variables, with the three species shown as labelled ellipses.]

One can display the 3-dimensional scatterplots.

library(scatterplot3d)
par(mfrow = c(2, 2))
mar0 <- c(2, 3, 2, 3)
scatterplot3d(iris[, 1], iris[, 2], iris[, 3], mar = mar0,
    color = c("blue", "black", "red")[iris$Species], pch = 19)
scatterplot3d(iris[, 2], iris[, 3], iris[, 4], mar = mar0,
    color = c("blue", "black", "red")[iris$Species], pch = 19)
scatterplot3d(iris[, 3], iris[, 4], iris[, 1], mar = mar0,
    color = c("blue", "black", "red")[iris$Species], pch = 19)
scatterplot3d(iris[, 4], iris[, 1], iris[, 2], mar = mar0,
    color = c("blue", "black", "red")[iris$Species], pch = 19)

[Figure: the four 3-dimensional scatterplots, one per combination of three variables, with species shown by colour.]

The ability to discriminate species with this graphical approach varies with the identity and the number of the variables considered. Moreover, we cannot plot more than 3 variables at the same time. Looking for a variable, or a combination of variables, that separates the groups is called discriminating. Two main objectives can be distinguished:

1. a descriptive discrimination, to answer the question: which variables separate the groups?

2. a predictive discrimination, to solve the following problem: suppose I sample a new individual (e.g., a new flower); which group does it belong to?

These two aims lead to two different functions for computing the discriminant analysis:

1. discrimin from the ade4 package;

2. lda from the MASS package.

2 The principle

2.1 Linking one variable and a factor

To study a group effect (the factor), one can compute a one-way analysis of variance. We previously saw (cf. Within and Between PCA) that such a procedure allows:

- to test the null hypothesis that the g population means are equal;

- to decompose the total sum of squared distances to the general mean into between and within sums of squares: c = b + w.

Let's divide the previous formula by n (or n - 1). One can say that the total variance c/n can be divided into:

- the variance linked to the factor, called the between variance b/n;

- the variance not linked to the factor, called the residual or within variance w/n.

1. From a descriptive point of view, one can compute the correlation ratio b/c. The correlation ratio varies between 0 and 1. If it is near 1, the continuous variable and the groups are linked, i.e. the means differ. If it is near 0, there is no link between the continuous variable considered and the groups, i.e. all means are equal.

2. From a predictive point of view, a test (analysis of variance) can be computed based on the ratio b/w.

2.2 Linking a set of p variables and a factor

Let's call X the data frame containing a set of p variables measured on n individuals. We saw that principal component analysis seeks linear combinations of variables maximizing the total inertia, i.e. the total variance. The solution is obtained by the diagonalization of the covariance matrix X₀ᵀDX₀Q, where X₀ is the centred X matrix, and D and Q are the weightings of rows and columns respectively.

Let's call C the total covariance matrix. C can be expressed as a function of:

- the covariance matrix of the table containing the means per group, also called the between covariance matrix B;

- the covariance matrix of the table containing the distances between the individuals and their group means, also called the within covariance matrix W.

The equation of the one-way analysis of variance then extends to the total covariance matrix: C = B + W.

A discriminant analysis looks for combinations of variables y maximizing the between variance relative to the total variance (or to the within variance), under one constraint which depends on the approach.

1. If the approach is descriptive, one diagonalizes BC⁻¹ and the constraint is that the total variance of y equals 1.

2. If the approach is predictive, one diagonalizes BW⁻¹ and the constraint is that the within variance of y equals 1.

The two processes give the same discriminant functions (up to a multiplicative constant) and the eigenvalues are linked by a simple relation. Let λ_k be an eigenvalue of BC⁻¹ and μ_k the corresponding eigenvalue of BW⁻¹:

μ_k = λ_k / (1 - λ_k)        λ_k = μ_k / (1 + μ_k)
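To make the decomposition concrete, here is a small sketch (not part of the original handout) checking c = b + w and the correlation ratio on one iris variable, then the matrix version C = B + W, using only base R:

# one-way decomposition for a single variable: c = b + w
x <- iris$Sepal.Length
g <- iris$Species
m <- tapply(x, g, mean)                  # group means
n.g <- tapply(x, g, length)              # group sizes
b <- sum(n.g * (m - mean(x))^2)          # between sum of squares
w <- sum((x - m[g])^2)                   # within sum of squares
c0 <- sum((x - mean(x))^2)               # total sum of squares
all.equal(c0, b + w)                     # TRUE
b / c0                                   # correlation ratio, equal to the R^2 of lm(x ~ g)

# multivariate extension: C = B + W on the covariance matrices
X <- as.matrix(iris[, 1:4])
Xm <- apply(X, 2, function(v) tapply(v, g, mean))[g, ]   # each row replaced by its group mean
Ctot <- cov.wt(X, method = "ML")$cov     # total covariance matrix C
B <- cov.wt(Xm, method = "ML")$cov       # between covariance matrix
W <- cov.wt(X - Xm, method = "ML")$cov   # within covariance matrix
all.equal(Ctot, B + W)                   # TRUE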

3 Illustration with the Iris dataset

In the Iris dataset, iris[, 1:4] contains the measures on petals and sepals and iris$Species is the categorical variable with the three species: Iris setosa, Iris versicolor, and Iris virginica.

3.1 Using discrimin of the ade4 package

This function implements the descriptive approach of the linear discriminant analysis (DA). To compute a DA in the ade4 package, one uses the discrimin function. Before computing a DA, a classical principal component analysis (dudi.pca) is performed on the continuous variables to get the table of normed variables and the weightings of rows and columns. Then the chosen categorical variable is passed to the discrimin function.

library(ade4)
pca1 <- dudi.pca(iris[, 1:4], scannf = FALSE)
dis1 <- discrimin(pca1, iris$Species, scannf = FALSE)
names(dis1)

[1] "eig"  "nf"   "fa"   "li"   "va"   "cp"   "gc"   "call"

dis1

Discriminant analysis
call: discrimin(dudi = pca1, fac = iris$Species, scannf = FALSE)
class: discrimin

$nf (axis saved) : 2
eigen values: 0.9699 0.222

  data.frame nrow ncol content
1 $fa        4    2    loadings / canonical weights
2 $li        150  2    canonical scores
3 $va        4    2    cos(variables, canonical scores)
4 $cp        4    2    cos(components, canonical scores)
5 $gc        3    2    class scores

plot(dis1)

[Figure: the six panels produced by plot(dis1): canonical weights, cos(variates, canonical variates), eigenvalues, scores and classes, cos(components, canonical variates), class scores.]

Six plots are displayed.

1. Canonical weights. The first scatterplot (top left) represents the coefficients of the linear discriminant functions on the first two axes of the DA. Their total variances equal 1 and their between variances are maximal. The 4 variables used are the normed columns of the PCA.

2. Cos(variates, canonical variates). The second scatterplot (just below the first one) represents the covariances between the 4 variables and the first two axes of the DA.

3. Eigenvalues. The screeplot of the eigenvalues, describing the contribution of each axis to the inertia.

4. Scores and classes. This plot shows the projections of the individuals onto the plane defined by the axes of the DA. Groups are displayed as ellipses whose centres are the group means and whose shapes reflect the within-group variances.

5. Cos(components, canonical variates). The projection of the four axes kept by the normed PCA onto the two axes of the DA.

6. Class scores. This plot shows the positions of the group means on the first two axes of the DA.

3.2 Using lda of the MASS package

We are here interested in the predictive approach to the linear discriminant analysis (DA). To compute a DA in the MASS package, one uses the lda function. There is only one step here (as opposed to the two-step procedure seen previously, dudi.pca/discrimin), and the continuous variables (as a matrix) and the categorical variable must be provided together.

library(MASS)
dis2 <- lda(as.matrix(iris[, 1:4]), iris$Species)
names(dis2)

[1] "prior"   "counts"  "means"   "scaling" "lev"     "svd"     "N"       "call"

dis2

Call:
lda(as.matrix(iris[, 1:4]), grouping = iris$Species)

Prior probabilities of groups:
    setosa versicolor  virginica
 0.3333333  0.3333333  0.3333333

Group means:
           Sepal.Length Sepal.Width Petal.Length Petal.Width
setosa            5.006       3.428        1.462       0.246
versicolor        5.936       2.770        4.260       1.326
virginica         6.588       2.974        5.552       2.026

Coefficients of linear discriminants:
                    LD1         LD2
Sepal.Length  0.8293776  0.02410215
Sepal.Width   1.5344731  2.16452123
Petal.Length -2.2012117 -0.93192121
Petal.Width  -2.8104603  2.83918785

Proportion of trace:
   LD1    LD2
0.9912 0.0088

plot(dis2)

[Figure: individual scores on the two discriminant axes LD1 and LD2.]

Only one scatterplot is displayed: the positions of the individuals on the two discriminant axes.
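The coordinates shown in this plot can also be obtained explicitly. A small sketch, not in the original handout, using predict, whose $x component holds the discriminant scores of the individuals:

# discriminant scores (the coordinates plotted by plot(dis2))
sc <- predict(dis2)$x
head(sc)
plot(sc, col = c("blue", "black", "red")[iris$Species], pch = 19)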

3.3 Comparison between the two functions

We previously said that the linear discriminant functions of the two approaches agree up to a multiplicative constant. discrimin provides a linear combination of the normed variables with the fa coefficients. The matrix product is stored in w1; looking at the first 10 and the last 11 individuals, we get:

w1 <- as.vector(scalewt(iris[, 1:4]) %*% dis1$fa[, 1])
round(w1[1:10], dig = 4)

 [1] 1.4135 1.2499 1.3132 1.1946 1.4259 1.3504 1.2646 1.3335 1.1503 1.2875

round(w1[140:150], dig = 4)

 [1] -0.9124 -1.1665 -0.8952 -0.9657 -1.1916 -1.2006 -0.9898 -0.9082 -0.8710 -1.0321
[11] -0.8211

lda provides a linear combination of the initial variables with the scaling coefficients. The matrix product is stored in w2; looking at the same individuals, we get:

w2 <- as.vector(as.matrix(iris[, 1:4]) %*% dis2$scaling[, 1])
round(w2[1:10], dig = 4)

 [1] 5.9567 5.0236 5.3847 4.7081 6.0272 5.5968 5.1075 5.5002 4.4554 5.2380

round(w2[140:150], dig = 4)

 [1] -7.3089 -8.7582 -7.2107 -7.6126 -8.9011 -8.9525 -7.7501 -7.2847 -7.0728 -7.9913
[11] -6.7883

The comparison shows the proportional link between the two:

plot(w1, w2, pch = 20)
abline(lm(w2 ~ w1))

[Figure: w2 plotted against w1, with the fitted regression line showing the proportional relation.]

discrimin gives a linear combination with total variance equal to 1,

var(w1) * 149/150

[1] 1

maximizing the between variance (equal to the first eigenvalue):

summary(lm(w1 ~ iris[, 5]))$r.squared

[1] 0.9698722

dis1$eig

[1] 0.9698722 0.2220266

lda gives a linear combination with within variance equal to 1,

tapply(w2, iris[, 5], var)

    setosa versicolor  virginica
 0.7181898  1.0736485  1.2081617

mean(tapply(w2, iris[, 5], var))

[1] 1

maximizing the same between variance:

summary(lm(w2 ~ iris[, 5]))$r.squared

[1] 0.9698722

The eigenvalues of the two analyses are linked by the relation μ_k = λ_k / (1 - λ_k):

eigval1 <- dis1$eig
eigval1

[1] 0.9698722 0.2220266

eigval2 <- eigval1/(1 - eigval1)
eigval2

[1] 32.1919292  0.2853910

eigval2/sum(eigval2)

[1] 0.991212605 0.008787395

dis2$svd^2/sum(dis2$svd^2)

[1] 0.991212605 0.008787395

4 More information: tests and allocation

4.1 Testing the eigenvalues

Both procedures (discrimin and lda) allow one to test the existence of a true difference between groups. The null hypothesis is that each group is a random sample of a multinormal distribution. If the discriminant value (i.e., the eigenvalue) associated with the discriminant function is large enough, the null hypothesis is rejected. In other words, the generalisation of the one-way ANalysis Of VAriance (ANOVA) is the one-way Multivariate ANalysis Of VAriance (MANOVA). The comparison of the group means for one variable becomes the comparison of the group means for a vector of p variables:

ANOVA:   μ_1 = μ_2 = ... = μ_g

MANOVA:  (μ_1^1, μ_1^2, ..., μ_1^p)ᵀ = (μ_2^1, μ_2^2, ..., μ_2^p)ᵀ = ... = (μ_g^1, μ_g^2, ..., μ_g^p)ᵀ

where μ_k is the mean of the variable for population k and μ_k^j is the mean of variable j for population k.

measures <- as.matrix(iris[, 1:4])
resm <- manova(measures ~ iris$Species)

There are several ways to test the significance of the eigenvalues (all based on the MANOVA approach):

1. Pillai's test.

summary(resm, test = "Pillai")

              Df Pillai approx F num Df den Df    Pr(>F)
iris$Species   2  1.192   53.466      8    290 < 2.2e-16 ***
Residuals    147
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The Pillai criterion is the sum of the eigenvalues provided by the discrimin function.

sum(eigval1)

[1] 1.191899

2. Wilks's test.

summary(resm, test = "Wilks")

              Df  Wilks approx F num Df den Df    Pr(>F)
iris$Species   2  0.023  199.145      8    288 < 2.2e-16 ***
Residuals    147
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The Wilks criterion is the product of the within variances (1 - λ_k) provided by the discrimin function.

prod(1 - eigval1)

[1] 0.02343863

3. Hotelling-Lawley's test.

summary(resm, test = "Hotelling-Lawley")

              Df Hotelling-Lawley approx F num Df den Df    Pr(>F)
iris$Species   2            32.48   580.53      8    286 < 2.2e-16 ***
Residuals    147
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The Hotelling-Lawley criterion is the sum of the eigenvalues provided by the lda function.

sum(eigval2)

[1] 32.47732

4. Roy's test.

summary(resm, test = "Roy")

              Df   Roy approx F num Df den Df    Pr(>F)
iris$Species   2 32.19  1166.96      4    145 < 2.2e-16 ***
Residuals    147
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The Roy criterion is the largest eigenvalue provided by the lda function.

max(eigval2)

[1] 32.19193

If multinormality is not acceptable (and the challenge is to define what is acceptable or not), one can compute a non-parametric version of Pillai's test:

plot(randtest.discrimin(dis1))

[Figure: histogram of the simulated values of the test statistic, with the observed value far in the right tail.]
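To obtain the simulated p-value itself rather than just the plot, one can keep the test object. A minimal sketch, not in the original handout, assuming the default number of permutations of randtest.discrimin:

rt <- randtest.discrimin(dis1)   # Monte-Carlo test on the Pillai-type statistic
rt                               # prints the observed statistic and the simulated p-value
rt$pvalue                        # the simulated p-value alone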

4.2 Individual allocations

The lda function allows one to answer the following question: knowing the measurements of a new individual, can we predict the group it belongs to? To understand the process, one can take an example using the Iris data.

We extract by randomization 50 individual iris from the dataset, and create a table tabref containing these individuals and a factor espref containing the name of the species they belong to.

echa <- sample(1:150, 50)
tabref <- iris[echa, 1:4]
espref <- iris[echa, 5]

We create a table tabsup with the 100 other iris and a factor espsup containing the names of the species they belong to.

tabsup <- iris[-echa, 1:4]
espsup <- iris[-echa, 5]

We compute a linear discriminant analysis using the 50 reference individuals (tabref, espref).

lda0 <- lda(tabref, espref)
lda0

Call:
lda(tabref, espref)

Prior probabilities of groups:
    setosa versicolor  virginica
      0.40       0.26       0.34

Group means:
           Sepal.Length Sepal.Width Petal.Length Petal.Width
setosa         5.010000    3.380000     1.470000    0.255000
versicolor     5.969231    2.846154     4.323077    1.369231
virginica      6.670588    2.988235     5.700000    2.029412

Coefficients of linear discriminants:
                    LD1        LD2
Sepal.Length  0.1124365  0.0746533
Sepal.Width   2.3258368  2.6838519
Petal.Length -1.7955507 -0.3309531
Petal.Width  -3.5351688  1.4259364

Proportion of trace:
   LD1    LD2
0.9968 0.0032

We predict the allocations of the 100 other iris and build a contingency table crossing the known species with the predicted allocations.

espestim <- predict(lda0, tabsup)$class
table(espestim, espsup)

            espsup
espestim     setosa versicolor virginica
  setosa         30          0         0
  versicolor      0         36         2
  virginica       0          1        31

We can predict the allocations and analyse the predictions (very good with this dataset), but this procedure alone does not provide an estimate of the misclassification error.
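As a complement not present in the original handout, lda itself offers a leave-one-out estimate of the misclassification rate through its CV argument. A minimal sketch:

# leave-one-out cross-validation: each individual is classified by a model
# fitted on the 149 others (CV = TRUE returns the predicted classes directly)
ldacv <- lda(as.matrix(iris[, 1:4]), iris$Species, CV = TRUE)
table(ldacv$class, iris$Species)      # confusion matrix
mean(ldacv$class != iris$Species)     # estimated misclassification rate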

5 Your turn!

The data were proposed by Manly [3]. The skulls dataframe has 150 rows (Egyptian skulls) and 4 columns (anthropometric measures). The four variables are the maximum breadth (V1), the basibregmatic height (V2), the basialveolar length (V3) and the nasal height (V4). All measurements are expressed in millimeters. The measurements are made on 5 groups (30 skulls per group). The groups are defined as follows:

1 - the early predynastic period (circa 4000 BC)
2 - the late predynastic period (circa 3300 BC)
3 - the 12th and 13th dynasties (circa 1850 BC)
4 - the Ptolemaic period (circa 200 BC)
5 - the Roman period (circa AD 150)

The group vector is obtained using the gl function (look at the help page to better understand this useful function) and the levels function to associate a name to each level.

fac <- gl(5, 30)
levels(fac) <- c("-4000", "-3300", "-1850", "-200", "+150")
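As a possible starting point for the exercise, here is a sketch of the descriptive analysis, under the assumption that the skulls data frame is the one distributed with the ade4 package (data(skulls)); adapt the loading step if your copy of the data comes from elsewhere:

library(ade4)
data(skulls)                                # assumption: the ade4 copy of Manly's data
fac <- gl(5, 30)
levels(fac) <- c("-4000", "-3300", "-1850", "-200", "+150")
pca.sk <- dudi.pca(skulls, scannf = FALSE)  # normed PCA, as in section 3.1
dis.sk <- discrimin(pca.sk, fac, scannf = FALSE)
plot(dis.sk)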

6 Conclusion

The linear discriminant analysis is well known and well described, especially owing to the old statistical debate between exploratory and confirmatory methods. Its use depends on the situation, the data and the objectives. Always remember that statistical solutions are defined by ecological questions: meaningless results are often linked to badly formulated questions. R provides several methods to deal with the discriminant analysis, such as discrimin, associated with the exploratory point of view, and lda, associated with the confirmatory point of view.

method                              discrimin                 lda
linear combination of variables     fa                        LD
variance of the linear combination  total variance equals 1   within variance equals 1
maximized criterion                 BC⁻¹                      BW⁻¹

References

[1] E. Anderson. The irises of the Gaspé Peninsula. Bulletin of the American Iris Society, 59:2-5, 1935.

[2] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179-188, 1936.

[3] B.F.J. Manly. Randomization and Monte Carlo methods in biology. Chapman and Hall, London, 1991.