Statistics for Business Decision Making


Statistics for Business Decision Making - Faculty of Economics, University of Siena 1 / 62

You should be able to:
- Summarize and uncover any patterns in a set of multivariate data using the factor model (FM)
- Apply factor analysis to business decision-making situations
- Analyze and interpret the output of a factor analysis
2 / 62

Data reduction
- Factor analysis (FA) is a multivariate statistical technique of data reduction
- Starting point: a large dataset with many correlated variables X_1, X_2, ..., X_k. Interdependence among the variables is explored. Due to their correlation, the information content of a given variable may overlap with the information content of any other variable, thus producing a double counting of the same information in the original dataset
- Through FA a smaller set of new unobserved variables (the common factors) is identified that can be used to explain the interrelationships among the original variables.
3 / 62

How do the factors explain the association among the original variables? To say that the factors explain the associations among the original variables means that the original variables are assumed to be conditionally independent given the factors. In other words, any correlation between each pair of measured (manifest) variables arises because of their mutual association with the common factors. 4 / 62

Aim of factor analysis: the definition and interpretation of a smaller number (m < k) of new variables F_1, F_2, ..., F_m (called factors, often to be thought of as latent constructs) that capture the statistical information contained in the original variables.
- Advantage: reduction in the complexity of the data, greater simplicity in describing the observed phenomenon
- Disadvantage: loss of information plus the introduction of an error component
Trade-off: how much loss of the original information are we willing to accept in order to achieve a more parsimonious data summary? Usually, the stronger the correlations among the original variables, the smaller the number of factors needed to adequately summarize the information.
5 / 62

Exploratory FA vs. Confirmatory FA
- Exploratory: starts from observed data to identify unobservable underlying factors, unknown to the researcher but expected to exist from theory
- Confirmatory: the researcher wants to test one or more specific underlying structures, specified prior to the analysis. This is frequently the case in psychometric studies
6 / 62

Latent Variable Models. Factor analysis may be classified within the framework of Latent Variable Models (LVM). LVM are used to represent the complex relations among several manifest variables by simple relations between the variables and an underlying latent structure. Factor analysis is a Latent Variable Model in which both the manifest and the latent variables are measured on a metrical scale. 7 / 62

Factor analysis in marketing research. Many steps are involved:
1. Identify the main attributes used to evaluate a product/service (for a toothpaste these may be the benefits provided in preventing plaque and tartar, freshening the breath, keeping the gums healthy, keeping the mouth clean, etc.)
2. Collect data from a random sample of potential customers on their ratings of all the product attributes (for example on a Likert scale ranging from 1 to 5)
3. Run a factor analysis to find a set of underlying factors that summarize the respondents' attitudes towards that product/service
4. Use the new, smaller set of factors either to construct perceptual maps and other product-positioning analyses or to simplify subsequent analysis of the data (through regression models or clustering methods)
8 / 62

Example 1 - Attitude and consumer behaviour towards supermarkets
Original variables: items that measure consumers' attitudes towards supermarkets
- convenience in reaching the store
- product prices
- store location
- sales promotion
- width of aisles in the store
- store atmosphere and decoration
- store size
Aim:
1. to summarize the original dataset into a smaller number of dimensions (through FA)
2. to evaluate the effect of the summary dimensions on the choice of the preferred kind of supermarket (through logit regression). Since the factors are uncorrelated, multicollinearity is not a matter of concern
9 / 62

Example 2 - Buying behaviour towards local products
Original variables: a set of attitudinal statements relating to different aspects of consumers' buying behaviour towards local products
- production methods
- appearance of a special label
- use of no chemical additives
- help to the local economy
- price, quality and nutrition value
- environmental and health protection
- external appearance
- attractiveness of packaging
- freshness and taste
- prestige and curiosity
Aim:
1. to identify a smaller number of underlying factors that affect consumers' buying behaviour towards local products (through FA)
2. to use the new factors for grouping consumers with similar patterns into homogeneous clusters based on their buying behaviour (through cluster analysis)
10 / 62

The linear factor model. Each observed variable X_j is linearly related to m common factors F_1, F_2, ..., F_m and a unique component ε_j:
X_1 = γ_11 F_1 + γ_12 F_2 + ... + γ_1m F_m + ε_1
X_2 = γ_21 F_1 + γ_22 F_2 + ... + γ_2m F_m + ε_2
...
X_j = γ_j1 F_1 + γ_j2 F_2 + ... + γ_jm F_m + ε_j
...
X_k = γ_k1 F_1 + γ_k2 F_2 + ... + γ_km F_m + ε_k
where X_j (j = 1, 2, ..., k) is the original (standardized) variable, F_h (h = 1, 2, ..., m) denotes the unobserved common factor, γ_j1, γ_j2, ..., γ_jm are the factor loadings of X_j on the common factors, and ε_j is the residual or unique (as opposed to common) component. It measures the error committed when the original data are summarized by m factors.
11 / 62
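To make the structure of the model concrete, here is a minimal numpy sketch (not part of the original slides) that simulates data from a two-factor model with hypothetical loadings; the sample correlations approximately reproduce ΓΓ′ off the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 1000, 4, 2                      # observations, variables, factors

# Hypothetical factor loadings (gamma_jh), one row per observed variable X_j
Gamma = np.array([[0.8, 0.1],
                  [0.7, 0.2],
                  [0.1, 0.9],
                  [0.2, 0.6]])

# Common factors: standardized and mutually uncorrelated (assumption A2)
F = rng.standard_normal((n, m))

# Unique components with variances chosen so that Var(X_j) = 1 (communality + uniqueness = 1)
psi = 1.0 - (Gamma ** 2).sum(axis=1)
E = rng.standard_normal((n, k)) * np.sqrt(psi)

# Linear factor model: X = F Gamma' + E
X = F @ Gamma.T + E
print(np.round(np.corrcoef(X, rowvar=False), 2))   # off-diagonal approx. Gamma @ Gamma.T
```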

Comments on the variables in the model
- The standardization of the original variables is needed when they are not measured in the same units (and also when they are on very different scales). If they are not standardized, the variables with the larger variances would have a greater weight in the estimation of the factor model.
- The variables must be quantitative. For qualitative variables, different methods of data reduction must be applied (correspondence analysis, multidimensional scaling)
12 / 62

Model assumptions
- A1. Linearity of the relationship
- A2. E[F_h] = 0; Var[F_h] = 1; Cov(F_h, F_s) = 0 for h, s = 1, 2, ..., m; s ≠ h
- A3. E[ε_j] = 0; Cov(ε_j, ε_t) = 0 for j, t = 1, 2, ..., k; t ≠ j
- A4. Cov(ε_j, F_h) = 0 for j = 1, 2, ..., k; h = 1, 2, ..., m
13 / 62

Comments on the assumptions
A1. Linear models are widely used in statistical data analysis
A2. Since the factors are not observable, we might as well think of them as measured in standardized form. Being uncorrelated, each factor has its own information content that does not overlap with the information content of the other factors
A3. The unique term can be considered as the error term in a linear regression model, since it represents the part of an observed variable not accounted for by the common factors. Homoskedasticity is not required
A3 and A4 imply that the correlation between any two observed variables is due solely to the common factors
14 / 62

Consequences of the assumptions: variances. The variances of the observed variables are functions of:
- the factor loadings (the γ coefficients)
- the variances of the unique terms.
Var(X_j) = 1 = γ_j1² Var(F_1) + ... + γ_jm² Var(F_m) + Var(ε_j)
         = γ_j1² + ... + γ_jm² + Var(ε_j)
         = ∑_{h=1}^m γ_jh² + Var(ε_j)    (1)
where ∑_{h=1}^m γ_jh² is the communality and Var(ε_j) is the uniqueness.
15 / 62
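As an illustration (numbers invented for the example, not from the slides): if X_j loads on two factors with γ_j1 = 0.8 and γ_j2 = 0.3, its communality is 0.8² + 0.3² = 0.64 + 0.09 = 0.73, and its uniqueness is Var(ε_j) = 1 − 0.73 = 0.27.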

Communality and uniqueness. The communality of an observed variable is the proportion of its variance that is explained by the common factors. The larger the communality, the more successful the factor model is in explaining the variable. The uniqueness (or specific variance) is the part of the variance of X_j that is not accounted for by the common factors but is due to the unique component. 16 / 62

Consequences of the assumptions: covariances. The covariances between the observed variables are only functions of the factor loadings:
Cov(X_j, X_t) = γ_j1 γ_t1 + γ_j2 γ_t2 + ... + γ_jm γ_tm = ∑_{h=1}^m γ_jh γ_th    (2)
The covariances between observed variables and factors are expressed by the factor loadings:
Cov(X_j, F_h) = γ_jh    (3)
17 / 62

The factor model in matrix form:
X = FΓ′ + E    (4)
where
X: (n × k) matrix of the k original variables
F: (n × m) matrix of the m factors
Γ: (k × m) rectangular matrix of factor loadings, with generic element γ_jh (j = 1, ..., k; h = 1, ..., m)
E: (n × k) matrix of the k unique components
18 / 62

The X, F and E matrices:
X = [ x_11 x_12 ... x_1k
      x_21 x_22 ... x_2k
      ...
      x_n1 x_n2 ... x_nk ] = ( X_1 X_2 ... X_k )    (5)
E = ( ε_1 ε_2 ... ε_k )    (6)
F = [ F_11 F_12 ... F_1m
      F_21 F_22 ... F_2m
      ...
      F_n1 F_n2 ... F_nm ] = ( F_1 F_2 ... F_m )    (7)
19 / 62

The Γ matrix
Γ = [ γ_11 γ_12 ... γ_1m
      γ_21 γ_22 ... γ_2m
      ...
      γ_k1 γ_k2 ... γ_km ]    (8)
is the matrix of factor loadings. γ_jh (j = 1, ..., k; h = 1, ..., m) is the loading of X_j on F_h. It is a measure of the correlation between the j-th variable and the h-th factor. The Γ matrix tells us which variables are mainly related to the different factors by detecting the strength and the sign of these links. 20 / 62

Communalities from the table of squared factor loadings:

Variable | F_1   | ... | F_h   | ... | F_m   | Communality
X_1      | γ_11² | ... | γ_1h² | ... | γ_1m² | ∑_{h=1}^m γ_1h²
...      |       |     |       |     |       |
X_j      | γ_j1² | ... | γ_jh² | ... | γ_jm² | ∑_{h=1}^m γ_jh²
...      |       |     |       |     |       |
X_k      | γ_k1² | ... | γ_kh² | ... | γ_km² | ∑_{h=1}^m γ_kh²

The sum by row gives the communality. With reference to the j-th row, ∑_{h=1}^m γ_jh² is the communality of X_j, that is, the share of the variance of X_j explained by all the m factors. 21 / 62

Theoretical variance-covariance matrices. In the light of the model assumptions:
Σ = Var(X) = ΓΓ′ + Ψ    (9)
Σ: (k × k) var-cov matrix of the original variables; symmetric, with unit variances on the main diagonal and covariances off-diagonal
Var(X_j) = 1 = ∑_{h=1}^m γ_jh² + Var(ε_j)    (10)
Cov(X_j, X_t) = ∑_{h=1}^m γ_jh γ_th    (11)
Ψ: (k × k) var-cov matrix of the unique components; diagonal, with the unique variances on the main diagonal and zero covariances
22 / 62
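A quick numpy check of equation (9) with a hypothetical loading matrix (values invented for illustration): the implied matrix ΓΓ′ + Ψ has a unit diagonal and off-diagonal entries equal to the sums of loading products.

```python
import numpy as np

# Hypothetical loading matrix (k = 4 variables, m = 2 factors)
Gamma = np.array([[0.8, 0.1],
                  [0.7, 0.2],
                  [0.1, 0.9],
                  [0.2, 0.6]])

# Uniquenesses chosen so that each implied variance equals 1 (standardized variables)
Psi = np.diag(1.0 - (Gamma ** 2).sum(axis=1))

# Implied (theoretical) covariance matrix: Sigma = Gamma Gamma' + Psi
Sigma = Gamma @ Gamma.T + Psi
print(np.round(Sigma, 3))                  # unit diagonal, covariances off-diagonal
print(Sigma[0, 1], Gamma[0] @ Gamma[1])    # Cov(X_1, X_2) = gamma_11*gamma_21 + gamma_12*gamma_22
```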

Observed vs. theoretical variances. On the one hand we have the observed variances and covariances of the X variables: the observed var-cov matrix contains k(k−1)/2 distinct values (the elements above the diagonal). On the other hand, we have the variances and covariances implied by the factor model: the theoretical var-cov matrix contains km parameters (only the factor loadings, since the specific variances are functions of them). The model is useful for reducing the complexity if km < k(k−1)/2, that is, if m < (k−1)/2. 23 / 62
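For instance (illustrative numbers, not from the slides): with k = 10 variables the observed correlation matrix has 10 · 9 / 2 = 45 distinct off-diagonal entries, while an m = 3 factor model uses 10 · 3 = 30 loadings; since 3 < (10 − 1)/2 = 4.5, the factor representation is the more parsimonious description.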

The three stages of a factor analysis:
1. estimating the factor loadings γ_jh (initial solution) as well as the communalities
2. trying to simplify the initial solution through a process known as factor rotation. After the rotation the final factor solution is supposed to be more easily interpreted. Interpretation is useful to derive a meaningful label for each of the factors
3. estimating the factor scores so that these can be used in subsequent analyses in place of the original variables
24 / 62

Estimation - first stage. If the model's assumptions are true, we should be able to estimate the loadings γ_jh and the communalities so that the resulting estimates of the theoretical variances and covariances are close to the observed ones. The most common methods are:
- the principal components method
- the maximum likelihood method
25 / 62

Principal components. The principal component variables y_1, y_2, ..., y_k are defined to be linear combinations of the original variables X_1, X_2, ..., X_k that are uncorrelated and account for maximal proportions of the variation in the original data, i.e., y_1 accounts for the maximum amount of the variance among all possible linear combinations of X_1, X_2, ..., X_k (that is, it conveys the maximum informative contribution about the original variables), y_2 accounts for the maximum of the remaining variance subject to being uncorrelated with y_1, and so on. 26 / 62

Principal components method. Given X, the (n × k) matrix of the k original variables, and Σ, the (k × k) var-cov matrix of the original variables, the first principal component to be extracted is a linear combination of the X_j of the following kind:
y_1 = v_11 X_1 + v_12 X_2 + ... + v_1k X_k    (12)
or
y_1 = X v_1    (13)
where y_1 is the (n × 1) vector of the values of the first principal component and v_1 = (v_11, v_12, ..., v_1k)′ is the (k × 1) vector of the coefficients of the linear combination. v_1 has to be estimated in such a way that Var(y_1) = max under the constraint v_1′ v_1 = 1.
27 / 62
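As an illustrative numpy sketch (simulated data, not from the slides), the constrained maximization is solved by the eigendecomposition of the correlation matrix of the standardized variables; the check at the end confirms that Var(y_1) equals the first eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 5))   # toy correlated data
Z = (X - X.mean(axis=0)) / X.std(axis=0)                          # standardize the variables

Sigma = np.corrcoef(Z, rowvar=False)           # correlation matrix of the standardized data
lam, V = np.linalg.eigh(Sigma)                 # eigenvalues/eigenvectors (ascending order)
lam, V = lam[::-1], V[:, ::-1]                 # reorder in descending order

v1 = V[:, 0]                                   # first eigenvector: coefficients of y_1 = X v_1
y1 = Z @ v1                                    # scores of the first principal component
print(y1.var(ddof=0), lam[0])                  # Var(y_1) equals lambda_1 (up to rounding)
print(lam[0] / Z.shape[1])                     # share of total variability explained by y_1
```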

First principal component. The solution of the constrained maximization problem (that is, the vector v_1 that maximizes the variance of the first principal component subject to the constraint) is the first eigenvector of the Σ matrix. Moreover, Var(y_1) = λ_1, where λ_1 is the first eigenvalue of Σ. It holds that
Σ v_1 = λ_1 v_1    (14)
Since the total variability of the original variables (i.e. the sum of their variances) is equal to k (remember: they are standardized variables, each one has variance equal to one), the ratio λ_1 / k gives the share of total variability that is explained by the first principal component. 28 / 62

Second principal component. The second principal component is y_2 = X v_2, where v_2 is estimated in such a way that Var(y_2) = max under the constraints v_2′ v_2 = 1 and Cov(y_1, y_2) = 0. v_2 is the second eigenvector of the Σ matrix. Moreover, Var(y_2) = λ_2, where λ_2 is the second eigenvalue of Σ. The ratio λ_2 / k gives the share of total variability that is explained by the second principal component. 29 / 62

i-th principal component. The i-th principal component is y_i = X v_i, where v_i is estimated in such a way that Var(y_i) = max under the constraints v_i′ v_i = 1 and Cov(y_i, y_l) = 0 (l = 1, 2, ..., i−1). v_i is the i-th eigenvector of the Σ matrix, and for the corresponding eigenvalue λ_i it holds that Var(y_i) = λ_i. The ratio λ_i / k gives the share of total variability that is explained by the i-th principal component. The cumulative ratio (λ_1 + λ_2 + ... + λ_i) / k measures the share of total variability that is explained by the principal components up to the i-th. 30 / 62

Extraction of all the principal components. The method could in principle stop only when the number of extracted components equals the number of initial variables:
Y = XV    (15)
where Y is the (n × k) matrix of principal components, Y = ( y_1 y_2 ... y_k ), and V is the (k × k) matrix of eigenvectors of Σ, V = ( v_1 v_2 ... v_k ). 31 / 62

Covariance matrix of the principal components
L = Cov(Y) = diag(λ_1, λ_2, ..., λ_k)    (16)
where λ_1 ≥ λ_2 ≥ ... ≥ λ_k and ∑_{i=1}^k λ_i = k.
- y_1 has the greatest information content, y_2 the second greatest, and so on: each principal component brings an information content which is not greater than the one brought by the previous principal component
- the k principal components explain 100% of the original variability
However, in order for the method to actually produce a data reduction, the number of extracted components should be smaller than the original data dimension (m < k).
32 / 62

The choice of the number of components to be retained. The number of principal components can be either directly specified or determined through a statistical/heuristic criterion. In the former case, the analysis can be repeated with a different number of components and the solutions can then be compared according to goodness-of-fit statistics in order to choose the one that best describes the data. 33 / 62

The choice of the number of components to be retained. In the latter case, examples of heuristic criteria are:
1. to extract and retain only those components whose associated eigenvalues exceed one (one is the mean value of the eigenvalues)
2. to retain those components that explain a given share - usually higher than 70-75% - of the original variability (a 30% loss of variability can usually be accepted against a reduction in the data dimensions)
3. to use the scree plot (the plot of the eigenvalues - y axis - against the order of extraction - x axis); the extraction should be stopped when the plot becomes flat (the elbow rule)
34 / 62
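A small numpy sketch (simulated data, thresholds as stated in the slide) of the first two heuristic rules; the scree plot of rule 3 would be drawn from the same eigenvalue vector.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 8)) @ rng.standard_normal((8, 8))   # toy data, 8 variables
R = np.corrcoef(X, rowvar=False)

lam = np.sort(np.linalg.eigvalsh(R))[::-1]     # eigenvalues in descending order
share = lam / lam.sum()                        # proportion of variance per component
cum = share.cumsum()

kaiser = int((lam > 1.0).sum())                # rule 1: eigenvalues greater than the mean (= 1)
m_var = int(np.searchsorted(cum, 0.75) + 1)    # rule 2: smallest m explaining at least 75%

for i, (l, s, c) in enumerate(zip(lam, share, cum), start=1):
    print(f"{i}: eigenvalue={l:.3f}  proportion={s:.3f}  cumulative={c:.3f}")
print("retain (eigenvalue > 1 rule):", kaiser, " retain (75% rule):", m_var)
# Rule 3 (scree plot): plot lam against 1..k and stop at the elbow, where the curve flattens.
```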

Reading the FA output

 i | Eigenvalue λ_i | Difference λ_i − λ_{i+1} | Proportion λ_i / k | Cumulative proportion ∑_{j=1}^i λ_j / k
 1 | 5.363 | 3.789 | 0.536 | 0.536
 2 | 1.574 | 0.248 | 0.157 | 0.693
 3 | 1.326 | 0.439 | 0.133 | 0.826
 4 | 0.887 | 0.347 | 0.089 | 0.915
 5 | 0.540 | 0.332 | 0.054 | 0.969
 6 | 0.208 | 0.132 | 0.021 | 0.990
 7 | 0.076 | 0.055 | 0.008 | 0.998
 8 | 0.021 | 0.016 | 0.002 | 1.000
 9 | 0.005 | 0.005 | 0.000 | 1.000
10 | 0.000 | -     | 0.000 | 1.000

Based on the rule of eigenvalues greater than the average, three factors may be retained. The cumulative proportion of variance explained by three factors is 82.6%. 35 / 62

Scree plot 36 / 62

From principal components to factor loadings. Once we have retained the first m principal components:
Y: (n × m) matrix of the m retained principal components, Y = ( y_1 y_2 ... y_m )
V: (k × m) matrix of the corresponding eigenvectors of Σ, V = ( v_1 v_2 ... v_m )
L = Cov(Y) = diag(λ_1, λ_2, ..., λ_m)
The matrix of (initial) factor loadings is Γ = Σ V L^(−1/2) (equivalently V L^(1/2), since ΣV = VL). 37 / 62
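A short numpy illustration of the loading formula (simulated data, m = 2 chosen arbitrarily), including a check that Σ V L^(−1/2) equals V L^(1/2) and that the row sums of squared loadings give the communalities.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 6)) @ rng.standard_normal((6, 6))   # toy data, 6 variables
Sigma = np.corrcoef(X, rowvar=False)                              # correlation matrix

lam, V = np.linalg.eigh(Sigma)
lam, V = lam[::-1], V[:, ::-1]                                    # descending order

m = 2                                                             # number of retained components
V_m = V[:, :m]

# Initial loading matrix: Gamma = Sigma V L^(-1/2)
Gamma = Sigma @ V_m @ np.diag(lam[:m] ** -0.5)
print(np.allclose(Gamma, V_m @ np.diag(lam[:m] ** 0.5)))          # True: same as V L^(1/2)
print(np.round((Gamma ** 2).sum(axis=1), 3))                      # communalities of the 6 variables
```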

Interpretation of the factor solution. Factors are artificial constructs. Meaning is assigned to a factor through the subset of observed variables that have high loadings on that factor. The interpretation of the factors is an easy task if every one of them is strongly correlated with a limited number of original variables and weakly correlated with the remaining variables (the higher the loadings of a few variables on one factor, the more interpretable the factor). 38 / 62

Statistical relevance of a factor loading. Rule of thumb: with a sample size of n = 200 units, a reasonable threshold for a factor loading to be relevant is 0.40. It rises to 0.55 with n = 100 and to 0.75 with n = 50. Usually the initial factors show average correlations with many original variables. The initial factor solution can then be rotated with the purpose of creating new factors that are associated with few original variables and for this reason are more interpretable than the initial ones. 39 / 62

Aim of the rotation. The factor rotation takes advantage of a property of the factor model: there exists an infinite number of sets of values for the factor loadings yielding the same covariance matrix as that of the original model. Any new set of loadings is produced by a rotation of the initial solution. Let the initial factor solution represent an m-dimensional hyperplane: each original variable corresponds to a point whose coordinates are its loadings on the m factors. With the purpose of getting more interpretable factors, the aim of the rotation is to find new coordinate axes such that every point-variable is as close as possible to one of the new axes. 40 / 62

Rotation of the factors: orthogonal vs. oblique
- Orthogonal rotation methods: the factors remain mutually uncorrelated
- Oblique rotation methods: the factors become correlated
41 / 62

Orthogonal rotation methods
- Varimax method: ensures that only one or a few observed variables have large loadings on any given factor. The aim is to maximize the variability of the columns of the initial loading matrix. The rotated factor loadings will be very close either to one (in absolute value) or to zero, which facilitates the matching of the variables to a given factor
- Quartimax method: ensures that each variable has large loadings only on one or a few factors. The objective is to maximize the variability of the rows of the initial loading matrix. Several variables may turn out to be strongly related to the same factor
- Equamax method: a compromise between the two previous methods
42 / 62
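The slides do not show the rotation algorithm itself; as a sketch under stated assumptions, the code below implements the standard SVD-based varimax iteration (a generic textbook formulation, not the routine of any specific package), applied to a small hypothetical loading matrix.

```python
import numpy as np

def varimax(Lmat, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonally rotate a loading matrix to (approximately) maximize the varimax criterion."""
    p, m = Lmat.shape
    R = np.eye(m)
    d = 0.0
    for _ in range(max_iter):
        Lr = Lmat @ R
        # Gradient-like term of the varimax objective
        B = Lmat.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0)))
        U, s, Vt = np.linalg.svd(B)
        R = U @ Vt
        d_new = s.sum()
        if d_new < d * (1.0 + tol):   # stop when the criterion no longer improves
            break
        d = d_new
    return Lmat @ R, R

# Hypothetical initial loadings for 5 variables on 2 factors (illustrative values)
Gamma = np.array([[0.79, -0.60], [0.79, -0.61], [0.78, 0.40], [0.67, 0.57], [0.85, 0.30]])
Gamma_rot, R = varimax(Gamma)
print(np.round(Gamma_rot, 3))   # after rotation, each variable loads mainly on one factor
```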

From factor loadings to factor scores. Let Γ_0 = Σ V_0 L^(−1/2) indicate the rotated loading matrix. The matrix of factor scores is then derived as F = X V_0 L^(−1/2): the principal components after the rotation are rescaled in order for them to have unit variance. 43 / 62
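A possible numpy sketch of the score computation on simulated data, assuming the principal-components extraction described above; the orthogonal rotation here is only a random stand-in, used to show that rotated scores keep unit variance.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((250, 6)) @ rng.standard_normal((6, 6))   # toy data
Z = (X - X.mean(axis=0)) / X.std(axis=0)                          # standardized variables

Sigma = np.corrcoef(Z, rowvar=False)
lam, V = np.linalg.eigh(Sigma)
lam, V = lam[::-1], V[:, ::-1]

m = 2
# Unrotated factor scores: F = Z V L^(-1/2), rescaled so each column has unit variance
F = Z @ V[:, :m] @ np.diag(lam[:m] ** -0.5)
print(np.round(np.cov(F, rowvar=False, ddof=0), 3))               # approx. identity matrix

# After an orthogonal rotation R of the loadings, the rotated scores are simply F @ R
R_rot, _ = np.linalg.qr(rng.standard_normal((m, m)))              # stand-in orthogonal rotation
F_rot = F @ R_rot
print(np.round(np.cov(F_rot, rowvar=False, ddof=0), 3))           # still the identity
```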

Example 1 - Supermarkets - Rotated loadings

Item | F_1 (setting) | F_2 (position) | F_3 (price) | Communality
Convenience in going to the store | 0.139 | 0.845 | 0.025 | 0.734
Product price | 0.084 | 0.178 | 0.834 | 0.734
Store location | 0.076 | 0.873 | 0.059 | 0.771
Sales promotion | 0.269 | 0.094 | 0.764 | 0.665
Width of aisle in the store | 0.841 | 0.037 | 0.122 | 0.723
Store atmosphere and decoration | 0.830 | 0.114 | 0.016 | 0.702
Store size | 0.791 | 0.123 | 0.062 | 0.645
% of variance | 30.378 | 22.085 | 18.595 |
Cumulative % of variance | 30.378 | 52.463 | 71.058 |

44 / 62

Example 1 - Use of the factor scores and results. The three factor scores resulting from the factor analysis are then used as independent variables in a logit regression analysis. Dependent variable: store preference (binary choice: e.g. supermarkets in a department store vs. stand-alone supermarkets). The results can be used to elaborate management strategies: when interested in expanding supermarket outlets in department stores, the factors which most influence the probability of preferring the department stores should be the primary focus. 45 / 62

Example 2 - Local Products - FA output

 i | Eigenvalue λ_i | Difference λ_i − λ_{i+1} | Proportion λ_i / k | Cumulative proportion ∑_{j=1}^i λ_j / k
 1 | 5.484 | 3.520 | 0.323 | 0.323
 2 | 1.964 | 0.407 | 0.115 | 0.438
 3 | 1.557 | 0.300 | 0.092 | 0.530
 4 | 1.257 | 0.174 | 0.074 | 0.604
 5 | 1.083 | 0.285 | 0.064 | 0.668
 6 | 0.798 | 0.005 | 0.047 | 0.715
 7 | 0.793 | 0.112 | 0.047 | 0.762
 8 | 0.681 | ...   | 0.040 | 0.802
 ... | ... | ... | ... | ...

Five factors, explaining 66.8% of the total variance, were extracted; they represent the key consumption dimensions. 46 / 62

Example 2 - Rotated Loadings

Factor 1: Topicality
Original variables | Loading
Production methods | 0.824
Appearance of a special label | 0.725
Products with chemical additives | 0.677
Help to the local economy | 0.650
Price | 0.575
High value | 0.562

Factor 2: Quality and Health Issues
Original variables | Loading
Quality | 0.832
Health protection | 0.703
Environmental protection | 0.680
Nutrition value | 0.459

47 / 62

Example 2 - Rotated Loadings

Factor 3: Appearance
Original variables | Loading
Appearance | 0.877
Attractiveness of the product's packaging | 0.834

Factor 4: Freshness and Taste Issues
Original variables | Loading
Freshness of the product | 0.723
Taste of the product | 0.612
Interest about the product being clean | 0.570

Factor 5: Curiosity and Prestige
Original variables | Loading
Curiosity | 0.862
Prestige | 0.859

48 / 62

Example 2 - Input for a segmentation analysis. By replacing the original 17 variables with the 5 factors, a segmentation analysis has been performed (through cluster analysis) with the aim of identifying homogeneous groups of consumers. Two groups result, which have been named according to their behaviour patterns towards local products as:
- Consumers influenced by curiosity, prestige and freshness of the product as well as by marketing issues (attractiveness of the packaging, the appearance of the product in general)
- Consumers interested in the topicality of the product, in the product's certification and in environmental protection. They pay attention to the ingredients of the product as well as to its price
49 / 62

Observed data - Beach resorts. The following slides are based on: Bracalente B., Cossignani M., Mulas A. (2009), Statistica aziendale, McGraw-Hill. On a sample of beach resorts, the prices of several beach facilities have been observed.

Variable name | Description
bed_d | Bed per day
chair_d | Chair per day
umb2beds_d | Umbrella and two beds per day
bed_a | Bed (only afternoon)
bed_w | Bed per week
umb+2beds_w | Umbrella and two beds per week
paddle_h | Paddle boat per hour

50 / 62

FA output

Factor | Eigenvalue λ_i | Difference λ_i − λ_{i+1} | Proportion λ_i / k | Cumulative proportion ∑_{j=1}^i λ_j / k
F_1 | 4.351 | 3.287 | 0.622 | 0.622
F_2 | 1.064 | 0.443 | 0.152 | 0.774
F_3 | 0.621 | 0.002 | 0.089 | 0.862
F_4 | 0.619 | 0.432 | 0.088 | 0.951
F_5 | 0.187 | 0.066 | 0.027 | 0.978
F_6 | 0.121 | 0.084 | 0.017 | 0.995
F_7 | 0.037 | -     | 0.005 | 1.000

The first two eigenvalues are greater than one. The corresponding factors explain 77.4% of the original variability. Two factors are extracted. 51 / 62

Scree Plot 52 / 62

Loading matrix - Initial solution

Variable | F_1 | F_2 | Communality
bed_d | 0.9588 | 0.1094 | 0.9588² + 0.1094² = 0.9313
chair_d | 0.9251 | 0.0831 | 0.9251² + 0.0831² = 0.8627
umb2beds_d | 0.8662 | -0.3390 | 0.8662² + (-0.3390)² = 0.8652
bed_a | 0.7799 | 0.1148 | 0.7799² + 0.1148² = 0.6214
bed_w | 0.7684 | 0.0482 | 0.7684² + 0.0482² = 0.5928
umb+2beds_w | 0.7492 | -0.3277 | 0.7492² + (-0.3277)² = 0.6686
paddle_h | 0.2567 | 0.8987 | 0.2567² + 0.8987² = 0.8735

For all the observed variables, the proportion of variance accounted for by the common factors (the communality) is very high, ranging from 59.3% to 93.1%. The first factor is positively related to the prices of beds, umbrellas and chairs. The second factor accounts for the price of the paddle boat. 53 / 62

Loading Plot - Initial solution 54 / 62

Loading matrix after rotation

Variable | F_1 | F_2
bed_d | 0.9198 | 0.2917
chair_d | 0.8919 | 0.2594
umb2beds_d | 0.9152 | -0.1661
bed_a | 0.7432 | 0.2626
bed_w | 0.7448 | 0.1950
umb+2beds_w | 0.7982 | -0.1775
paddle_h | 0.0792 | 0.9313

After the rotation, the first factor shows strong (positive) correlations with the first six original variables. The second factor is strongly associated with the last variable. 55 / 62

Loading Plot after rotation 56 / 62

Retailer customers. A retailer asks a sample of customers about their monthly income and consumption expenditure (in thousands of euro) and their opinion (score from 0 to 10) on three sections of the store (meat, fish and frozen food). Can the five original variables be summarized by a smaller number of factors? How many factors are needed and what percentage of the original variability do they explain? How can the resulting factors be interpreted?

Factor | Eigenvalue λ_i | Difference λ_i − λ_{i+1} | Proportion λ_i / k | Cumulative proportion ∑_{j=1}^i λ_j / k
F_1 | 3.0217 | 1.7060 | 0.604 | 0.604
F_2 | 1.3157 | 0.9266 | 0.263 | 0.868
F_3 | 0.3891 | 0.1263 | 0.078 | 0.945
F_4 | 0.2628 | 0.2521 | 0.053 | 0.998
F_5 | 0.0107 | -      | 0.002 | 1.000

57 / 62

Loading matrix - Initial solution

Variable | F_1 | F_2 | Communality | Unexplained
income | 0.7911 | -0.6016 | 0.9877 | 0.0123
consumption | 0.7869 | -0.6087 | 0.9896 | 0.0104
q_meat | 0.7768 | 0.4035 | 0.7662 | 0.2338
q_fish | 0.6691 | 0.5735 | 0.7766 | 0.2234
q_froz | 0.8519 | 0.3025 | 0.8172 | 0.1828

58 / 62

Loading plot - Initial solution 59 / 62

Loading matrix after rotation

Variable | F_1 | F_2 | Communality | Unexplained
income | 0.1683 | 0.9795 | 0.9877 | 0.0123
consumption | 0.1604 | 0.9818 | 0.9896 | 0.0104
q_meat | 0.8433 | 0.2346 | 0.7662 | 0.2338
q_fish | 0.8805 | 0.0369 | 0.7766 | 0.2234
q_froz | 0.8294 | 0.3597 | 0.8172 | 0.1828

60 / 62

Loading plot after rotation 61 / 62

References
Bartholomew, D.J. (1987), Latent Variable Models and Factor Analysis, Charles Griffin & Company Ltd., London.
Bracalente, B., Cossignani, M., Mulas, A. (2009), Statistica aziendale, McGraw-Hill.
Tryfos, P. (1998), Methods for Business Analysis and Forecasting: Text and Cases, John Wiley & Sons.
62 / 62