Functional Analysis of Real World Truck Fuel Consumption Data




Technical Report, IDE0806, January 2008
Functional Analysis of Real World Truck Fuel Consumption Data
Master's Thesis in Computer Systems Engineering
Georg Vogetseder
School of Information Science, Computer and Electrical Engineering, Halmstad University

Functional Analysis of Real World Truck Fuel Consumption Data
School of Information Science, Computer and Electrical Engineering
Halmstad University
Box 823, S-301 18 Halmstad, Sweden
January 2008

Acknowledgement

"If it looks like a duck, and quacks like a duck, we have at least to consider the possibility that we have a small aquatic bird of the family Anatidae on our hands."
Douglas Adams (1952-2001)

Thanks to my family, especially my mother Eva, and friends.

Abstract

This thesis covers the analysis of sparse and irregular fuel consumption data of long distance haulage articulated trucks. It is shown that this kind of data is hard to analyse with multivariate as well as with functional methods. To be able to analyse the data, Principal Components Analysis through Conditional Expectation (PACE) is used, which enables the use of observations from many trucks to compensate for the sparsity of observations in order to get continuous results. The principal component scores generated by PACE can then be used to get rough estimates of the trajectories for single trucks as well as to detect outliers. The data centric approach of PACE is very useful for enabling functional analysis of sparse and irregular data. Functional analysis is desirable for this data because it sidesteps feature extraction and allows a more natural view of the data.

Contents

Acknowledgement
Abstract
List of Figures
List of Tables

1 Introduction
  1.1 Background
  1.2 Motivation and Novelty
  1.3 Related Work
  1.4 Limitations
  1.5 Outline

2 Methods
  2.1 General Statistical Methods
    2.1.1 PCA
    2.1.2 Hierarchical Clustering
    2.1.3 Validation Methods
    2.1.4 Diagrams
  2.2 Functional Data Analysis
  2.3 Principal Components Analysis through Conditional Expectation

3 The Vehicle Application and Data Description
  3.1 Data
    3.1.1 Impurities in the Truck Data
    3.1.2 Data structure
  3.2 Approach

4 Results
  4.1 Basic Data Analysis
    4.1.1 Data Binning
    4.1.2 Feature Extraction
    4.1.3 Function Fitting
  4.2 Application of PACE
    4.2.1 Baseline PACE Results
    4.2.2 Number of Principal Components
    4.2.3 Error Assumptions in PACE
    4.2.4 Different Kernel Functions
    4.2.5 Variances
      4.2.5.1 Model Variance
      4.2.5.2 Data Variance
  4.3 Prediction
  4.4 Outlier Detection
  4.5 Expansion

5 Discussion

6 Conclusion

Bibliography

List of Abbreviations

List of Figures

3.1 Fuel Consumption between Observations
3.2 Fuel consumption plot generated from the raw data
3.3 Histograms of the original and the cleaned data
3.4 Fuel consumption plot generated from the clean data
3.5 Scatter plot and histograms
3.6 Histogram of the distance between observations
4.1 Distribution and mean/variance of binned data
4.2 Boxplots of binned data
4.3 Outlier detection based on feature extraction
4.4 Straight line fitting
4.5 Plot of mean function and principal components
4.6 Scree Plot
4.7 Smoothed covariance matrix
4.8 Reconstructed curves versus mean function and raw observations of selected trucks
4.9 Reconstructed curves and raw measurements for all trucks
4.10 Reconstructed traces of misfitted trucks
4.11 Comparison of reconstructed trajectories with differing number of PCs
4.12 Reconstructed trajectories without measurement error assumed
4.13 A comparison of µ with different smoothing kernels
4.14 A comparison of PCs with different smoothing kernels
4.15 Distribution of all mean curves
4.16 Graph of all mean curves
4.17 Trucks with a high influence on the results of PACE
4.18 Data variance

4.19 Normal Distribution Plots of the PC scores
4.20 Histograms of the probability of trucks
4.21 Samples of truck probability
4.22 PACE Results of Speed Data
4.23 PACE Results on Seasonal Fuel Consumption
4.24 Selected trucks from the Seasonal Fuel Consumption Data

List of Tables

4.1 MSE of PACE with 8 principal components
4.2 MSE of PACE with 3 PCs
4.3 MSE of PACE with 4 PCs
4.4 MSE of PACE with 29 PCs
4.5 MSE of PACE with 8 PCs and error cut-off

1 Introduction

1.1 Background

The original idea for analyzing this data came from Volvo Parts AB, one of the main business units of Volvo Group AB. The role of Volvo Parts is to provide solutions and tools to the after-market, which includes vehicle electronics diagnostic tools. When a truck is in the workshop, the vehicle electronics data is read out from the truck using diagnostics tools from Volvo Parts and transmitted to a central database. This data, which is collected from sensors in the vehicle's electronic systems, is called logged vehicle data (LVD). Several electronic subsystems supply information for LVD, which can include data from the electronic suspension, the transmission, and most importantly from the Engine Electronic Control Unit. The current main use of LVD is seemingly just basic analysis, e.g. remote diagnostics of faulty components and simple statistics.

One of the problems with analysing LVD is the relative lack of observations, which stems from the data retrieval process. The procedure is time consuming, making it a cost factor for the workshops. This time consumption affects the adoption rate of the procedure in the field negatively, which leads to the data composition detailed in Section 3.1.

The basic idea behind the problems detailed in this thesis is to expand the usefulness of the data for Volvo Parts, retrieving additional new information from it and providing means to access this information. This is done by using recent advanced statistical techniques.

As a starting point for the application of these techniques, the analysis of the fuel consumption data contained in LVD was suggested. Fuel consumption data is very interesting from a statistical point of view. This interest stems from fuel being a major cost factor, as well as from its being influenced by a high number of other factors, such as:

- Usage patterns of the operator, i.e. the driving style and habits
- Maintenance of the truck
- Gross Combination Weight usage, i.e. the cargo of the truck
- Environment, i.e. hilliness, road condition, etc.

The influence of these and more factors makes this data a good indicator. But the mass of influences also makes exact determination of the underlying cause impossible. Additionally, some of these influences might cancel each other out, thus removing information. If it is possible to extract information from fuel consumption data, then it should work for the rest of the data too.

1.2 Motivation and Novelty

From LVD, it should be possible to extract information on hidden trends, i.e. the principal components (see Section 2.1.1) that are common to all similar trucks. Based on these components, it should be possible to determine if a truck is unrelated to other trucks, i.e. an outlier, and to predict future developments in fuel consumption when the truck's behavior is similar to that of other vehicles. It is very easy to take the last observation of each truck in a group of similar trucks to determine abnormal fuel consumption, but it is hardly possible to calculate underlying trends or other information from these facts. To discover information like trends or outliers from LVD, the data of a truck has to include not only the last observation available, but also past ones. These requirements, multiple observations of a truck and a set of similar trucks, lead to the irregular and sparse structure of the data used in this thesis. The data is described in more detail in Section 3.1.

The analysis of this data can be done in at least two ways. The most obvious choice in methodology would be the use of multivariate statistics, but for several reasons detailed below, the central methodology for this thesis is functional statistics. Functional statistics focuses on analysing the data as functions, rather than as a set of discrete values.¹ Multivariate statistics are a set of methods which work on more than one variable at a time. Some examples of these methods are regression analysis, principal components analysis and artificial neural networks. In principle, functional statistics are also part of this set, as both have multiple variables as input. However, the focus on handling the input variables as continuous functions rather than arbitrary variables separates these two fields.

As the observation of trucks in the workshop does not happen regularly, i.e. the observations cannot be fitted to a grid, it is difficult to incorporate all information from the input into variables for use in multivariate statistics. Therefore, features like mean, variance, duration of all observations, date of first observation, odometer count at the last observation, etc. have to be extracted from the data to be able to do analysis. Inevitably, the extraction of this knowledge leads to information loss, which is problematic with this already sparse data. The process of discovery and selection of important features for multivariate analysis is very difficult and time consuming. It is crucial to extract and select the best and most important features from the data to minimize the data loss and maximize the information content of the features for the success of all further steps in analysis. Feature extraction creates an additional layer of data processing and introduces a large number of tunable parameters.

Functional Data Analysis (FDA), on the other hand, preserves the information present in the data and does not need feature extraction at all. Furthermore, it facilitates a more natural handling of the data, describing not only more or less abstract features of the data, but a function which resembles the data. The choice of functional over multivariate data analysis is also motivated by the ability to analyze the functional properties of the data, e.g. derivatives of the data. Additionally, FDA does not introduce a high number of additional parameters, unlike multivariate analysis.

¹ A more detailed description of this collection of methods can be found in Section 2.2.

However, multivariate analysis has an advantage over FDA when a high number of different functions have to be analysed at the same time. FDA has problems in visualizing this higher dimensional data, as well as the necessity of having a high amount of data for each dimension (curse of dimensionality).

The most important step in FDA is the transformation of the discrete data to a functional basis. Again, the irregular and sparse nature of the data makes this transformation difficult. To be able to perform FDA on this data, a method called Principal Components Analysis through Conditional Expectation (PACE) is applied. The foundation of PACE is the assumption that a smooth function underlies the sparse data. Under this assumption, it is possible to use even irregular data for the discovery of principal components.

The main novel aspect of this thesis is the application of FDA and PACE to automotive data. Previously it has successfully been applied to biological data, economic processes and bidding in online auction houses, but not to automotive data. PACE itself is highly interesting to apply to the data at hand, because it is able to work on it without the need for feature extraction or regular observations. The methods used in this work can be used to describe the actual fuel consumption of the observed trucks in customer hands. This means the methods applied to LVD are driven by data and not by a model.

1.3 Related Work

General sources of information on data analysis related to this work are The Elements of Statistical Learning [1], Functional Data Analysis [2] and Nonparametric Functional Data Analysis [3]. The single most important paper related to this work is Functional Data Analysis for Sparse Longitudinal Data [4], which proposed the PACE method and applied it to yeast cell cycle gene expression data and to longitudinal CD4 cell percentages. The percentage is used as a marker for the progress of AIDS in adults.

Functional Data Analysis for Sparse Auction Data [5] combines the PACE approach with linear regression to predict closing prices of online auctions. The most related of the few public papers on fuel consumption in heavy trucks is Heavy Truck Modeling for Fuel Consumption Simulations and Measurements [6]. This work deals with building a simulation model of fuel consumption. Another paper, which discusses methods to reduce idle fuel consumption in North American long distance trucks and highlights typical driver behavior, is Analysis of Technology Options to Reduce the Fuel Consumption of Idling Trucks [7].

Additional information on doing PCA on sparse and irregular data can be found in Principal component models for sparse functional data [8] and Sparse Principal Component Analysis [9]. More related to PACE is Properties of principal component methods for functional and longitudinal data analysis [10]. Another paper related to the estimation of functional principal component scores is [11]. Knowledge relating to linear regression analysis for longitudinal data can be found in [12].

1.4 Limitations

The scope of this thesis is to research the possibilities for the application of FDA methods to the sparse and irregular automotive data from LVD. It is outside the scope of this thesis to establish a conclusive theory about a true long term fuel consumption model of all truck engines. Such a conclusive, globally valid model is impossible because of the relatively low number of individuals in the data, as well as the limited observation duration and possible differences in usage patterns of the trucks, i.e. vehicles with a high mileage in a limited time span do not necessarily exhibit a fuel consumption similar to that of low mileage trucks in the same time span.

1.5 Outline

The next chapter, Methods, describes the crucial methods used in this work. This includes underlying basic methods as well as the foundations of FDA and PACE. The chapter The Vehicle Application and Data Description provides a description of the data used in this thesis and includes information on the interplay of the proposed methods and the data. Chapter 4 provides comprehensive information on the results. The last two chapters, Discussion and Conclusion, wrap up the results from this thesis and provide an outlook on possible continuations of the research.

2 Methods

This chapter is divided into three parts. General Statistical Methods describes non-functional methods which are fundamental to this work. Functional Data Analysis provides an introduction to this field. The final part, Principal Components Analysis through Conditional Expectation, gives an overview of this crucial method.

2.1 General Statistical Methods

This section introduces general statistical concepts used in this thesis and a number of tools to visualize data and test results.

2.1.1 Principal Component Analysis

One of the constitutional methods for analysing LVD is the Karhunen-Loève transformation, universally known as Principal Component Analysis (PCA). PCA is also the foundation of Functional Principal Component Analysis (FPCA) [1, 13]. Basically, PCA is a method to explore data by finding the most important ways in which the variables in the data differ from one another. It can compress the data by discovering a low number of linear combinations of input variables which contribute most to the variability of the input. These linear combinations are found by constructing a linear basis for the data in which the retained variability is maximal.

Mathematically speaking, the goal is to reduce or compress high dimensional data X to lower dimensional data Y. A number of algorithms are available for this reduction; here, a method involving the calculation of the covariance is described. The first step is to calculate the mean µ_i of each variable:

µ_i = (1/K_i) Σ_{j=1}^{K_i} x_{ij},   i = 1, ..., N

where N denotes the number of variables and K_i the number of observations in one variable. Subsequently, µ is removed from every observation in X; the centred data is denoted as X − µ in the following.

In the next step the covariance matrix cov(X − µ) has to be calculated. Covariance is a measure of how two variables vary together. If the two variables vary in the same way (i.e. with the same sign), the covariance will be positive. If, on the other hand, the two variables vary with opposite signs, the covariance will be negative. A covariance matrix is the result of calculating the covariance for all members of two vectors. The resulting matrix gives the grade of correlation between the input vectors.

To find a mapping M that is able to transform the high dimensional data into low dimensional data, an M that maximizes

Mᵀ cov(X − µ) M

has to be found. It can be shown that the best (variance maximizing) mapping is formed by the eigenvectors of the covariance matrix. Hence, PCA has to solve the eigenproblem

cov(X − µ) M = λ M

to get the transformation matrix. The eigenproblem has to be solved d times with different principal eigenvalues λ to get the d principal eigenvectors (or principal components). The low dimensional representation Y can then be computed by simple multiplication:

Y = (X − µ) M
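As an illustration of the eigendecomposition route just described, the following sketch uses NumPy on an invented data matrix; it is a generic example, not the code used for the thesis.

```python
import numpy as np

# Toy data matrix: K observations (rows) of N variables (columns).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

# Centre each variable by subtracting its mean vector mu.
mu = X.mean(axis=0)
Xc = X - mu

# Covariance matrix of the centred data.
C = np.cov(Xc, rowvar=False)

# Eigendecomposition; eigh is used because C is symmetric.
eigvals, eigvecs = np.linalg.eigh(C)

# Sort the eigenpairs by decreasing eigenvalue and keep the first d components.
order = np.argsort(eigvals)[::-1]
eigvals, M = eigvals[order], eigvecs[:, order]
d = 2
Y = Xc @ M[:, :d]                       # low dimensional representation
explained = eigvals[:d].sum() / eigvals.sum()
print(Y.shape, round(explained, 3))
```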

2.1.2 Hierarchical Clustering

Hierarchical clustering is a relatively simple method [1] to segment data into related groups. Clustering is used within this thesis to test whether distinct clusters of trucks can be found from extracted features. Hierarchical clustering needs a dissimilarity measure between the elements. The standard measure of dissimilarity is the Euclidean distance, which is also used in this thesis. When the distance between all possible pairs of elements has been calculated, the clusters can be built. For building these clusters, there are two different approaches: the agglomerative approach, which starts with as many clusters as there are individuals, and the divisive method, which starts with one big cluster that is then split into smaller clusters. Agglomerative methods are guaranteed to have a monotonically increasing level of dissimilarity between merged clusters, growing with the level of merging. This property is not guaranteed for divisive approaches.

The second choice in building the clusters is the measurement of the distance between two clusters:

- Single Linkage: the link between the clusters is defined by the smallest distance between elements in the two clusters.
- Complete Linkage: the link is defined by the largest distance between elements in the two clusters, the opposite of the first method.
- Average Linkage: uses the average distance between all pairs of elements in both clusters.

2.1.3 Validation Methods

A number of methods to validate the results and to estimate variation were used in the scope of this thesis. These include brief usage of the bootstrap, the jackknife and various cross validation methods, such as k-fold and leave-one-out [1]. Bootstrapping is the process of randomly drawing samples from the given observations, where a single observation can be chosen multiple times. The goal of a bootstrap is to approximate the distribution of a statistic from these samples.
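A minimal sketch of the bootstrap idea on an invented vector of fuel mileage values; resampling with replacement and recomputing the mean approximates the distribution of the mean, the same principle that is later applied per bin for Figure 4.2.

```python
import numpy as np

rng = np.random.default_rng(1)
observations = rng.normal(loc=2.6, scale=0.2, size=40)   # hypothetical fuel mileage values [km/l]

n_boot = 10000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    # Draw a sample of the same size with replacement.
    sample = rng.choice(observations, size=observations.size, replace=True)
    boot_means[b] = sample.mean()

# The spread of the bootstrapped means approximates the variability of the mean estimate.
print(round(boot_means.mean(), 3), round(boot_means.std(), 3))
```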

Jackknifing can be used to estimate bias and standard error. The jackknife is very similar to k-fold and leave-one-out cross validation, as it systematically removes one or more observations from a sample and then recalculates the results as often as there are possible readouts.

2.1.4 Diagrams

A number of special diagrams were used to illustrate some results of this thesis: dendrograms, boxplots and scree plots [1, 2].

Dendrograms are tree diagrams which are used to illustrate the result of a clustering algorithm. An example of such a diagram is Figure 4.3. On the vertical axis the distance between clusters is plotted. A horizontal line denotes a split between classes at this specific distance measure. This implies that a split at a higher distance value has a higher dissimilarity between the split classes than a split at a lower distance value.

Boxplots describe groups of data, such as binned data, through five statistical properties. A boxplot example can be seen in Figure 4.2. The box represents the lower and the upper quartile, showing where half of the data is contained. The line in this box illustrates the median of the data in this group. The whiskers attached to this box extend to the furthest data point, up to a maximum of 1.5 times the distance between the quartiles. Data points outside of this boundary are usually marked with a cross, indicating a possible outlier.

Scree plots give an indication of the relevance of a principal component (eigenfunction) by indicating the accumulated eigenvalue up to the n-th principal component. This plot can be used to select a suitable number of eigenfunctions. An example of a scree plot is Figure 4.6.

2.2 Functional Data Analysis

Functional data analysis (FDA) [2, 3] is a collection of methods which enable the investigation of data in a functional form. Functional data is the idea of looking at a

set of observations not as a vector in discrete time, but as a continuous function. The analysis of functions rather than discrete samples has advantages over multivariate analysis. One advantage is that the rate of change, or derivatives, of these functions can easily be calculated and analysed. FDA also includes variants of multivariate methods like PCA. Functional PCA, like ordinary PCA, not only provides a method for dimensionality reduction, but also characterizes the main modes of variation around a mean function.

To perform FDA on discretely sampled data, the data has to be converted to a continuous, functional format. This means a function has to be fitted to the sampled data points. It is not feasible to convert every dataset to a functional form. Especially in the case of sparse and irregular observations, this task is very difficult, but it is central to the success of functional data analysis. Usually, the methods used to convert data into a functional format are interpolation and smoothing, or more generally function fitting. A very simple method for this conversion would be a least squares fit of a first order polynomial (a straight line). Usually, a more flexible method is used for this step, namely spline interpolation. Depending on the underlying data, other fits like Fourier functions are possible. FDA is easily applicable if the measurements were made with a regular spacing and the data is complete over the observation duration. In the opposite case, it is very difficult to estimate the complete trajectory when only a single subject is taken into the calculation.
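To make the conversion step concrete, the sketch below fits a smoothing spline to a densely and regularly observed toy curve with SciPy; the data and the smoothing parameter are invented, and it is exactly this simple per-subject approach that breaks down for the sparse truck data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy example: a regularly sampled, fully observed curve with noise.
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + np.random.default_rng(2).normal(scale=0.1, size=x.size)

# Smoothing spline; s controls the trade-off between fidelity and smoothness.
spline = UnivariateSpline(x, y, s=0.5)

# Once the data is in functional form, derivatives are directly available.
grid = np.linspace(0, 1, 200)
curve = spline(grid)
slope = spline.derivative()(grid)
print(curve[:3], slope[:3])
```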

2.3 Principal Components Analysis through Conditional Expectation

Principal Components Analysis through Conditional Expectation (PACE) is a derivative of functional principal components analysis for sparse longitudinal data, proposed in the paper Functional Data Analysis for Sparse Longitudinal Data by Yao, Müller and Wang [4].

PACE is an algorithm for extracting the principal components from irregular and sparse data. It also provides an estimation of individual smooth trajectories of the data. PACE assumes that the data is randomly located with a random number of observations per subject. Furthermore, it assumes that the data is determined by an underlying smooth trajectory.

The first step in PACE is the estimation of the smooth mean function µ, by using a local linear line smoother on all measurements combined into one pool of data. The choice of the smoothing parameter, or bandwidth, is done automatically [14] or by hand in this step. The covariance surface can then be calculated like a regular covariance matrix. This raw covariance surface is stripped of the variance (the main diagonal) and then smoothed utilizing a local linear surface smoother. The bandwidth is chosen by leave-one-curve-out cross-validation. The smoothing step is necessary to fill in for missing observations. The estimation of these two model components shares the same smoothing kernel. The choice of a smoothing kernel is discussed in Chapter 4.

From these model components, it is possible to calculate the estimates of the eigenvalues and eigenfunctions, i.e. the functional principal components of the sparse and irregular data. The last step is the calculation of the functional principal component scores. Those scores describe how much of a principal component is retained in a single subject. However, the conventional method of using numerical integration to recover the Principal Component (PC) scores leads to biased results because of the sparse and irregular data. In this step, the conditional expectation comes into play. It provides the best prediction of the PC scores if the measurement error is Gaussian, and the best linear prediction otherwise. PACE is discussed in detail by Yao, Müller and Wang [4].
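The distinctive final step, estimating the PC scores by conditional expectation instead of numerical integration, can be sketched as follows. The mean function, eigenfunctions, eigenvalues and noise variance are assumed to have been estimated already (here they are simply invented), and the score formula is a simplified rendering of the best linear prediction used by Yao, Müller and Wang [4].

```python
import numpy as np

def pace_scores(t_i, y_i, mu, phis, lambdas, sigma2):
    """Conditional-expectation estimate of the PC scores of one subject.

    t_i     : observation points of the subject
    y_i     : observations at those points
    mu      : callable, estimated mean function
    phis    : list of callables, estimated eigenfunctions
    lambdas : estimated eigenvalues
    sigma2  : estimated measurement-error variance
    """
    Phi = np.column_stack([phi(t_i) for phi in phis])   # n_i x K eigenfunction values
    # Covariance of the observations implied by the model: Phi diag(lambda) Phi^T + sigma2 I.
    Sigma_y = Phi @ np.diag(lambdas) @ Phi.T + sigma2 * np.eye(len(t_i))
    centred = y_i - mu(t_i)
    # xi_k = lambda_k * phi_k(t_i)^T Sigma_y^{-1} (y_i - mu(t_i))
    return np.diag(lambdas) @ Phi.T @ np.linalg.solve(Sigma_y, centred)

# Invented model components, for illustration only.
mu = lambda t: 2.5 + 0.1 * t
phis = [lambda t: np.ones_like(t), lambda t: t - 0.5]
lambdas = np.array([0.05, 0.01])
sigma2 = 0.02

t_i = np.array([0.1, 0.4, 0.9])       # sparse, irregular observation points of one truck
y_i = np.array([2.4, 2.7, 2.8])
scores = pace_scores(t_i, y_i, mu, phis, lambdas, sigma2)
# The reconstructed trajectory is mu(t) + sum_k scores[k] * phis[k](t).
print(scores)
```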

3 The Vehicle Application and Data Description

The purpose of this chapter is to outline the connection between the methods proposed in Chapter 2 and the application of those methods to the Volvo data.

3.1 Volvo Truck Data

The original data received from Volvo Parts AB consists of 2027 observations of 267 trucks. It was collected between June 2004 and May 2007 in North America. All trucks have the same engine and are configured as articulated trucks for long distance transports on smooth roads. The gross combination weight (GCW), which includes the weight of the towed trailer and the truck itself, is 36 tons, the US federal GCW limit. Data is retrieved when a truck is in a workshop that is equipped to read out the onboard electronics and performs this procedure. It is then sent to the Volvo headquarters in Gothenburg for storage and analysis. The data from each observation contains only information from one of the truck's onboard electronic systems, the Engine Control Unit (ECU). From these data, two variables are mainly relevant for this thesis:

- Total distance driven
- Total amount of fuel consumed

Figure 3.1: Distribution of the fuel consumption when the fuel mileage is calculated only between two observations. The outliers visible in this figure can be explained by a high amount of idling between two close observations. When the fuel mileage is calculated accumulatively, those outliers do not occur.

These variables are not reset when the ECU is read out in the workshop and therefore behave accumulatively. Using these variables as a basis to calculate the fuel consumption per distance or time has an averaging effect, as it includes all former mileage data. This is necessary because of the unevenly distributed data. If a truck was read out twice within a very short span of time, the fuel consumption in this interval is possibly vastly different from the normal fuel consumption behavior of the truck, possibly because the truck was not moved very far within this time span, but was idling for some time. The outliers caused by this effect can be seen in Figure 3.1. These outliers are the reason for not using the difference in fuel amounts between two observations as a calculation basis in this thesis. With the accumulative approach, these observations can remain in the dataset.
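A small sketch of the accumulative calculation described above, on a hypothetical table of read-outs with invented column names; the cumulative ratio averages over the whole logged history, while the incremental ratio between consecutive read-outs produces the idling-related outliers of Figure 3.1.

```python
import pandas as pd

# Hypothetical read-outs of one truck: cumulative distance [km] and cumulative fuel [l].
truck = pd.DataFrame({
    "total_distance_km": [12000, 60000, 61000, 150000],
    "total_fuel_l":      [ 5000, 23000, 24000,  58000],
})

# Accumulative fuel mileage: all distance and fuel since the start of logging.
truck["mileage_cum"] = truck["total_distance_km"] / truck["total_fuel_l"]

# Incremental fuel mileage between consecutive read-outs; a short, idle-heavy
# interval (here the 1000 km between the 2nd and 3rd read-out) shows up as an outlier.
truck["mileage_inc"] = truck["total_distance_km"].diff() / truck["total_fuel_l"].diff()
print(truck)
```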

3.1.1 Impurities in the Truck Data

The raw data retrieved from the trucks contains irregular observations or changes in the truck data which in some cases result in the removal of specific observations or of the whole truck from the data set. See Figure 3.2 for a plot of the raw fuel consumption data.

Figure 3.2: Fuel consumption plot generated from the raw data. The lines are linear interpolations between the observations.

Incomplete Observations: A truck is missing one or more variables that would be required for analysis. The observations from this individual can not be used for the calculations.

Physically impossible changes in accumulative variables: Between two observations of a single truck, accumulative variables changed to a smaller value. This means, for example, that a later observation in time has a smaller total driving distance than an earlier measurement. This is physically impossible, but observable if the ECU has been replaced or the contents of the ECU were erased during a software update. This criterion applies to 44 trucks. Although it would be possible to use a subset of the observations from each of these trucks, this was not done, because the quality of the measurements might have been compromised and the manual effort of cleaning the data is a time consuming task for very few usable measurements.
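The check for physically impossible changes could look roughly like the following pandas sketch; the column names and the toy frame are invented, and in the thesis the affected trucks were removed entirely rather than per observation.

```python
import pandas as pd

# Hypothetical read-outs for two trucks; truck B shows a decreasing total
# distance between read-outs (e.g. after an ECU replacement) and is dropped.
data = pd.DataFrame({
    "truck_id":     ["A", "A", "A", "B", "B", "B"],
    "readout_date": pd.to_datetime(
        ["2005-01-10", "2005-06-01", "2006-02-15",
         "2005-03-02", "2005-09-20", "2006-04-11"]),
    "total_distance_km": [10_000, 50_000, 90_000, 20_000, 70_000, 30_000],
    "total_fuel_l":      [ 4_000, 20_000, 35_000,  8_000, 28_000, 12_000],
})

data = data.sort_values(["truck_id", "readout_date"])

# A truck is invalid if any accumulative variable decreases between read-outs.
def has_impossible_change(group):
    return (group["total_distance_km"].diff().lt(0).any()
            or group["total_fuel_l"].diff().lt(0).any())

bad_trucks = [tid for tid, g in data.groupby("truck_id") if has_impossible_change(g)]
cleaned = data[~data["truck_id"].isin(bad_trucks)]

# Additionally drop very early observations (here: below 10000 km).
cleaned = cleaned[cleaned["total_distance_km"] >= 10_000]
print(bad_trucks, len(cleaned))
```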

Empty and Duplicated Observations: Some observations do not contain any new information, but only seem to be resubmissions of earlier or empty observations with a different time stamp. These particular observations are removed from the final data, but the remaining observations of the truck are used. Phenomena like these might occur when the data acquisition process in the workshop was interrupted or a transmission error occurred.

Early Observations: These observations are too early in the life of the truck to give meaningful information. The removal of these observations is motivated by the unusual fuel consumption of a truck in this state, which is caused by the high number of short trips the truck has to make before it can be put into regular service. Examples are drives to paint shops or truck customizers as well as transfers to the customer. The number of observations purged when this criterion is set to remove all measurements below 10000 km is 150; when all measurements before 1000 km are deleted, the number of observations drops by 100. See Figure 3.3.

From the 269 initial individual trucks, 56 trucks are removed. In terms of observations, from originally 2027¹ observations, 120 remained in the data set when the lower border for observations is set to 1000 km. See Figure 3.4 for a plot of the cleaned fuel consumption data. The most visible change compared to Figure 3.2 is the lower number of outliers near the beginning of the distance axis, which is mostly an effect of the removal of very early observations.

3.1.2 Data structure

Some properties of the data make the task of analysis inherently difficult. Most of these properties stem from the sparsity of the data. Sparseness in this case means that every truck has been observed on average just 7.405 times, with a standard deviation of 2.408 observations. The sparseness of the data is visualized in Figure 3.5.

The data is not fully observed. The observations of a single truck often are not scattered over a very long span in time or driven distance, but measured only within a short span. The average distance between the first observation of a truck and the last one is 17841 kilometers, with a standard deviation of 114208 kilometers.

¹ Excluding incomplete observations, as they are not usable at all.

Figure 3.3: Comparison of the number of observations in the raw data versus the cleaned data. The overall reduction in the number of observations, as well as the lower number of observations at the beginning, is noticeable.

Figure 3.4: Fuel consumption plot generated from the clean data. Note the lack of outliers at the beginning of the data.

Figure 3.5: The scatter plot in this figure highlights the sparse and irregular distribution of the data. The histograms describe the distribution of the observations along the axes.

The mean focus of the observations is at 022 kilometers, deviating by 1609 kilometers, which means that most of the trucks are not observed from the beginning, but later on in their life-cycle.

The density of measurements varies. This implies that the placement of measurements is irregular throughout the duration of their observation. As the trucks are independent of each other, the times when observations happen are not correlated with each other. For a visual representation of the irregular duration between the measurements, see Figure 3.6. This figure indicates a non-normal distribution. The average distance between observations is 52020 kilometers, with a standard deviation of 61858 kilometers.

Unsupported curvature. The irregular placement and the sparsity of variables cause this property to occur. If a part of a curve has a high curvature, which can be approximated by d²y/dx² or (d²y/dx²)², the relative resolution of the data at the point of high curvature should also be high to enable a good estimation of the underlying function [2].

Figure 3.6: This figure shows the distribution of distances between two observations of the same truck.

3.2 Approach

The first part of analyzing the truck data, which is described in Section 4.1, is to establish results with basic multivariate analysis as a baseline to which the results of the functional analysis can be compared. This part shows pitfalls and difficulties when applying standard multivariate methods to the data. The first possible way for multivariate analysis is feature extraction. It is a difficult task to find relevant features to extract. A simple statistical feature will be extracted from the data to give an idea of how feature extraction works. The second possibility for multivariate analysis is to put the observations into bins. This is done in order to be able to align the data onto a vertical grid. The second way is necessary because it is very hard to visualize the extracted features and to convert them back to the original data format. However, binning cannot easily be used for outlier detection. Usually, some of the bins are likely to have only a low number of observations, which makes outlier determination in such a bin very difficult. If the bins are made larger, multiple or even all observations of a single truck might be put into a single bin. This leads to increased difficulty in differentiating between normal and outlying observations.

These steps should lead to two results: a simple outlier detection based on a clustering of the extracted features, and a variance and mean estimation for the data based on the binned data. The task of estimating the fuel consumption behavior of a single truck outside of its observation duration using the extracted features is very hard, because the mapping between the values of the features and a function is not available. Additionally, information from other, similar trucks is not taken into consideration. The last step in Basic Analysis (Sect. 4.1) is a demonstration of the main problem of applying FDA to the data at hand: the difficulty of fitting a function to a single truck.

The main task of this thesis is to apply the PACE algorithm to the data (Sect. 4.2), and to try out the various options within the PACE algorithm. In this section, the results of PACE in general will be assessed, as well as the differences between PACE runs with different options, both in terms of the PACE generated functions and of general statistical properties such as the mean function. The first advantage of using the PACE algorithm in comparison to the basic methods is that there is no need to pre-process the data, i.e. to extract features or otherwise process the data. This non-parametric input of the data is complemented by a number of options to tune the algorithm itself for various needs (amount of information retained, whether the input data has measurement errors, etc.).

The next step is to try out a number of methods which can be applied to the results of PACE, for example to calculate the probability of the fuel consumption of a particular truck, given all the other trucks. PACE enables the user to analyse the sparse and irregular data at hand and to use additional techniques from FDA, whereas using only multivariate data analysis or normal FDA on the same data is very difficult and does not incorporate the information gathered from the other trucks. PACE makes outlier detection, estimation of the function outside the observation duration and the gathering of common statistical properties, like mean and variance in functional form, from sparse and irregular data a lot easier or even possible at all.

4 Results

4.1 Basic Data Analysis

The aim of this section is to provide an overview of basic multivariate analysis possibilities with the available data. Functional methods are applied from Section 4.2 onwards.

4.1.1 Data Binning

One approach, as described in the previous chapter, is the creation of a vertical grid for the data domain, followed by binning the data into a limited number of buckets along the time or distance axis, similar to creating a histogram. If there is more than one observation of a truck in one of these bins, an average of these measurements is put into the bin. This has to be done to avoid biasing in the case of dense observations of a truck within a short timespan. The size and the quantity of the bins are crucial for binning. With the data at hand, 25 bins were used, which results in a size of 36087 kilometers per bin. Figure 4.1 shows the number of observations per bin, as well as an estimation of the mean function and the variance of the data. Figure 4.2 shows a boxplot of the binned data and the results of bootstrapping [1] the mean value per bin (10000 bootstrap samples).
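A sketch of the binning step with NumPy on invented arrays; each truck contributes at most one (averaged) value per bin, as described above.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs = 500
distance = rng.uniform(10_000, 900_000, size=n_obs)   # distance at read-out [km] (invented)
mileage = rng.normal(2.6, 0.2, size=n_obs)             # fuel mileage [km/l] (invented)
truck_id = rng.integers(0, 60, size=n_obs)             # which truck each observation belongs to

n_bins = 25
edges = np.linspace(distance.min(), distance.max(), n_bins + 1)
bin_idx = np.clip(np.digitize(distance, edges) - 1, 0, n_bins - 1)

# Average multiple observations of the same truck inside one bin, then pool per bin.
binned = [[] for _ in range(n_bins)]
for b in range(n_bins):
    for t in np.unique(truck_id):
        mask = (bin_idx == b) & (truck_id == t)
        if mask.any():
            binned[b].append(mileage[mask].mean())

bin_means = np.array([np.mean(v) if v else np.nan for v in binned])
bin_counts = np.array([len(v) for v in binned])
print(bin_counts)
print(np.round(bin_means, 2))
```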

Figure 4.1: The histogram (left) depicts the number of observations per bin. Especially the first and the last few bins have a very small number of observations, which leads to the abnormal results in these bins in the mean and standard deviation figure on the right. The right figure shows the mean as well as the standard deviation estimated from the binned data.

Figure 4.2: The figures show boxplots for the binned data (left) and bootstrapped mean values (right). The left boxplot is a simple plot of the raw binned data, providing an easy visualization. The right boxplot is generated by bootstrapping the mean of each bin 10000 times. Bootstrapping should give an idea of how much the mean can vary if new data has the same distribution as the data at hand.

4.1.2 Feature Extraction

The features which are retrieved from all observations of a single truck are used to construct a simple outlier detector with hierarchical clustering. The goal of this simple outlier detector is to find trucks whose mean deviates significantly from the mean of the entire data. A single extracted feature was used in this case, the squared deviation of a truck's mean from the overall mean:

(µ_Truck − µ_All)²

The data was then clustered with a hierarchical algorithm, using average linkage. The outlying classes were subjectively selected by looking at the resulting dendrogram. For the results, see Figure 4.3.
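The outlier detector described above can be sketched with SciPy's hierarchical clustering; the per-truck means are invented and the cut height is chosen by eye, mirroring the subjective selection from the dendrogram.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
truck_means = np.concatenate([rng.normal(2.6, 0.05, size=50),   # normal trucks (invented)
                              np.array([2.1, 3.2])])            # two deviating trucks

# Single feature per truck: squared deviation from the overall mean.
feature = (truck_means - truck_means.mean()) ** 2

# Agglomerative clustering with average linkage on the Euclidean distance.
Z = linkage(feature.reshape(-1, 1), method="average", metric="euclidean")

# Cut the dendrogram at a (subjectively chosen) distance to obtain classes.
classes = fcluster(Z, t=0.05, criterion="distance")
for c in np.unique(classes):
    print("class", c, "members", int(np.sum(classes == c)))
```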

Figure 4.3: Results of outlier detection based on feature extraction. The left figure shows the dendrogram of the clustering algorithm: class 6 is an extreme outlier, whereas classes 3 and 7 are also quite different from the main part of the data. The basis for these classes being outliers is a mean that differs vastly from the rest of the data. In the other figure, the outlying clusters are highlighted. The extreme outlier is marked red, the normal outliers are marked green and the normal data is colored blue. Class 3 has 5 members, whereas the other outlier classes have just 1 member each. Classes 1 and 5 have 114 and 52 members respectively. Class 2 has 27 members, whereas class 4 has 13 members.

4.1.3 Function Fitting

Finding a plausible function that fits the data of the trucks well is difficult because of the open-ended nature of the measurements. If a set of observations has a defined start and end of its measurements, i.e. the data is fully observed, it is easy to interpolate the data in between, even if the data within this span is sparse. This property of the data at hand is also discussed in Section 3.1. If the set of data is not fully observed, it is almost impossible to get a reliable fit outside the observation span of a single entity. This reliable fit outside of this span is necessary for performing FDA on this data, as FDA needs the same set of basis functions, or in the case of spline interpolation the same knots, for all functions to work. It was not possible to get a good fit on this data with splines where all of the knots are distributed identically for all truck entities. Also, polynomial fits, i.e. the approximation of the data with low order (< 5) polynomials, did not result in a stable fit for the available data. The most reliable fits under these conditions were generated by fitting a linear function to the fuel consumption observations. These results in fitting the sparse and irregular data motivate the idea of combining the observations by means of PACE, to be able to get better fits from the reconstructed trajectories. The results of fitting a straight line to the data can be seen in Figure 4.4.
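A sketch of the per-truck straight line fit with numpy.polyfit on invented observations; with only a handful of irregularly placed points per truck, the slope is poorly determined, which is the problem Figure 4.4 illustrates.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trucks = 100
slopes, offsets = [], []
for _ in range(n_trucks):
    # A handful of irregularly placed observations per truck (invented).
    n_i = rng.integers(3, 10)
    x = np.sort(rng.uniform(10_000, 900_000, size=n_i))
    y = 2.6 + 2e-7 * x + rng.normal(scale=0.1, size=n_i)
    slope, offset = np.polyfit(x, y, deg=1)      # least-squares straight line
    slopes.append(slope)
    offsets.append(offset)

# Mean line and the spread of slope and offset across trucks.
print(np.mean(slopes), np.std(slopes))
print(np.mean(offsets), np.std(offsets))
```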

Figure 4.4: On the left, all fitted straight lines are shown. The right figure shows the mean straight line along with the standard deviation of the slope and the offset (blue) and the standard deviation of just the offset (dashed). The main problem with this straight line fit is a number of fits with high gradients, which are not valid outside their observation span. However, the mean line shows a slight increase in fuel economy, just like the mean curve from PACE (Figure 4.5).

4.2 Application of PACE

The goal of this section is to elaborate on the application of the PACE method to the truck data, focusing only on fuel consumption per kilometer over the distance axis. Along with the results of this first application, some options available for fine-tuning the method will be presented and a general estimate of variability will be given.

4.2.1 Baseline PACE Results

The data in use for this initial run of the PACE method is the cleaned set, with all trucks removed which have less than 2 observations. Additionally, every observation that happened before a threshold of 10000 km has been removed. The PACE method has some interchangeable sub-methods. For the baseline results, mostly the same parts as in the original method described in [4] were used. Thus, the kernel used for smoothing the mean function is the Epanechnikov kernel [4] and the input data is assumed to contain measurement errors. A small discrepancy from the original method is the choice of using the Fraction of Variance Explained¹ (FVE) instead of the Akaike Information Criterion [1] (AIC) to select the number of PCs. The FVE threshold is set at 95 % of variance explained.

Regarding Figure 4.5, the smoothed mean curve should be taken with a grain of salt; especially the variance plots and the measurement density plots in Figure 3.3 should be considered. The number of PCs selected by FVE is 8, which accounts for 96.57 % of the total variation. The scree plot (Section 2.1.4) of the principal components from this analysis can be seen in Figure 4.6. The first, strong principal component is almost a straight line, which basically shifts the mean from its starting point closer to the position of the measurements. The second and the fourth principal component seem to serve partially as a corrective for trucks with a higher initial fuel economy than the average truck. The smoothed covariance matrix generated and used by PACE is visualized in Figure 4.7 by a color-matrix.

¹ The sum of the eigenvalues of a certain number of eigenfunctions divided by the sum of all eigenvalues has to exceed a certain threshold. The first number of PCs which exceeds this threshold is subsequently used.
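The FVE rule from the footnote takes only a few lines; the eigenvalues below are invented.

```python
import numpy as np

# Invented eigenvalues, sorted by decreasing size.
eigenvalues = np.array([5.6, 1.2, 0.87, 0.41, 0.3, 0.22, 0.15, 0.1, 0.08, 0.05])
fve = np.cumsum(eigenvalues) / eigenvalues.sum()      # fraction of variance explained

threshold = 0.95
n_pcs = int(np.argmax(fve >= threshold)) + 1          # first number of PCs exceeding the threshold
print(n_pcs, round(fve[n_pcs - 1], 4))
```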

Figure 4.5: The smooth mean function generated by PACE (left) is the basis for all other results. The four most significant PCs (right) are the strongest ways in which the individual trucks vary; the legend quantifies the strength of the PCs (55.72 %, 11.88 %, 8.65 %, 4.0 %).

Figure 4.6: The scree plot, which highlights the trade-off between the number of PCs used and the variance retained. The use of more than 10 PCs makes little sense, as the Fraction of Variance Explained (FVE) is not improving much.

Figure 4.7: The smoothed covariance matrix generated by PACE. (The diagonal, which is the variance, has been removed prior to smoothing.) The main part of the matrix shows a small positive covariance (green).

Figure 4.8: These plots exhibit the mean curve (red), the corresponding original observations (green) and the reconstructed curve (blue) for vehicles 14, 106, 92, 72 and 4. Vehicles 14 and 106 have high values on all major PC scores, with opposite signs. Number 92 has the lowest PC scores overall; trucks 72 and 4 have average PC scores. High PC scores lead to extreme values, especially on the strong first PC.

From the estimated PCA scores, the mean function µ and the principal component functions, the individual traces of the trucks can be reconstructed, which should give a rough estimate of the behavior of the truck. A number of selected reconstructions can be viewed in Figure 4.8, and a collection of all traces and the original measurements can be seen in Figure 4.9.

As a next step, for an analysis of the results, the goodness-of-fit of the original measurements versus the reconstructed traces is assessed. To estimate the goodness-of-fit, the mean squared error [1] between the discrete observations and the estimated reconstruction is considered. However, the irregular measurement intervals make assessment of the results difficult. In Figure 4.10 some examples of bad fits are explained. Just taking the mean of the mean square error (MSE) of all observations of one truck is prone to skewing, as is just summing up the MSE for each single truck. A more sensible approach to

Figure 4.9: This graph shows all reconstructed traces (gray) and original measurements (blue). Note how the traces tend to follow the observations, especially when the relative occurrence of observations is low.

Figure 4.10: As described in the text, these figures depict misfitted trucks (vehicles 7, 102, 106 and 202). Vehicles #7 and #106 are trucks which provide bad fits, whereas #102 is a truck which is only identifiable as a misfit when the median mean square error (MSE) is applied. Truck #202 is a counter-example, where the misfit is more noticeable when the mean MSE is used.
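A sketch of the goodness-of-fit measures discussed above: squared errors between the raw observations and the reconstructed trajectory of a truck, aggregated per truck by mean and by median; the reconstruction function and the data are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)

def reconstructed(t):
    # Stand-in for the PACE-reconstructed trajectory of one truck.
    return 2.55 + 1e-7 * t

mse_mean, mse_median = {}, {}
for truck_id in range(5):
    # Invented raw observations of one truck at irregular distances.
    t = np.sort(rng.uniform(1e4, 9e5, size=rng.integers(3, 10)))
    y = 2.5 + 1.5e-7 * t + rng.normal(scale=0.1, size=t.size)

    sq_err = (y - reconstructed(t)) ** 2
    mse_mean[truck_id] = sq_err.mean()         # easily skewed by a single bad observation
    mse_median[truck_id] = np.median(sq_err)   # more robust aggregation per truck

print(mse_mean)
print(mse_median)
```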