CLASSIFYING SERVICES USING A BINARY VECTOR CLUSTERING ALGORITHM: PRELIMINARY RESULTS

Venkat Venkateswaran
Department of Engineering and Science, Rensselaer Polytechnic Institute
275 Windsor Street, Hartford, CT 06120 USA
(+1) 860 548 2458, venkav3@rpi.edu

John Maleyeff
Lally School of Management & Technology, Rensselaer Polytechnic Institute
275 Windsor Street, Hartford, CT 06120 USA
(+1) 860 548 7870, maleyj@rpi.edu

ABSTRACT

A new classification approach is explored in which service systems are grouped by the dimensions of performance important to their customers. Service systems are coded as binary vectors, and Ward's Algorithm is used to group these systems into eight clusters, using the simple matching metric to measure distances between vectors. The resulting clusters were analyzed. Across the clusters, the mix of customer types (i.e., internal vs. external) and of process characteristics was similar. Hence, this clustering approach generates sets of services that differ from classifications based purely on process characteristics. The implications of this result for leaders of service innovation efforts are discussed.

KEYWORDS: Cluster analysis, Service operations, Service marketing, Innovation

INTRODUCTION

Innovation is often accomplished by adapting ideas, processes, and techniques that succeeded in one situation to solve problems or make improvements in a seemingly unrelated situation. With respect to service innovation, the challenge is to identify hidden patterns that exist across multiple services, even those that appear unrelated. For example, emergency room trauma center teams of physicians, nurses, and technicians learned to treat patients quickly and effectively by incorporating methods used by pit crews at automobile racing events [1].

Preliminary results of an ongoing research project are presented below. This research attempts to create sets of services that are deemed similar because, within each set, customers have similar needs. For example, both trauma center patients and automobile racers need fast and expert service, with little need for other dimensions of performance that customers of other services would consider important. Once the sets are created, their characteristics are explored to determine

whether or not leaders or innovation teams would gain a better understanding of how innovation could be achieved.

BACKGROUND

Prior work in classifying service systems is plentiful, most of it contained in the service marketing literature. For the sake of brevity, only a very brief background is presented here. Numerous attempts have been made to classify services in an effort to provide some understanding of the special challenges faced by service managers. A popular scheme separates services into four types: the service factory, the service shop, the mass service, and the professional service [2]. But it is not clear that the classification schemes offered in the past will be helpful to managers who wish to manage or improve customer satisfaction, because they tend to be based on the structure of the service rather than on the needs or wants of customers. For example, Verma showed that only 4 of 22 important management challenges are affected by the differences in this classification scheme [2]. The research presented below uses a mathematical approach to cluster services based on the dimensions of performance deemed important by customers of each service.

METHODOLOGY

The approach to classifying services based on performance dimensions uses a binary vector clustering algorithm. The work began with the creation of a data set consisting of 168 services. Each service was analyzed by a professional employee of the organization who was very familiar with the activities associated with the delivery of the service and who had access to customers of the service. The services were not selected at random, but they did constitute a cross section of service types, albeit biased towards services contained within technologically sophisticated organizations. A mixture of customer types existed within the database: many of the services were primarily for internal customers, many served external customers exclusively, and some served both internal and external customers.

No single analyst studied more than one service. All of the analysts were working professionals, enrolled in a part-time graduate management program on the Hartford, Connecticut campus of Rensselaer Polytechnic Institute, in a course called Service Operations Management. Each analyst asked several customers of the service to list strengths and weaknesses of the process and to list the key performance dimensions important to them. The resulting reports followed a standard template that allowed for easy tabulation of key results. To ensure quality and consistency in the data, the authors studied the data generated from each report and at times modified the resulting list of performance dimensions.

The resulting database that was input to the clustering algorithm consisted of 168 records and 9 fields, one field for each of 9 potential dimensions of performance. A binary code was used to signify whether or not the dimension was important to customers (1 = important, 0 = unimportant). The following dimensions were specified:

(1) empathy (e.g., courtesy, professionalism);
(2) knowledge (of service providers);
(3) communication (providers with customers);
(4) speed (e.g., responsiveness, turnaround time);
(5) usefulness (e.g., comprehension, completeness, flexibility);

(6) quality (e.g., accuracy, consistency);
(7) tangible (a physical good);
(8) convenience (e.g., availability, ease); and
(9) security (e.g., information, financial, personal).

Clustering Algorithm

The problem then becomes one of clustering 168 binary vectors (one per service) into groups containing like vectors. To do this, a metric is needed to gauge the closeness of each pair of binary vectors. Several different distance metrics have been proposed in the literature; we have used the simple matching metric. This metric is described as follows: given two binary vectors V₁ and V₂ of length L (here, L = 9), let B denote the number of digits at which V₁ and V₂ agree. The intervector distance is then D(V₁, V₂) = 1 − B/L. We note that 0 ≤ D(V₁, V₂) ≤ 1, that D(V₁, V₂) = 0 when V₁ = V₂, and that D(V₁, V₂) = 1 when V₁ and V₂ are complements.

Ward's Algorithm is a well-known and widely used algorithm for grouping binary vectors into clusters. We have used the version of this algorithm in which the user specifies the target number of clusters. The algorithm is agglomerative and begins by placing each vector in its own separate cluster; thus, in the present case, the method began with 168 clusters. Clusters are then successively merged in a systematic way until the requisite number of clusters is obtained.

We next describe how clusters are selected for merging. The algorithm computes a medoid for every cluster. This is a member of the cluster (not necessarily unique) that has the smallest sum of distances (based on the simple matching metric) to the other members [3]. Thus, a medoid is the binary-vector analogue of the familiar centroid of a cluster of points on a plane; however, a medoid (unlike a centroid) is necessarily a member of the cluster. To determine which pair of clusters to merge, the algorithm considers all pairs and selects the pair with the least variance, calculated as the sum of squared distances from the medoid averaged over the number of members in the tentative cluster: the two clusters under consideration are temporarily merged, a medoid is determined, and the sum of squared distances from this medoid to all members is computed. By selecting the minimum-variance pair at each stage, the algorithm seeks to merge clusters so that the resulting clusters are round (i.e., their members tend to be equally distant from the medoid of the cluster). The algorithm terminates when the requisite number of clusters has been generated.
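To make the procedure concrete, the following Python sketch implements the simple matching distance, the medoid, and the variance-based merging rule described above. It is an illustration written for this summary under the stated definitions, not the authors' implementation; the function names and the toy service vectors at the end are invented for demonstration.

# Illustrative sketch of the clustering procedure described above (not the
# authors' implementation). Each service is a 0/1 vector over the 9 dimensions.
from itertools import combinations

def simple_matching_distance(v1, v2):
    """D(V1, V2) = 1 - B/L, where B is the number of positions at which V1 and V2 agree."""
    matches = sum(1 for a, b in zip(v1, v2) if a == b)
    return 1.0 - matches / len(v1)

def medoid(cluster):
    """Member of the cluster with the smallest total distance to the other members."""
    return min(cluster, key=lambda v: sum(simple_matching_distance(v, u) for u in cluster))

def variance(cluster):
    """Sum of squared distances from the medoid, averaged over the cluster size."""
    m = medoid(cluster)
    return sum(simple_matching_distance(m, u) ** 2 for u in cluster) / len(cluster)

def agglomerate(vectors, target):
    """Start with one cluster per vector; repeatedly merge the pair of clusters whose
    tentative merger has the smallest variance, until only `target` clusters remain."""
    clusters = [[v] for v in vectors]
    while len(clusters) > target:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda pair: variance(clusters[pair[0]] + clusters[pair[1]]))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters

# Toy example with three invented services coded on the 9 dimensions:
services = [(1, 1, 0, 1, 1, 1, 0, 1, 0),
            (1, 1, 0, 1, 0, 1, 0, 1, 0),
            (0, 0, 1, 1, 1, 1, 0, 0, 0)]
print(agglomerate(services, 2))

Note that this sketch follows the medoid-based variant described here; classical Ward's method works with centroids and squared Euclidean distances, so clusters produced by standard statistical packages may differ slightly.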

RESULTS

After some trial and error, the target number of clusters was specified to be 8. This level of discrimination was chosen because fewer clusters appeared to contain dissimilar services, while more clusters would provide a less useful classification scheme. It is important to note that the clusters generated by Ward's Algorithm are known to be fairly immune to the ordering of the input data; the authors verified this characteristic by running the algorithm on a number of reordered data sets. The numbers of services within each cluster (numbered 1-8 in the tables that follow) were 13, 16, 16, 33, 32, 32, 11, and 12, respectively.

Table 1 provides a summary of the 8 clusters by showing, for each cluster, the percentage of services that indicated each potential dimension as important. In the table, the dimensions are abbreviated (Emp is empathy, Knw is knowledge, Cmc is communication, Spd is speed, Use is usefulness, Qua is quality, Tan is tangibles, Cnv is convenience, and Sec is security). For example, the first row shows that, of the 13 services included within Cluster #1, all had empathy as an important dimension, 11 of the 13 (85%) had knowledge as an important dimension, none had communication as an important dimension, and so on.

Table 1: Clusters and Associated Dimensions

Cluster   Emp    Knw    Cmc    Spd    Use    Qua    Tan    Cnv    Sec
   1     100%    85%     0%   100%    69%    92%     8%   100%     8%
   2      19%     0%     0%    88%     0%    94%    13%    94%     0%
   3      81%   100%   100%   100%    31%    88%    13%    88%     0%
   4       6%    21%    70%    94%    88%    94%     6%   100%     6%
   5      13%    34%     0%    91%    84%    94%    25%     0%     0%
   6      16%    53%   100%    97%    84%   100%    22%     0%     3%
   7       9%    45%     9%   100%   100%    73%   100%   100%     0%
   8      75%    67%     8%   100%     0%   100%    58%     8%     0%

To explore the usefulness of the resulting classification scheme, and to compare it to a scheme based on process characteristics alone, a number of statistical analyses were performed. Perhaps the most important of these analyses compared the clusters with another classification scheme based on process characteristics rather than customer dimensions. Details on this scheme may be obtained from the authors. In Table 2, the process-oriented classifications are abbreviated (A=analysis, C=consultation, E=evaluation, G=gathering, P=planning, and T=troubleshooting). For example, of the 13 services contained in Cluster #1, one service was classified as an analysis process, 4 services were classified as consultation processes, one service was classified as an evaluation process, and so on. As implied by the diversity of process types within each cluster and supported by a chi-square statistical analysis, no relationship was evident between the two classification schemes (p=0.235). A sketch of this type of test appears after Table 2.

An example of two similar processes that were assigned to different clusters helps to explain this result. Both processes involved the testing of material. The algorithm assigned one testing process to cluster 4 and the other to cluster 5. Both testing services included quality, speed, and usefulness as important dimensions, but the service classified in cluster 4 also listed convenience and communication. Therefore, the material testing service assigned to cluster 4 had customers who expected more interaction with the service provider than did the material testing service assigned to cluster 5.

Table 2: Clusters and Associated Service Process Classification

Cluster    A    C    E    G    P    T
   1       1    4    1    1    4    2
   2       2    3    2    1    6    2
   3       0    2    4    0    5    5
   4       8    2   10    5    7    1
   5       3    5    5    9    6    4
   6       3    2   11    3    7    6
   7       1    3    1    1    2    3
   8       2    1    3    1    3    2
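The chi-square comparison can be reproduced from the counts in Table 2. The sketch below uses SciPy's test of independence on that table; it is an illustration only, and the p-value it returns need not match the reported p=0.235 exactly, since the authors may have grouped categories or applied corrections not described here.

# Chi-square test of independence between cluster membership and process type,
# using the counts from Table 2 (columns A, C, E, G, P, T).
import numpy as np
from scipy.stats import chi2_contingency

table2 = np.array([
    [1, 4,  1, 1, 4, 2],   # cluster 1
    [2, 3,  2, 1, 6, 2],   # cluster 2
    [0, 2,  4, 0, 5, 5],   # cluster 3
    [8, 2, 10, 5, 7, 1],   # cluster 4
    [3, 5,  5, 9, 6, 4],   # cluster 5
    [3, 2, 11, 3, 7, 6],   # cluster 6
    [1, 3,  1, 1, 2, 3],   # cluster 7
    [2, 1,  3, 1, 3, 2],   # cluster 8
])

chi2, p_value, dof, expected = chi2_contingency(table2)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")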

Table 3 shows, for each cluster, the fraction of services whose customers were primarily internal or primarily external, and the average number of functions through which the service flowed. For example, 53.8% (7 of 13) of the services in Cluster #1 served primarily internal customers and 46.2% (6 of 13) served primarily external customers. In some clusters, some services served internal and external customers in roughly equal measure; in these cases, the internal and external fractions do not add to one. Also, in Cluster #1, an average of 5.2 departments or functions took part in delivering the service. An analysis of variance concluded that the number of functions did not vary across clusters (p=0.164). A chi-square analysis showed that the prevalence of internal or external customers did not vary across clusters at a 5% level of significance (p=0.093); significance at the 10% level for this test may indicate a statistically significant, but weak, relationship between cluster membership and the prevalence of internal customers. A sketch of how these two tests could be carried out appears after Table 3.

Table 3: Clusters and Characteristics

Cluster   Internal   External   Functions
   1        0.538      0.462        5.2
   2        0.563      0.313        3.9
   3        0.625      0.188        5.9
   4        0.818      0.091        4.3
   5        0.719      0.250        5.8
   6        0.594      0.250        5.3
   7        0.545      0.091        4.8
   8        0.583      0.333        3.9
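The two tests mentioned above require the underlying per-service records, which are not reproduced in this paper. The following sketch therefore uses placeholder data (except where counts can be derived from Table 3 and the cluster sizes, as noted) to show how a one-way ANOVA on the number of functions and a chi-square test on customer type could be computed.

# Illustration only: per-service data below are placeholders, since the
# underlying 168-record data set is not reproduced in this paper.
from scipy.stats import f_oneway, chi2_contingency

# functions_by_cluster[k] holds the number of functions involved in each service
# assigned to cluster k+1 (hypothetical values; one list per cluster in practice).
functions_by_cluster = [
    [5, 6, 4, 7, 5],   # cluster 1 (placeholder)
    [4, 3, 4, 5],      # cluster 2 (placeholder)
    [6, 5, 7, 6],      # cluster 3 (placeholder)
]
f_stat, p_anova = f_oneway(*functions_by_cluster)
print(f"ANOVA on number of functions: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Counts of [primarily internal, primarily external] services per cluster.
# Rows for clusters 1-3 follow from Table 3 fractions and cluster sizes; a full
# analysis would include all eight clusters (and possibly a "both" category).
customer_counts = [
    [7, 6],    # cluster 1: 0.538 and 0.462 of 13 services
    [9, 5],    # cluster 2: 0.563 and 0.313 of 16 services
    [10, 3],   # cluster 3: 0.625 and 0.188 of 16 services
]
chi2, p_chi2, dof, _ = chi2_contingency(customer_counts)
print(f"Chi-square on customer type: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi2:.3f}")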

IMPLICATIONS

The main result of this exploratory investigation is that a difference exists between a classification scheme based on process characteristics and a scheme based on customer preferences. This result has implications for leaders of service improvement or service innovation teams. It also supports an earlier conclusion by Maleyeff [5], who argued that, based on characteristics unique to service systems, improvement efforts should start by focusing on the information being provided to customers of internal services rather than on the physical manifestation of that information. For example, he suggests that rather than focus an improvement project on speeding up the flow of a payment invoice, project teams should first ensure that the information contained on the invoice is useful, clearly printed, unambiguous in meaning, and accurate.

A secondary implication is a word of caution to leaders who focus the improvement or innovation of services exclusively on process characteristics. Many service improvement methodologies, such as those contained in the Lean Six Sigma toolbox [6], are process-based (e.g., mistake proofing, process standardization, and visual workflow control), so a dimension such as empathy may be ignored by these project teams. In the case of the emergency room trauma team learning from pit crews, perhaps the innovation was successful because the customers' needs span similar dimensions (e.g., speed and competency).

FUTURE WORK

This research has some limitations. Because binary data are much less informative than continuous data, a similar analysis incorporating dimensions measured on a continuous scale should perhaps be undertaken. The precision and reliability of the data used here can also be questioned, due to the multiple analysts and the potential for mischaracterization of customer preferences; this limitation can be overcome in future analyses. Future research could also investigate whether other well-known distance metrics (besides the simple matching metric) would generate clusters similar to those obtained above. Finally, a more thorough analysis of the best number of clusters may prove useful.

REFERENCES

[1] Nicholson, Kieran, "Hospital teams find vroom to improve by changing race-car tires." Denver Post, April 16, 2004, p. B1.
[2] Verma, Rohit, "An empirical analysis of management challenges in service factories, service shops, mass services and professional services." International Journal of Service Industry Management, 2000, 11(1), 8-25.
[3] Guralnik, V. and Karypis, G., "A Scalable Algorithm for Clustering Protein Sequences." Workshop on Data Mining in Bioinformatics, 2001, 73-80.
[4] Luke, Brian T., "Agglomerative Linkages." http://fconyx.ncifcrf.gov/lukeb/cllink.html.
[5] Maleyeff, John, "Exploration of Internal Service Systems using Lean Principles." Management Decision, 2006, 44(5), 674-689.
[6] Maleyeff, John, "Improving Service Delivery in Government Using Lean Six Sigma." IBM Center for The Business of Government, Washington, DC, 2007.