Conjoint Analysis: Marketing Engineering Technical Note 1
Table of Contents

- Introduction
- Conjoint Analysis for Product Design
  - Designing a conjoint study
  - Using conjoint data for market simulations
  - Transforming preferences to choices
    - Maximum utility rule
    - Share-of-utility rule
    - Logit choice rule
    - Alpha rule
- Additional Considerations in Simulations
  - Computing contribution instead of market share
  - Segmenting customers based on their preferences
  - Choice-based conjoint analysis
- Summary
- References

Introduction

Conjoint analysis is a technique for measuring, analyzing, and predicting customers' responses to new products and to new features of existing products. It enables companies to decompose customers' preferences for products and services (typically presented as descriptions or visual images) into the part-worths (or utilities) associated with each level of each attribute of the product. They can then recombine the part-worths to predict customers' preferences for any possible combination of attribute levels, and the market share or revenue that a new product is likely to achieve when introduced into a market in which other competing products may already be available. They can also use conjoint analysis to determine the optimal product concept, or to identify market segments that value a particular product concept highly.

1 This technical note is a supplement to Chapter 6 of Principles of Marketing Engineering, by Gary L. Lilien, Arvind Rangaswamy, and Arnaud De Bruyn (2007). (All rights reserved) Gary L. Lilien, Arvind Rangaswamy, and Arnaud De Bruyn. Not to be reproduced without permission.
There is a vast and growing literature on conjoint analysis. Here we provide an overview of the core analytical aspects of full-profile conjoint analysis, and we briefly mention other approaches, such as Hierarchical Bayes conjoint analysis. The interested reader can explore the referenced articles for further details and enhancements.

Conjoint Analysis for Product Design

For commodity products with a single attribute (e.g., price), it is not difficult to come up with a rank order of the available products in terms of that single attribute. For example, everyone should prefer a lower-priced product when all products are otherwise identical. But how do we generate such preference orders if products have multiple attributes (conjoined attributes), as most products do? This is the original issue that motivated the development of conjoint measurement. Since then, conjoint measurement has gradually evolved into a comprehensive approach, called conjoint analysis, for measuring and understanding consumer preferences, and for using the resulting measures to simulate market reactions to potential new products.

There are three stages in a typical conjoint study: (1) designing a data collection instrument, (2) collecting data from consumers, and (3) analyzing the data and simulating market response. Here we focus on the first and third stages, namely design and simulations.

Designing a conjoint study: Conjoint analysis starts with the premise that a product category can be described as a set of attributes. For example, pizzas could be considered to have the following attributes: size, brand, type of crust, topping, amount of cheese, type of sauce, price, etc.
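To make the "set of attributes" idea concrete, here is a minimal sketch in Python. The attribute names and levels below are illustrative assumptions for a pizza study, not taken from the note; each combination of levels defines one candidate product bundle:

```python
from itertools import product

# Hypothetical attributes and levels for a pizza study (illustrative only)
attributes = {
    "size": ["small", "medium", "large"],
    "crust": ["thin", "thick"],
    "topping": ["pepperoni", "veggie"],
    "price": ["$8", "$10", "$12"],
}

# Every product bundle is one combination of levels, so the full
# factorial design contains the product of the level counts.
profiles = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]
print(len(profiles))  # 3 * 2 * 2 * 3 = 36 profiles
```

Even this small example produces 36 distinct bundles, which previews the design problem discussed next: full factorials grow multiplicatively and quickly become too large to show to respondents.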
Every pizza can then be described as a combination of levels of those product attributes; for example, a large Papa John's thick-crust pepperoni pizza with extra cheese and tomato sauce, at a given price. The objective of the design stage is to specify a set of product bundles for which we obtain customers' overall evaluations, in such a way that those evaluations can then be decomposed into the part-worth value that each customer attaches to each level of each attribute. Developing such a design is not a simple task. For example, if there are 6 attributes, each with four possible levels, then we could create 4^6 (= 4,096) different products. It is not reasonable to ask each customer to evaluate all of those bundles. Instead, in this case, if a customer rates as few as 25 product bundles, that is sufficient for estimating the part-worths for the attribute levels.

One way to select the product bundles is to ensure that they satisfy an orthogonality constraint. This means that across the selected product bundles, each level of an attribute combines in roughly the same proportion with the levels of the other attributes. In other words, if we select any two attributes A and B, then the probability of finding attribute level B_i in a product bundle is the same irrespective of the particular attribute level A_j found in that product bundle. One common method of finding such orthogonal combinations is through "Addelman" designs (Addelman 1962). Many software packages can create such designs automatically. Knowledgeable users can create alternative designs that account for both "main effects" and interactions, or create non-orthogonal designs that are still efficient in obtaining information from the study participants (e.g., adaptive conjoint analysis). According to Wittink and Cattin (1989), commercial applications of conjoint analysis used a median number of 16 product profiles for obtaining respondent evaluations. The number of independent parameters to be estimated is

    \sum_{i=1}^{N} (n_i - 1) + 1,    (1)

where N is the number of attributes and n_i is the number of levels of attribute i. For each product attribute we can arbitrarily set the lowest utility value (say, equal to zero). We can also arbitrarily set the maximum total utility from any product (say, equal to 100).

In some circumstances orthogonal designs can result in unrealistic products, such as when respondents perceive some of the attributes used in the study to be correlated: automobile horsepower (hp) and gas mileage (mpg) typically have a high negative correlation, but orthogonal designs could produce hypothetical products that combine high hp with unrealistically high levels of mpg.
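The parameter count in Eq. (1) can be sketched as a one-line function. The example values below are the 6-attribute, 4-level case from the text:

```python
def n_parameters(levels):
    """Number of independent parameters per Eq. (1): for each attribute,
    (n_i - 1) free part-worths once the lowest level is fixed at zero,
    plus 1 for the overall scaling constant."""
    return sum(n - 1 for n in levels) + 1

# Six attributes with four levels each: 6 * (4 - 1) + 1 = 19 parameters,
# which is why ~25 rated bundles suffice even though 4**6 = 4096 bundles exist.
print(n_parameters([4, 4, 4, 4, 4, 4]))  # 19
```

This makes explicit why a carefully chosen fraction of the full factorial is enough: the number of unknowns grows additively in the levels, while the number of possible bundles grows multiplicatively.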
If an orthogonal combination yields an unrealistic product, there are several possible remedies: (1) We can combine the attributes and develop a new set of levels for the combined attribute. (For example, hp and mpg might be combined into a performance attribute, with high performance associated with high hp and low mpg, and low performance associated with low hp and high mpg.) (2) We can replace unrealistic products by substituting other combinations (perhaps generated randomly, but not duplicating the retained combinations). Although this approach compromises orthogonality, it will rarely affect the estimated utility functions significantly if we replace only a few bundles (say, less than five percent). (3) We can select other orthogonal combinations (although this remedy requires special expertise).

An additional consideration in developing a suitable design is the exact nature of the data collection instrument. There are several options here, including: (1) asking respondents to sort and rank-order a set of cards, each containing a description of a product bundle; (2) asking respondents to rate each product bundle, say on a scale of 0 to 100, to reflect their likelihood of buying that product; (3) presenting respondents a sequence of product bundles, two at a time, and asking them to allocate 100 points between them; and (4) offering respondents a sequence of sets of product bundles and asking them to choose one product from each set. Each of these data collection options has an associated set of costs and benefits. Additional details about the foregoing aspects of conjoint analysis are available in a number of published sources, including Green, Krieger and Wind (2001) and Hauser and Rao (2003).

Using conjoint data for market simulations: Depending on the exact nature of the data collected, there are various options for analyzing the data and creating a part-worth function for each respondent. The simplest approach is to use dummy-variable regression with ratings or rank-order data (data collection options 1 and 2 listed in the previous section):
    R_{ij} = \sum_{k=1}^{K} \sum_{m=1}^{M_k} a_{ikm} x_{jkm} + \varepsilon_{ij},    (2)

where

- j indexes the products or concepts included in the study design;
- R_{ij} = the rating provided by respondent i for product j (alternatively, rankings can be reversed so that higher numbers represent stronger preference, and then treated as if they were interval-scaled ratings);
- a_{ikm} = the part-worth associated with the mth level (m = 1, 2, ..., M_k) of the kth attribute;
- M_k = the number of levels of attribute k;
- K = the number of attributes;
- x_{jkm} = a dummy variable that takes the value 1 if the mth level of the kth attribute is present in product j, and the value 0 otherwise; and
- \varepsilon_{ij} = an error term, assumed to be normally distributed with zero mean and variance \sigma^2 for all i and j.

To facilitate interpretation, the a_{ikm}'s obtained from the regression can be rescaled so that the least preferred level of each attribute is set to zero and the most preferred product combination is set to 100. Letting \tilde{a}_{ikm} denote the estimated (rescaled) part-worths, the utility u_{ij} of product j to customer i is equal to

    u_{ij} = \sum_{k=1}^{K} \sum_{m=1}^{M_k} \tilde{a}_{ikm} x_{jkm}.    (3)

Note that product j can be any product that can be designed using the attributes and levels in the study, including products that were not included in the estimation of the part-worths in Eq. (2).

A major reason for the wide use of conjoint analysis is that once the part-worths (the \tilde{a}_{ikm}'s) are estimated from a representative sample of respondents, it is easy to assess the likely success of a new product concept under various simulated market conditions. One might ask: What market share would a proposed new product achieve in a market with several specific existing competitors? To answer this question, we first specify all existing products as combinations of levels of the set of attributes under study. If more than one competing product has identical attribute levels, we need to include only one representative in the simulation.

Transforming preferences to choices: To complete the simulation design we must specify a choice rule to transform part-worths into the product choices that customers are most likely to make. The three most common choice rules are maximum utility, share of utility, and logit.
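The dummy-variable regression of Eq. (2) can be sketched with ordinary least squares. The profiles and ratings below are a tiny hypothetical example for one respondent (two attributes, two levels each), and one level per attribute is dropped as the baseline to avoid collinearity:

```python
import numpy as np

# One hypothetical respondent rates 8 profiles on two attributes:
# crust (thin = 0 / thick = 1) and price ($8 = 0 / $10 = 1).
# Per Eq. (2), x_jkm = 1 if level m of attribute k is present in profile j.
profiles = [
    (0, 0), (0, 1), (1, 0), (1, 1),
    (0, 0), (0, 1), (1, 0), (1, 1),
]
ratings = np.array([40, 20, 90, 70, 42, 18, 88, 72], dtype=float)

# Design matrix: intercept, dummy for thick crust, dummy for the $10 price
X = np.array([[1.0, crust, price] for crust, price in profiles])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, a_thick, a_price10 = coefs
print(a_thick, a_price10)  # part-worths relative to the baseline levels
```

With these (made-up) ratings the fitted part-worths are +50 for a thick crust and -20 for the higher price, relative to the thin-crust, $8 baseline; rescaling as described above would then map each attribute's least preferred level to zero.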
Maximum utility rule: Under this rule we assume that each customer chooses, from the available alternatives, the product that provides the highest utility value, including any new product concept under consideration. This choice rule is most appropriate for high-involvement purchases such as cars, VCRs, and other durables that customers purchase infrequently. We can compute the market share for a product by counting the number of customers for whom that product offers the highest utility and dividing this figure by the number of customers in the study. In computing overall market shares it may sometimes be necessary to weight each customer's probability of purchasing each alternative by the relative volume of purchases that the customer makes in the product category:

    m_j = \frac{\sum_{i=1}^{I} w_i p_{ij}}{\sum_{j'=1}^{J} \sum_{i=1}^{I} w_i p_{ij'}},    (4)

where

- I = the number of customers participating in the study;
- J = the number of product alternatives available for the customer to choose from, including the new product concept;
- m_j = the market share of product j;
- w_i = the relative volume of purchases made by customer i, with the average volume across all customers indexed to 1; and
- p_{ij} = the proportion of purchases that customer i makes of product j (or, equivalently, the probability that customer i will choose product j on a single purchase occasion).

Share-of-utility rule: This rule is based on the notion that the higher the utility of a product to a customer, the greater the probability that he or she will choose that product. Thus each product gets a share of a customer's purchases in proportion to its share of the customer's preferences:
    p_{ij} = \frac{u_{ij}}{\sum_{j' \in J} u_{ij'}}  for j in the set of products J,    (5)

where u_{ij} is the estimated utility of product j to customer i. We then obtain the market share for product j by averaging p_{ij} across customers (weighting as in Eq. (4) if necessary). This choice rule is particularly relevant for low-involvement, frequently purchased products, such as consumer packaged goods. It is widely applied in conjoint studies and often provides good estimates of market shares. However, as Luce (1959) notes, this rule requires that utilities be expressed as ratio-scaled numbers, such as those obtained from constant-sum scales in which the customer allocates a fixed number of points (say, 100) among alternatives. Unfortunately, data from most conjoint studies do not satisfy this requirement.

Logit choice rule: This rule is similar to the share-of-utility rule, except that the underlying theoretical rationale is different. To apply the share-of-utility model, we assume that the utility functions are basically accurate but that an element of randomness occurs in translating utilities into choices. In applying the logit choice rule, we assume instead that the computed utility values are mean realizations of a random process, so that the brand with the maximum utility varies randomly, say from one purchase situation to the next. The choice rule then gives the proportion of times that product j will have the maximum utility:

    p_{ij} = \frac{e^{u_{ij}}}{\sum_{j' \in J} e^{u_{ij'}}}  for j in the set of products J.    (6)

Both the share-of-utility and the traditional logit rules share a questionable property known as IIA (independence from irrelevant alternatives): the choice probabilities from any subset of alternatives depend only on the alternatives included in that subset and are independent of any alternatives not included. This property implies that if, for example, you prefer light beers to regular beers, then adding a new regular beer (an irrelevant alternative) to your choice set would nevertheless lower your probability of choosing a light beer, a counterintuitive result.

How should we select among these choice rules? The maximum utility rule (also called the first-choice rule) is simple and elegant, and the choices predicted by this rule are not affected by positive linear transformations of the utility function. This rule is particularly relevant for high-ticket items and for product categories in which customers are highly involved in the purchase decision. However, this rule predicts more extreme market shares; that is, it has a tendency to produce market shares closer to zero and one than the other choice rules. It is also less robust: small changes in the utility values of products can drastically change their predicted market shares. On the other hand, the market share predictions made by the share-of-utility and logit choice rules are sensitive to the scale on which utility is measured. The market share predictions of the share-of-utility rule will change if one adds a constant value to the computed utility of each product, but they are unaltered if all utility values are multiplied by a constant. Market share predictions of the logit choice rule are unaltered if one adds a constant to the utilities, but will change if one multiplies all utilities by a constant. Thus, each rule has its advantages and limitations. One way to choose among these three rules is this: first, for each rule, compute the predicted market shares of just the existing products; then use the choice rule that produces market shares that are closest (in the sense of least squares) to the actual market shares of those products. (This assumes that we are using a representative sample of customers for the study.)
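The three rules can be sketched side by side. This is a minimal illustration, not the note's software; the utility matrix below is hypothetical, and share-of-utility assumes all utilities are positive:

```python
import numpy as np

def choice_shares(U, rule):
    """Market shares from an (I respondents x J products) utility matrix
    under the three choice rules discussed above."""
    if rule == "max_utility":        # first-choice: winner takes the respondent
        winners = U.argmax(axis=1)
        return np.bincount(winners, minlength=U.shape[1]) / U.shape[0]
    if rule == "share_of_utility":   # Eq. (5): share proportional to utility
        P = U / U.sum(axis=1, keepdims=True)
    elif rule == "logit":            # Eq. (6): share proportional to exp(utility)
        E = np.exp(U - U.max(axis=1, keepdims=True))  # numerically stabilized
        P = E / E.sum(axis=1, keepdims=True)
    else:
        raise ValueError(rule)
    return P.mean(axis=0)            # average individual probabilities, as in Eq. (4)

# Hypothetical 0-100 scaled utilities: 3 respondents, 2 products
U = np.array([[60.0, 40.0], [55.0, 45.0], [30.0, 70.0]])
for rule in ("max_utility", "share_of_utility", "logit"):
    print(rule, choice_shares(U, rule).round(3))
```

Running this also illustrates the scale sensitivity discussed above: with utilities on a 0-100 scale, the logit rule's exponentials are so spread out that its shares nearly collapse to the first-choice shares, while rescaling all utilities by a constant would change the logit shares but leave the share-of-utility shares untouched.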
This approach can be formalized using yet another choice rule, called the alpha rule, proposed by Green and Krieger (1993). We describe this rule next.

Alpha rule: This rule is a weighted combination of the maximum utility rule and the share-of-utility rule, where the weight is chosen to ensure that the market shares computed in the simulation are as close as possible to the actual market shares of the existing products. Specifically, we choose an alpha (α) in the following formula to maximally recover the observed market shares of existing products:
    p_{ij} = \frac{u_{ij}^{\alpha}}{\sum_{j' \in J} u_{ij'}^{\alpha}}.    (7)

To determine the best value of α, we minimize an entropy measure representing the extent of the departures of the computed market shares of the existing products from their actual observed values:

    Entropy = \sum_{j} m_j \ln\!\left( \frac{m_j}{\hat{m}_j(\alpha)} \right),    (8)

where j is an index representing an existing product, m_j is the actual market share of product j, and \hat{m}_j(\alpha) is the computed market share of product j for any given α. More details about this procedure can be found in Green and Krieger (1993). Orme and Huber (2000) have recently proposed another rule, called randomized first choice, that shows promise as an alternative to the four choice rules we have outlined. Conceptually, it combines the ideas inherent in the logit choice rule and the alpha rule.

Additional Considerations in Simulations

Computing contribution instead of market share: Products that deliver high market shares need not result in high profitability for the company, because market share computations do not take into account the costs of manufacturing each product profile. A simple (and rough) way to measure the incremental contribution of a product bundle (price minus unit variable costs) is to first define a base product bundle and its contribution margin. Next, we can specify the incremental variable costs (positive or negative) of each attribute level, compared with the attribute level of the base product bundle. Finally, we can set the revenue index potential of the base product at 100 and measure the revenue index of every other product with respect to this base level. We show this below with a numerical example:

    Unit contribution of base product × Market share (per the selected choice rule) × Normalization factor = 100.

Suppose the unit contribution of the base product is $2 and its market share per the maximum utility rule is 25 percent; then the normalization factor is 2 (because 2 × 25 × 2 = 100). Now, if another product bundle has incremental variable costs of $1 compared with the base product, and its computed market share per the maximum utility rule is 40 percent, then its revenue index is ($2 - $1) × 40 × 2 = 80. Note that any additional fixed costs have to be accounted for separately, outside the model. Note also that this computation ignores the price potential (i.e., what the market might be willing to pay) for all product bundles except the base product. These simplifications suggest that the base product has to be selected carefully to ensure that interpretations of the revenue index are both meaningful and appropriate for the given context.

Segmenting customers based on their preferences: In our discussion so far, we have focused on individual-level conjoint analysis and predicted market behavior by aggregating individual customer choices. That is, we implicitly grouped all customers into one segment. An important question is this: How should we conduct market simulations if the market consists of distinct groups of customers? There are several options:

1. Post-hoc segmentation: We can use traditional cluster analysis of the part-worth data to identify segments of customers with differing preference structures (see the technical note on segmentation and targeting).

2. Latent class segmentation: This approach is most useful when we don't know (or cannot make a good guess about) the number of segments (see the technical note on segmentation and targeting); however, it requires larger sample sizes than traditional cluster analysis.

3. Hierarchical Bayes: We assume that customers come from one or more populations (i.e., from a finite mixture of populations), but that customers in each population have different part-worth functions determined according to a specified distribution (e.g., multivariate normal).
We can then estimate the "posterior point estimates" of each respondent's part-worth function, conditional on that respondent belonging to a given population (segment). These part-worth estimates may be used in simulations to determine the expected market share of any specified product within any segment and across the overall population. Hierarchical Bayes methods are computer-intensive, but software packages are now available to simplify their practical application. In a comprehensive simulation study comparing the latent class method with Hierarchical Bayes, Andrews, Ansari, and Currim (2002) find that both have comparable out-of-sample prediction accuracy, although Hierarchical Bayes performed somewhat better on the in-sample data.

Choice-based conjoint analysis: Another approach to conjoint analysis is choice-based conjoint, originally proposed by Louviere and Woodworth (1983). Here, customers are presented with several sets of product profiles and are asked to choose the product in each set that they prefer the most. Exhibit 1 shows an example of a choice task with four profiles in a set. Thus, we directly measure customer choices, rather than measuring preferences and then converting the preferences to choices by using choice rules. The basic premise is that when we measure customer choices, the research process more closely resembles what people will actually do in the marketplace. The sets of profiles presented to customers are carefully selected according to experimental design criteria. The resulting choice data can then be analyzed using the multinomial logit model (see the technical note on choice modeling). Several studies comparing the relative performance of traditional full-profile conjoint and choice-based conjoint indicate that both models tend to predict equally well (see Elrod, Louviere, and Krishnakumar 1992). Choice-based conjoint has the advantage of offering statistical tests of attribute weights and market shares. However, it is an aggregate model that does not offer direct measures of utility functions at the individual level, making it difficult to incorporate segmentation analyses of the part-worth data. Hierarchical Bayes approaches can also be used with choice-based conjoint analysis.
This approach enables us to ask each respondent only a subset of the questions drawn from a larger set (thereby reducing respondent fatigue) while still obtaining useful posterior point estimates of the entire part-worth function for each respondent. Further details about these methods can be found in Allenby and Ginter (1995) and Johnson (2000).
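The aggregate multinomial logit analysis of choice-based data can be sketched by maximum likelihood. Everything below is a simplified, self-contained illustration under assumed part-worths and a simulated respondent, not the estimation procedure of any particular package:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical choice-based data: 200 choice tasks, each showing J = 3 profiles
# described by two dummy-coded features; one profile is chosen per task.
rng = np.random.default_rng(0)
true_beta = np.array([1.5, -1.0])                       # assumed "true" part-worths
X = rng.integers(0, 2, size=(200, 3, 2)).astype(float)  # tasks x profiles x features
util = X @ true_beta
prob = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
choice = np.array([rng.choice(3, p=p) for p in prob])   # simulated choices

def neg_log_lik(beta):
    """Negative multinomial-logit log-likelihood of the observed choices."""
    v = X @ beta
    v -= v.max(axis=1, keepdims=True)                   # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(choice)), choice].sum()

beta_hat = minimize(neg_log_lik, np.zeros(2)).x
print(beta_hat)  # should land near the assumed part-worths
```

Because the part-worths are estimated by pooling choices across tasks, the output is a single aggregate utility function, which is exactly the limitation noted above: individual-level heterogeneity requires the latent class or Hierarchical Bayes extensions.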
EXHIBIT 1: A typical choice-based conjoint task for respondents. Note that respondents can opt not to choose any of the four profiles presented. (Source: Cohen) [Exhibit not reproduced here.]

Summary

Conjoint analysis is a customer preference measurement and analysis technique that is widely used in several areas of marketing. It is particularly useful for designing new products that are likely to perform well in the marketplace, and for determining the "optimal" changes to make to existing products (e.g., changes to price or to product features) to improve their market performance. There are three basic stages in a conjoint analysis: (1) designing the study, (2) collecting data from customers, and (3) conducting simulations to determine good product designs. We have described some of the technical aspects relevant to the first and third stages. Conjoint analysis remains an active area of research, and new methods and enhancements continue to be developed, primarily because of advances in modeling and computing technologies.
References

(This is not the complete set of references. Others need to be added from our master list.)

Addelman, Sidney (1962), "Orthogonal Main-Effect Plans for Asymmetrical Factorial Experiments," Technometrics, 4.

Allenby, Greg M., and Ginter, James L. (1995), "Using Extremes to Design Products and Segment Markets," Journal of Marketing Research, Vol. 32, No. 4 (November).

Andrews, Rick L., Ansari, Asim, and Currim, Imran S. (2002), "Hierarchical Bayes Versus Finite Mixture Conjoint Analysis Models: A Comparison of Fit, Prediction, and Partworth Recovery," Journal of Marketing Research, Vol. XXXIV (February).

Elrod, Terry, Louviere, Jordan J., and Krishnakumar, S. Davey (1992), "An Empirical Comparison of Ratings-Based and Choice-Based Conjoint Models," Journal of Marketing Research, Vol. 24 (August).

Green, Paul E., and Krieger, Abba M. (1993), "Conjoint Analysis with Product-Positioning Analysis," in Handbook of MS/OR in Marketing, Jehoshua Eliashberg and Gary L. Lilien, eds., Elsevier, The Netherlands.

Green, Paul E., Krieger, Abba M., and Wind, Yoram (Jerry) (2001), "Thirty Years of Conjoint Analysis: Reflections and Prospects," Interfaces, Vol. 31, No. 3, Part 2 of 2 (May-June), pp. S56-S73.

Hauser, John R., and Rao, Vithala R. (2003), "Conjoint Analysis, Related Modeling, and Applications," in Market Research and Modeling: Progress and Prospects: A Tribute to Paul E. Green, Yoram (Jerry) Wind and Paul E. Green, eds., International Series in Quantitative Marketing, Kluwer Academic Publishers.

Johnson, Richard M. (2000), "Understanding HB: An Intuitive Approach," Sawtooth Software Research Paper Series.

Louviere, Jordan J., and Woodworth, George (1983), "Design and Analysis of Consumer Choice or Allocation Experiments: A Method Based on Aggregate Data," Journal of Marketing Research, Vol. 20 (November).

Luce, R. Duncan (1959), Individual Choice Behavior, John Wiley and Sons, New York.

Orme, Bryan, and Huber, Joel (2000), "Improving the Value of Conjoint Simulations," Marketing Research, Vol. 12, No. 4.

Wittink, Dick, and Cattin, Philippe (1989), "Commercial Use of Conjoint Analysis: An Update," Journal of Marketing, 53 (July).
Traditional Conjoint Analysis with Excel
hapter 8 Traditional onjoint nalysis with Excel traditional conjoint analysis may be thought of as a multiple regression problem. The respondent s ratings for the product concepts are observations on the
Incorporating prior information to overcome complete separation problems in discrete choice model estimation
Incorporating prior information to overcome complete separation problems in discrete choice model estimation Bart D. Frischknecht Centre for the Study of Choice, University of Technology, Sydney, [email protected],
Overview of Factor Analysis
Overview of Factor Analysis Jamie DeCoster Department of Psychology University of Alabama 348 Gordon Palmer Hall Box 870348 Tuscaloosa, AL 35487-0348 Phone: (205) 348-4431 Fax: (205) 348-8648 August 1,
Applying System Dynamics to Business: An Expense Management Example
Applying System Dynamics to Business: An Expense Management Example Bill Harris Facilitated Systems http://facilitatedsystems.com/ May 17, 2000 Abstract If you re new to system dynamics, it may be hard
Sawtooth Software. ACBC Technical Paper TECHNICAL PAPER SERIES
Sawtooth Software TECHNICAL PAPER SERIES ACBC Technical Paper Copyright 2009, Sawtooth Software, Inc. 530 W. Fir St. Sequim, WA 98382 (360) 681-2300 www.sawtoothsoftware.com Background The Adaptive Choice-Based
Inflation. Chapter 8. 8.1 Money Supply and Demand
Chapter 8 Inflation This chapter examines the causes and consequences of inflation. Sections 8.1 and 8.2 relate inflation to money supply and demand. Although the presentation differs somewhat from that
Gerry Hobbs, Department of Statistics, West Virginia University
Decision Trees as a Predictive Modeling Method Gerry Hobbs, Department of Statistics, West Virginia University Abstract Predictive modeling has become an important area of interest in tasks such as credit
Comparison of Estimation Methods for Complex Survey Data Analysis
Comparison of Estimation Methods for Complex Survey Data Analysis Tihomir Asparouhov 1 Muthen & Muthen Bengt Muthen 2 UCLA 1 Tihomir Asparouhov, Muthen & Muthen, 3463 Stoner Ave. Los Angeles, CA 90066.
Using Mixtures-of-Distributions models to inform farm size selection decisions in representative farm modelling. Philip Kostov and Seamus McErlean
Using Mixtures-of-Distributions models to inform farm size selection decisions in representative farm modelling. by Philip Kostov and Seamus McErlean Working Paper, Agricultural and Food Economics, Queen
The primary goal of this thesis was to understand how the spatial dependence of
5 General discussion 5.1 Introduction The primary goal of this thesis was to understand how the spatial dependence of consumer attitudes can be modeled, what additional benefits the recovering of spatial
EMPIRICAL MACRO PRODUC TION FUNCTIONS SOME METHODOLOGICAL
- SaBHBBI ^ m v > x S EMPIRICAL MACRO PRODUC TION FUNCTIONS SOME METHODOLOGICAL COMMENTS by Sören Wibe Umeå Economic Studies No. 153 UNIVERSITY OF UMEÅ 1985 ISSN 0348-1018 - 1 - EMPIRICAL MACRO PR ODUCTION
Application Note. The Optimization of Injection Molding Processes Using Design of Experiments
The Optimization of Injection Molding Processes Using Design of Experiments PROBLEM Manufacturers have three primary goals: 1) produce goods that meet customer specifications; 2) improve process efficiency
Curriculum Map Statistics and Probability Honors (348) Saugus High School Saugus Public Schools 2009-2010
Curriculum Map Statistics and Probability Honors (348) Saugus High School Saugus Public Schools 2009-2010 Week 1 Week 2 14.0 Students organize and describe distributions of data by using a number of different
7. Normal Distributions
7. Normal Distributions A. Introduction B. History C. Areas of Normal Distributions D. Standard Normal E. Exercises Most of the statistical analyses presented in this book are based on the bell-shaped
Handling attrition and non-response in longitudinal data
Longitudinal and Life Course Studies 2009 Volume 1 Issue 1 Pp 63-72 Handling attrition and non-response in longitudinal data Harvey Goldstein University of Bristol Correspondence. Professor H. Goldstein
Chapter 14 Managing Operational Risks with Bayesian Networks
Chapter 14 Managing Operational Risks with Bayesian Networks Carol Alexander This chapter introduces Bayesian belief and decision networks as quantitative management tools for operational risks. Bayesian
Sawtooth Software. An Overview and Comparison of Design Strategies for Choice-Based Conjoint Analysis RESEARCH PAPER SERIES
Sawtooth Software RESEARCH PAPER SERIES An Overview and Comparison of Design Strategies for Choice-Based Conjoint Analysis Keith Chrzan, Maritz Marketing Research and Bryan Orme, Sawtooth Software, Inc.
APPLIED MISSING DATA ANALYSIS
APPLIED MISSING DATA ANALYSIS Craig K. Enders Series Editor's Note by Todd D. little THE GUILFORD PRESS New York London Contents 1 An Introduction to Missing Data 1 1.1 Introduction 1 1.2 Chapter Overview
Introduction to Strategic Supply Chain Network Design Perspectives and Methodologies to Tackle the Most Challenging Supply Chain Network Dilemmas
Introduction to Strategic Supply Chain Network Design Perspectives and Methodologies to Tackle the Most Challenging Supply Chain Network Dilemmas D E L I V E R I N G S U P P L Y C H A I N E X C E L L E
THE HYBRID CART-LOGIT MODEL IN CLASSIFICATION AND DATA MINING. Dan Steinberg and N. Scott Cardell
THE HYBID CAT-LOGIT MODEL IN CLASSIFICATION AND DATA MINING Introduction Dan Steinberg and N. Scott Cardell Most data-mining projects involve classification problems assigning objects to classes whether
Joint Product Signals of Quality. James E. McClure and Lee C. Spector. Ball State University Muncie, Indiana. September 1991.
Joint Product Signals of Quality By James E. McClure and Lee C. Spector Ball State University Muncie, Indiana September 1991 Abstract In the absence of other information about the quality of an experience
I.e., the return per dollar from investing in the shares from time 0 to time 1,
XVII. SECURITY PRICING AND SECURITY ANALYSIS IN AN EFFICIENT MARKET Consider the following somewhat simplified description of a typical analyst-investor's actions in making an investment decision. First,
College Readiness LINKING STUDY
College Readiness LINKING STUDY A Study of the Alignment of the RIT Scales of NWEA s MAP Assessments with the College Readiness Benchmarks of EXPLORE, PLAN, and ACT December 2011 (updated January 17, 2012)
MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column
D-optimal plans in observational studies
D-optimal plans in observational studies Constanze Pumplün Stefan Rüping Katharina Morik Claus Weihs October 11, 2005 Abstract This paper investigates the use of Design of Experiments in observational
RESEARCH PAPER SERIES
Sawtooth Software RESEARCH PAPER SERIES Special Features of CBC Software for Packaged Goods and Beverage Research Bryan Orme, Sawtooth Software, Inc. Copyright 2003, Sawtooth Software, Inc. 530 W. Fir
An Extension Model of Financially-balanced Bonus-Malus System
An Extension Model of Financially-balanced Bonus-Malus System Other : Ratemaking, Experience Rating XIAO, Yugu Center for Applied Statistics, Renmin University of China, Beijing, 00872, P.R. China Phone:
Factor Rotations in Factor Analyses.
Factor Rotations in Factor Analyses. Hervé Abdi 1 The University of Texas at Dallas Introduction The different methods of factor analysis first extract a set a factors from a data set. These factors are
UBS Global Asset Management has
IIJ-130-STAUB.qxp 4/17/08 4:45 PM Page 1 RENATO STAUB is a senior assest allocation and risk analyst at UBS Global Asset Management in Zurich. [email protected] Deploying Alpha: A Strategy to Capture
PS 271B: Quantitative Methods II. Lecture Notes
PS 271B: Quantitative Methods II Lecture Notes Langche Zeng [email protected] The Empirical Research Process; Fundamental Methodological Issues 2 Theory; Data; Models/model selection; Estimation; Inference.
Predictive Coding Defensibility and the Transparent Predictive Coding Workflow
WHITE PAPER: PREDICTIVE CODING DEFENSIBILITY........................................ Predictive Coding Defensibility and the Transparent Predictive Coding Workflow Who should read this paper Predictive
GMO WHITE PAPER. The Capacity of an Equity Strategy. Defining and Estimating the Capacity of a Quantitative Equity Strategy. What Is Capacity?
GMO WHITE PAPER March 2006 The Capacity of an Equity Strategy Defining and Estimating the Capacity of a Quantitative Equity Strategy Marco Vangelisti Product Manager Capacity is an intuitive but ill-defined
Component Ordering in Independent Component Analysis Based on Data Power
Component Ordering in Independent Component Analysis Based on Data Power Anne Hendrikse Raymond Veldhuis University of Twente University of Twente Fac. EEMCS, Signals and Systems Group Fac. EEMCS, Signals
The Best of Both Worlds:
The Best of Both Worlds: A Hybrid Approach to Calculating Value at Risk Jacob Boudoukh 1, Matthew Richardson and Robert F. Whitelaw Stern School of Business, NYU The hybrid approach combines the two most
Introduction to Experimental Design for Discrete Choice Models. George Boomer with apologies to Warren Kuhfeld
Introduction to Experimental Design for Discrete Choice Models George Boomer with apologies to Warren Kuhfeld 1 Why Should We Concern Ourselves with Experimental Design? We can always observe how people
IBM SPSS Direct Marketing 23
IBM SPSS Direct Marketing 23 Note Before using this information and the product it supports, read the information in Notices on page 25. Product Information This edition applies to version 23, release
Data Mining 5. Cluster Analysis
Data Mining 5. Cluster Analysis 5.2 Fall 2009 Instructor: Dr. Masoud Yaghini Outline Data Structures Interval-Valued (Numeric) Variables Binary Variables Categorical Variables Ordinal Variables Variables
6 Scalar, Stochastic, Discrete Dynamic Systems
47 6 Scalar, Stochastic, Discrete Dynamic Systems Consider modeling a population of sand-hill cranes in year n by the first-order, deterministic recurrence equation y(n + 1) = Ry(n) where R = 1 + r = 1
CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES. From Exploratory Factor Analysis Ledyard R Tucker and Robert C.
CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES From Exploratory Factor Analysis Ledyard R Tucker and Robert C MacCallum 1997 180 CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES In
REGRESSION MODEL OF SALES VOLUME FROM WHOLESALE WAREHOUSE
REGRESSION MODEL OF SALES VOLUME FROM WHOLESALE WAREHOUSE Diana Santalova Faculty of Transport and Mechanical Engineering Riga Technical University 1 Kalku Str., LV-1658, Riga, Latvia E-mail: [email protected]
A Review of Cost of AB 32 on California Small Businesses Summary Report of Findings by Varshney & Associates
A Review of Cost of AB 32 on California Small Businesses Summary Report of Findings by Varshney & Associates Matthew E. Kahn UCLA and NBER Professor, Institute of the Environment Department of Economics
IBM SPSS Direct Marketing 22
IBM SPSS Direct Marketing 22 Note Before using this information and the product it supports, read the information in Notices on page 25. Product Information This edition applies to version 22, release
Bayesian Predictive Profiles with Applications to Retail Transaction Data
Bayesian Predictive Profiles with Applications to Retail Transaction Data Igor V. Cadez Information and Computer Science University of California Irvine, CA 92697-3425, U.S.A. [email protected] Padhraic
ElegantJ BI. White Paper. The Competitive Advantage of Business Intelligence (BI) Forecasting and Predictive Analysis
ElegantJ BI White Paper The Competitive Advantage of Business Intelligence (BI) Forecasting and Predictive Analysis Integrated Business Intelligence and Reporting for Performance Management, Operational
Multiple Choice Models II
Multiple Choice Models II Laura Magazzini University of Verona [email protected] http://dse.univr.it/magazzini Laura Magazzini (@univr.it) Multiple Choice Models II 1 / 28 Categorical data Categorical
>> BEYOND OUR CONTROL? KINGSLEY WHITE PAPER
>> BEYOND OUR CONTROL? KINGSLEY WHITE PAPER AUTHOR: Phil Mobley Kingsley Associates December 16, 2010 Beyond Our Control? Wrestling with Online Apartment Ratings Services Today's consumer marketplace
The Determinants of Used Rental Car Prices
Sung Jin Cho / Journal of Economic Research 10 (2005) 277 304 277 The Determinants of Used Rental Car Prices Sung Jin Cho 1 Hanyang University Received 23 August 2005; accepted 18 October 2005 Abstract
James E. Bartlett, II is Assistant Professor, Department of Business Education and Office Administration, Ball State University, Muncie, Indiana.
Organizational Research: Determining Appropriate Sample Size in Survey Research James E. Bartlett, II Joe W. Kotrlik Chadwick C. Higgins The determination of sample size is a common task for many organizational
A successful market segmentation initiative answers the following critical business questions: * How can we a. Customer Status.
MARKET SEGMENTATION The simplest and most effective way to operate an organization is to deliver one product or service that meets the needs of one type of customer. However, to the delight of many organizations
Predictive Coding Defensibility and the Transparent Predictive Coding Workflow
Predictive Coding Defensibility and the Transparent Predictive Coding Workflow Who should read this paper Predictive coding is one of the most promising technologies to reduce the high cost of review by
Econometric analysis of the Belgian car market
Econometric analysis of the Belgian car market By: Prof. dr. D. Czarnitzki/ Ms. Céline Arts Tim Verheyden Introduction In contrast to typical examples from microeconomics textbooks on homogeneous goods
Competition-Based Dynamic Pricing in Online Retailing
Competition-Based Dynamic Pricing in Online Retailing Marshall Fisher The Wharton School, University of Pennsylvania, [email protected] Santiago Gallino Tuck School of Business, Dartmouth College,
The Effects of Start Prices on the Performance of the Certainty Equivalent Pricing Policy
BMI Paper The Effects of Start Prices on the Performance of the Certainty Equivalent Pricing Policy Faculty of Sciences VU University Amsterdam De Boelelaan 1081 1081 HV Amsterdam Netherlands Author: R.D.R.
Sawtooth Software. Perspectives Based on 10 Years of HB in Marketing Research RESEARCH PAPER SERIES
Sawtooth Software RESEARCH PAPER SERIES Perspectives Based on 10 Years of HB in Marketing Research Greg M. Allenby, The Ohio State University and Peter E. Rossi, University of Chicago Copyright 2003, Sawtooth
Teaching Multivariate Analysis to Business-Major Students
Teaching Multivariate Analysis to Business-Major Students Wing-Keung Wong and Teck-Wong Soon - Kent Ridge, Singapore 1. Introduction During the last two or three decades, multivariate statistical analysis
" Y. Notation and Equations for Regression Lecture 11/4. Notation:
Notation: Notation and Equations for Regression Lecture 11/4 m: The number of predictor variables in a regression Xi: One of multiple predictor variables. The subscript i represents any number from 1 through
Using simulation to calculate the NPV of a project
Using simulation to calculate the NPV of a project Marius Holtan Onward Inc. 5/31/2002 Monte Carlo simulation is fast becoming the technology of choice for evaluating and analyzing assets, be it pure financial
CHAPTER 9 EXAMPLES: MULTILEVEL MODELING WITH COMPLEX SURVEY DATA
Examples: Multilevel Modeling With Complex Survey Data CHAPTER 9 EXAMPLES: MULTILEVEL MODELING WITH COMPLEX SURVEY DATA Complex survey data refers to data obtained by stratification, cluster sampling and/or
