Sawtooth Software Prize: CBC Predictive Modeling Competition




Winning Prize: $5,000 USD
Description, Rules, and Procedures v1.1
(Inspired by the Netflix Prize Competition)

A. Design of the CBC Experiment

During October 2015, Sawtooth Software collected a CBC (Choice-Based Conjoint) dataset of 1,200 total respondents, using commercial panel sample provided by Survey Sampling International (SSI), on the topic of 7-11 day cruise vacations (see the attribute list in Appendix A). The dataset also features attitudinal, demographic, and past-behavior questions asked prior to the CBC questions (see Appendix C). The CBC presentation format was four full-profile concepts per task with no None option (see the CBC task appearance in Appendix B).

Similar to the Netflix Prize competition, the dataset features two types of holdout observations: Quiz and Test. The Quiz observations are used to score participants on the leader board from week to week during the 9-month competition. At the end of the competition, the Test observations are added to the Quiz observations for final validation to determine the final winner. Models that overfit the Quiz observations may prove less successful once the Test observations are added for the final predictive scoring.

The 1,200 respondents were randomly divided (in real time during data collection) into two cells:

- N=600 training cell: 21 total CBC tasks, including 6 random holdout tasks interspersed, where those 6 holdout tasks consisted of:
  o 3 random holdout tasks (the Within-Sample Quiz questions)
  o 3 random holdout tasks (the Within-Sample Test questions)
- N=600 Out-Of-Sample (OOS) holdout cell: 21 fixed tasks in one version (block), asked in rotated order, each identical in layout (4 concepts with no None) to the tasks in the training dataset. For predictive validation purposes, two of the concepts within each fixed holdout task are designated as Quiz concepts and the other two as Test concepts.
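The per-respondent split described above can be sketched in a few lines. This is purely an illustration of the bookkeeping (the actual assignment was made by Sawtooth Software in real time during data collection); the function and variable names are hypothetical, and only the counts come from the text.

```python
import random

random.seed(7)

# Counts from the design description; task indices are illustrative only.
N_TRAIN, N_TASKS = 600, 21

def assign_ws_holdouts(respondent_ids):
    """For each training-cell respondent, mark 6 of the 21 tasks as the
    interspersed within-sample holdouts, then randomly label 3 of them
    Quiz and 3 Test (mirroring the random per-respondent assignment)."""
    plan = {}
    for rid in respondent_ids:
        tasks = random.sample(range(1, N_TASKS + 1), 6)  # 6 distinct tasks
        plan[rid] = {"quiz": sorted(tasks[:3]), "test": sorted(tasks[3:])}
    return plan

plan = assign_ws_holdouts(range(1, N_TRAIN + 1))
```

Because participants never see which of the six holdout tasks are Quiz versus Test, a real scoring pipeline keeps a table like `plan` strictly on the organizer's side.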

The training dataset was designed using Sawtooth Software's Balanced Overlap random design strategy [1]: a near-orthogonal design with modest level overlap within each task, with unique per-respondent versions (blocks) of the questionnaire distributed across the 600 respondents. The fixed (single-version) design for the OOS holdout tasks also used Balanced Overlap for three of the four concepts per task, with the fourth concept manually constructed to be quite similar to (but not dominated by) one of the other three concepts. The manually constructed concept appeared in randomized positions across the tasks. Such competitive product scenarios, involving some highly similar competitive offerings, are quite common in the real world.

[1] Please note that 21 total random tasks were generated using the Balanced Overlap approach, which achieves excellent one-way and two-way level balance. However, we removed six tasks for holdout validation purposes, so the remaining 15 tasks no longer have the near-perfect one-way and two-way balance that the original set of 21 had. This caused a loss of about 4% in D-efficiency relative to usual Balanced Overlap designs for this project.

B. Rules and Procedures

1. Release of Liability and Agreement to Compete with Integrity and to the Benefit of the Community:
   a. All participants, both the individuals and the companies they represent, hold Sawtooth Software and the panel company (SSI) harmless from any potential and/or perceived damages due to participating in this competition.
   b. All participants agree not to game the predictions to increase their score, meaning that the predictions must come from the model rather than from non-theory-based adjustments (other than for scale factor) made to increase the fit score.
   c. Sawtooth Software intends to make public, via published articles and conference presentations, the learnings from the competition and the details of the winning team's model. Participants grant Sawtooth Software all rights to make such information public.

2. Lead team members must first write Bryan Orme (bryan@sawtoothsoftware.com) and indicate that the whole team agrees to the rules and conditions in this Sawtooth Software Prize: CBC Predictive Modeling Competition Description, Rules, and Procedures document. The email requesting participation must contain the phrase: "We/I <list team participants' names, with lead team member listed first>, representing team <team's name, limited to 40 characters>, have reviewed the Sawtooth Software Prize: CBC Predictive Modeling Competition Description, Rules, and Procedures v1.1 document and we/I agree to the rules and conditions therein, which include holding Sawtooth Software and SSI harmless from any potential and/or perceived damages due to participating in this competition."

3. If the same individual participates on multiple teams, only one of those teams may receive one of the top three awards (winner and finalists). Multiple teams may not have the same lead team member, though an individual is welcome to participate on multiple teams.

4. Files to download: Participants should download the data files from http://www.sawtoothsoftware.com/2016prize. The Training Data included in the download are split across two files (indexed by CaseID):
   - Covariates.csv: Attitudinal, demographic, preference, and past-behavior screener question responses by respondent, formatted as a .csv file (see Appendix C) with column labels describing the contents of the file.
   - Training.cho (alternatively, Training.csv): Experimental design (generated via Sawtooth Software's Balanced Overlap CBC design method) and responses (including latency time to complete each choice task, in the .cho file) for the 15 CBC choice tasks (for the format, see https://www.sawtoothsoftware.com/help/issues/ssiweb/online_help/hid_web_cbc_choformat.htm).
   The Holdout Design Data in the download are split across two files (only the first indexed by CaseID):
   - WSholdouts.csv: Just the design per respondent for the 3 Quiz and 3 Test random holdout tasks (without the responses or latency), also generated via the Balanced Overlap method, formatted as a .csv file with column labels describing the contents of the file.
   - OOSholdouts.csv: The experimental design for the 21 OOS fixed (one-version) holdout tasks (in a .csv file, with labels), so that participants may specify these as market simulation scenarios and predict the aggregate (population) OOS shares of preference. Hint: to give participants a feel for the appropriate scaling (steepness or flatness of shares) for the holdout shares of preference, the standard deviation of the true share values across the 21 OOS fixed holdout tasks is 10.72.

5. The responses to the Quiz & Test within-sample random tasks and the OOS fixed Quiz & Test holdout concepts are seen only by Sawtooth Software. Participants will not know which within-sample random holdout tasks (in WSholdouts.csv) are the Quiz vs. the Test questions (these have been randomly assigned per respondent), nor will they be told which of the OOS concepts (in OOSholdouts.csv) are the Quiz vs. the Test holdout concepts (two concepts per OOS holdout task have been selected randomly by Sawtooth Software to be the Quiz holdouts, and the remaining two have been designated the Test holdouts).

6. Prediction Files and Brief Model Description to Submit to Sawtooth Software: Team leaders submit their predictions (to bryan@sawtoothsoftware.com) under a single team name (the identity of the team members is kept confidential except for eventual winners, though their identity will be known to Sawtooth Software). Team leaders are to submit individual-level choice predictions (as discrete choices, not probabilities) for all 6 within-sample holdout tasks, plus aggregate share of preference predictions for the 21 OOS holdout tasks, in specific .csv formats:
   - WSpredictions.csv: CaseID, Predicted_Choice1 through Predicted_Choice6 (values of 1 to 4, indicating which of the four concepts each respondent is most likely to pick for the 6 random holdouts, where each row in the file is a different respondent). CaseID must match the original CaseIDs found in the training dataset. No labels in the first row, please.
   - OOSpredictions.csv: Predicted_SOP1 through Predicted_SOP84 (aggregate Shares Of Preference for 21 tasks x 4 concepts; 84 values stored in a single column of the file across 84 rows, where the shares sum to 100 within each of the 21 holdout task scenarios; 4 decimal places of precision recommended, though more may be submitted). No labels in the first row, please.

7. Separate models may be used to predict the within-sample holdouts and the OOS shares of preference. Participants must give a short description of the models with each submission to communicate the essentials of their modeling approach: typically three to five sentences, which will be made public only at the end of the competition.

8. The winning team, within 30 days after the competition concludes, must provide enough detail regarding their winning model that others may reproduce it using open-source or commercially available software (no proprietary or patented approaches allowed). Failure to do so disqualifies the team, and the prize will go to the next-highest-scoring team that qualifies for the prize.

9. An honorable mention (the "One-Model Wonder" award) will be given to the team achieving the best predictive score while using the same model to predict both the within-sample and out-of-sample holdouts. To receive this honor, make sure to indicate with each submission whether it is the same model used for both within-sample and OOS predictions. If this is not indicated, it will be assumed that separate models are being used to fit the within-sample and out-of-sample holdouts.

10. Additional honorable mentions for Best Within-Sample Hit Rate and Best Out-Of-Sample Share Predictions will also be given.

11. Scoring the Submissions: For the continuous leader board standings (published on Sawtooth Software's website and updated on either a weekly or monthly basis), only the predictions of the Quiz responses (representing a random half of the within-sample and out-of-sample holdout observations) will be scored, via automated script.

12. Teams may submit new predictions no more than once per week (except in the last two weeks of the competition, when they may submit once per business day), and their best submission is published as their current score.

13. The composite score is used for leader board ranking and in the final scoring to determine the winner. It is computed as the product of the average individual-level raw hit rate (across the within-sample random holdout tasks) and the R-squared for the shares of preference predictions for the OOS fixed choice concepts.

14. For both leader board tracking and in the final report of the competition, the individual hit rate score, the OOS share of preference R-squared, and the composite score will be reported.

15. Final Submission at Close of Competition: Prior to the deadline at the close of the competition (17:00, or 5 PM, Mountain Daylight Time, July 29, 2016), teams must submit their final predictions of the holdout responses via email (labeled in the subject field as "final prediction"); these will be used to determine the final winner. It is each team's responsibility, prior to 5 PM Mountain Daylight Time on July 29, 2016, to ensure that Bryan Orme has received its final model submission (via email confirmation or a phone call to +1 801 477 4700). If a team fails to submit predictions via an email labeled "final prediction" by the stated deadline, the team's previously submitted best model as shown on the leader board will be assumed to be the team's final prediction. Regarding questions about whether a submission was made prior to the deadline, the received time stamp on the email (made by the server processing the email) serves as the official time of submission.

16. The final score will be computed for each team's final prediction submission using the combined Quiz and Test responses. (Thus, the leader board first-place team may not necessarily be the final winner once the Test holdout tasks and concepts are added to the Quiz responses for predictive scoring.)

17. Details about How to Win the Prize: To claim the $5,000 cash prize, the winner must beat the composite score (when computed using both the Quiz and Test holdouts) of Sawtooth Software's Benchmark Ensemble solution (described in Appendix D below).

18. Sawtooth Software will seed the competition with 3 benchmark solutions: 1) Default CBC/HB, 2) Priors-Optimized CBC/HB [2], and 3) an Ensemble approach (described in Appendix D below). The methodologies and model selection for all three benchmark submissions will be set prior to seeing the holdout data and will not be iterated to improve the predictive fit (other than tuning RFC market simulations for scale factor to maximize the share of preference fit to the OOS Quiz holdout concepts). The predictive results for Sawtooth Software's three benchmark solutions will be shown on the leader board and in the final ranking for comparison.

[2] For the priors-optimized CBC/HB utility run, the optimization of the prior variance and prior degrees of freedom is done using only the training dataset, not by using or otherwise referring to the holdouts.

C. Additional Logistics and Prize Details

1. Length of the competition: 9 months, starting on November 1, 2015 and ending at 5 PM Mountain Daylight Time, July 29, 2016.

2. The first-place winning team ("Winner(s)") receives $5,000 USD. The first-place Winner also receives an opportunity to co-speak with Bryan Orme in a presentation covering the challenge results at the Sawtooth Software September 28-30 main conference sessions in Park City, Utah (complimentary conference registration for the 2.5 days included; does not include airfare, transportation, accommodations, or other travel-related expenses, which must be covered by the winner; one registration only per team, to be used by one member of the team).

3. The 2nd- and 3rd-place winners ("Finalists") receive an opportunity to co-lead a panel discussion at the 2016 Sawtooth Software September 28-30 main conference sessions in Park City, Utah (complimentary conference registration for the 2.5 days included; does not include airfare, transportation, accommodations, or other travel-related expenses, which must be covered by the winner; one registration only per team, to be used by one member of the team).

4. The top 3 overall winners (Winner and Finalists), the honorable mention ("One-Model Wonder"), and the honorable mentions for best within-sample or best out-of-sample predictions may wear a special name tag at the conference, provided by Sawtooth Software, indicating their team's special honor.

5. Additional teams (as selected by Sawtooth Software) and honorable mentions may be invited to participate in the panel discussion at the 2016 conference to further discuss with others the outcomes and learnings of the competition (one team member per team to serve as lead panelist).

6. Teams receiving the top 3 awards (the Winner and the two Finalists) must be entirely unique in terms of their team members. If the same individual participates on multiple teams, only one of those teams may receive a top 3 award.

7. After the Winner(s) have provided suitable documentation (as judged by Sawtooth Software) so that their model(s) may be understood and reproduced, and have returned tax-related forms, the cash prize will be delivered as early as November 30, 2016.

D. Additional Legal

1. All participants, both the individuals and the companies they represent, hold Sawtooth Software and the panel company (SSI) harmless from any potential and/or perceived damages due to participating in this competition.

2. Sawtooth Software intends to make public, via published articles and conference presentations, the learnings from the competition and the details of the winning team's model. Participants grant Sawtooth Software all rights to make such information public.

3. No purchase necessary to compete or win the prize.

4. All entrants must be at least 18 years of age.

5. Contest void where prohibited by law.

6. Sawtooth Software employees and direct relatives are not eligible to win the prize.

7. Winner(s) are responsible for paying any applicable local, state, federal, and international taxes. Winner(s) are required to complete applicable prize-related US tax reporting forms for Sawtooth Software. Failure to return the tax reporting forms by November 30, 2016 will result in forfeiture of the prize.
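To make the submission format (rule B.6) and the composite score (rule B.13) concrete, the sketch below checks an OOSpredictions.csv-style share vector and computes the composite score. All function names are hypothetical, and the R-squared uses the common 1 - SSres/SStot definition, which is an assumption on our part; the official scoring is done by Sawtooth Software's own script.

```python
def check_oos_shares(values):
    """values: the 84 floats from OOSpredictions.csv (21 tasks x 4 concepts,
    one value per row). Rule B.6: shares must sum to 100 within each task."""
    assert len(values) == 84, "expected 84 rows"
    for t in range(21):
        s = sum(values[4 * t:4 * t + 4])
        assert abs(s - 100.0) < 1e-4, f"task {t + 1} shares sum to {s}"

def hit_rate(predicted, actual):
    """Average individual-level raw hit rate: the fraction of
    (respondent, holdout task) cells where the predicted discrete
    choice (1..4) matches the actual choice."""
    cells = [p == a
             for rid in predicted
             for p, a in zip(predicted[rid], actual[rid])]
    return sum(cells) / len(cells)

def r_squared(pred, true):
    """Fit of predicted to true OOS shares (assumed 1 - SSres/SStot)."""
    mean_t = sum(true) / len(true)
    ss_res = sum((t - p) ** 2 for t, p in zip(true, pred))
    ss_tot = sum((t - mean_t) ** 2 for t in true)
    return 1.0 - ss_res / ss_tot

def composite_score(predicted, actual, pred_shares, true_shares):
    """Rule B.13: product of the average hit rate and the share R-squared."""
    return hit_rate(predicted, actual) * r_squared(pred_shares, true_shares)
```

A flat prediction of 25/25/25/25 per task passes the format check but earns a poor R-squared, which is why the hint about the standard deviation of the true shares (10.72) is worth using when tuning scale.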

Appendix A: CBC Attribute & Level List

Attribute 1: Destination
1. Mexican Riviera (sailing out of Los Angeles, CA)
2. Eastern Caribbean (sailing out of Fort Lauderdale, FL)
3. Western Caribbean (sailing out of Tampa, FL)
4. Alaska (sailing out of Seattle, WA)
5. Norway and Northern Europe (sailing out of Oslo, Norway)
6. Mediterranean (sailing out of Barcelona, Spain)

Attribute 2: Cruise Line
1. Norwegian
2. Disney
3. Royal Caribbean
4. Princess
5. Holland America
6. Carnival

Attribute 3: Number of Days
1. 7 days
2. 8 days
3. 9 days
4. 10 days
5. 11 days

Attribute 4: Stateroom
1. Inside stateroom (no windows)
2. Oceanview stateroom, porthole window
3. Balcony stateroom, sliding door to private balcony

Attribute 5: Ship Amenities/Age
1. Fewer amenities, older ship
2. More amenities, newer ship

Attribute 6: Price per Person per Day
1. $100 per person per day
2. $125 per person per day
3. $150 per person per day
4. $175 per person per day
5. $200 per person per day

(Note: the total price per person was also displayed below the price per person per day, computed as total days x price per person per day.)
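Per the note above, the displayed total price is simple arithmetic on two of the attributes. A tiny sketch (level values taken from Attributes 3 and 6 of this appendix; the function name is hypothetical):

```python
# Level values from Attributes 3 and 6 above.
DAYS_LEVELS = [7, 8, 9, 10, 11]
PRICE_PER_DAY_LEVELS = [100, 125, 150, 175, 200]

def total_price_per_person(days, price_per_day):
    """Total price per person, shown below the per-day price:
    total days x price per person per day."""
    return days * price_per_day
```

So the displayed totals range from 7 x $100 = $700 up to 11 x $200 = $2,200 per person.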

Appendix B: CBC Task Appearance

[Figure: screenshot of an example CBC choice task, showing four full-profile concepts with no None option.]

Appendix C: Additional Survey Questions Included in the Training Dataset (Covariates.csv)

Variable_Name: Cruisedbefore
Variable_Name: Howmany
Variable_Name: Intend (respondent disqualified if "Not at all likely" was picked)
Variable_Name: Abiltravel
Variable_Name: Abilspend
Variable_Names: Motivation_1 to Motivation_8 (multi-response question; 1=checked, 0=not checked)
Variable_Name: TopDestination
Variable_Name: TopCruiseline
Variable_Name: TopLength
Variable_Name: Demos1
Variable_Name: Demos2
Variable_Name: Demos3

[Screenshots of the corresponding survey questions accompanied each variable name in the original document.]
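Assembling the variable names above, here is a short sketch of reading Covariates.csv with Python's standard csv module. The exact column order and header spellings in the real file may differ, so treat the EXPECTED list as an assumption rather than the file's specification.

```python
import csv
import io

# Column names collected from Appendix C (the order is an assumption).
EXPECTED = (["CaseID", "Cruisedbefore", "Howmany", "Intend",
             "Abiltravel", "Abilspend"]
            + [f"Motivation_{i}" for i in range(1, 9)]
            + ["TopDestination", "TopCruiseline", "TopLength",
               "Demos1", "Demos2", "Demos3"])

def read_covariates(f):
    """Return rows keyed by CaseID. Motivation_1..Motivation_8 are
    multi-response flags (1=checked, 0=not checked)."""
    return {row["CaseID"]: row for row in csv.DictReader(f)}

# Illustrative one-respondent file built in memory:
sample = io.StringIO(",".join(EXPECTED) + "\n"
                     + "1001," + ",".join("1" for _ in EXPECTED[1:]) + "\n")
covariates = read_covariates(sample)
```

Keying by CaseID matters because the prediction files in rule B.6 must match the CaseIDs in the training data.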

Appendix D: Description of Sawtooth Software's Benchmark Ensemble Solution

(Inspired by Lattery's 2015 Sawtooth Software Conference presentation.)

An ensemble of Latent Class and HB solutions (using simple averaging to obtain consensus), where the ensemble contains 40 replicates:

- 20 replicates of Latent Class 24-group solutions, broken out early such that the last 10 iterations provide about 0.1% total lift in LL. Pseudo individual-level utilities for each replicate are developed by taking the weighted average of the part-worth utilities, where the weights are each respondent's probability of membership in each group.
- 20 replicates of HB solutions. First, optimal priors (prior degrees of freedom and prior variance) will be searched on the training dataset using jack-knife and bootstrap resampling. These optimal priors will be used in all HB replicates in the ensemble. Each HB replicate will be estimated using alternating sets of covariates, developed from combinations of the survey questions.

Method of predicting individual-level choices for Sawtooth Software's Ensemble Solution: Shares of preference for each within-sample random holdout task are computed using the logit rule for each of the 40 replicates. The individual hit rates are computed by averaging the shares of preference across the 40 replicates for each respondent, thus determining, for each random task, which concept has the highest share of preference and is the most likely choice for that respondent.

Method of predicting shares of preference for the OOS fixed holdout tasks for Sawtooth Software's Ensemble Solution: Randomized First Choice (stacking the raw individual-level utilities for all the replicates in the ensemble, such that the respondent utility run contains n x r cases in the conjoint simulator, representing n respondents by r replicates), tuned to the Quiz data for optimal scale factor.
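The within-sample prediction step described above (logit-rule shares per replicate, simple averaging across replicates, then picking the highest-share concept) can be sketched as follows. The utilities are toy numbers and the function names hypothetical; this is not Sawtooth Software's actual implementation.

```python
import math

def logit_shares(utilities):
    """Logit rule for one task: share_j = exp(U_j) / sum_k exp(U_k),
    where U_j is the total utility of concept j."""
    m = max(utilities)                       # stabilize the exponentials
    e = [math.exp(u - m) for u in utilities]
    total = sum(e)
    return [x / total for x in e]

def ensemble_choice(replicate_utils):
    """replicate_utils: one list of 4 concept utilities per replicate,
    for a single respondent and holdout task. Average the logit shares
    across replicates and return the most likely concept (1..4,
    matching the coding used in WSpredictions.csv)."""
    n = len(replicate_utils)
    avg = [0.0, 0.0, 0.0, 0.0]
    for utils in replicate_utils:
        for j, share in enumerate(logit_shares(utils)):
            avg[j] += share / n
    return max(range(4), key=avg.__getitem__) + 1
```

Averaging the shares (rather than the utilities) is what makes this a consensus of the 40 replicates: a replicate that is confidently wrong is diluted by the others.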