Using the LOGISTIC Procedure to Model Responses to Financial Services Direct Marketing

David Marsh, Senior Credit Risk Modeler, Canadian Tire Financial Services, Welland, Ontario

ABSTRACT

It is more important than ever to accurately select receptive consumers for direct solicitation. Increasingly saturated markets, the growing costs of direct mail and telemarketing, decreasing consumer responsiveness and increasing dissatisfaction with direct marketing methods combine to require a more finely tuned approach. With a database of customer information and the LOGISTIC procedure, a predictive model can be produced and put into operation very quickly. This paper addresses aspects of building a marketing response model using the SAS System in a financial services setting. Topics discussed include experiment design, data screening, preliminary data analysis and characteristic selection, model selection, and validation and tracking issues.

INTRODUCTION

Today it is vital for marketers to make the most of every customer contact opportunity. Beyond the direct costs associated with each contact is the less tangible harm to the company's name from junk mail or harassing phone calls. The long-term goal of any marketing modeler should be a world in which no consumer receives junk mail; that is, every contact they have with a company is exactly the sort of thing they would be interested in. There would seem to be only two ways of accomplishing this: get to know each customer as an individual, or develop your predictive modeling skills to the point where you can, with near certainty, target only customers interested in an unsolicited promotion. This paper gives an overview of how the SAS System can be used to help build response models in the context of a financial services organization. It does not discuss in detail the statistical algorithms behind the SAS procedures.
PROC LOGISTIC, when coupled with careful planning and knowledge of both your customer and your product, can help you create marketing response models that reduce your costs and increase consumer response rates.

METHOD

For demonstration purposes, this paper uses examples from the solicitation by telemarketing of retail credit card holders for an extended warranty product. The offer doubles the manufacturer's warranty on any product purchased with the card. Any cardholder who agrees to sign up for the product and who pays the first month's premium on their card is considered a positive response; everyone else is a negative.

Experiment Design

Experiment design is arguably the most important step in the development of a marketing response model. If the design is flawed, no amount of data manipulation or clever statistical technique will be able to compensate for it. Experiment design seeks to obtain results for a sample of accounts such that a logistic regression model can be built that is applicable to the entire target market. In other words, to build a proper response model, you need to test the marketing offer on a random (or pseudorandom) sample of accounts, build the model on that sample, and then apply the model to the entire population. To that end, two questions must be answered: who should be solicited, and how many should be included in the sample?

When selecting a sample for model-building purposes, it is important to make sure that the sample is homogeneous and drawn from the same population that will be targeted with the promotion. Cardholders who will never be selected for logical reasons (e.g. closed accounts, accounts not in good standing) should be excluded from the sample, as should those who don't fit the marketing profile (e.g. in a region that is not targeted, or outside a specific demographic). If cardholders will not be solicited more than once, this should be reflected in the sample as well.
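The second question, how many, is largely arithmetic once an expected response rate and a minimum responder count are assumed. A rough Python sketch of that sizing logic follows (the 3% response rate and 150-responder rule of thumb are the figures this section works with; the helper itself is illustrative and not part of the paper's SAS code):

```python
import math

def minimum_sample_size(min_responders, response_rate_pct):
    """Smallest sample expected to yield at least `min_responders`
    positive responses at the assumed response rate (in percent).
    Illustrative helper; not part of the paper's SAS workflow."""
    return math.ceil(min_responders * 100 / response_rate_pct)

# The rules of thumb used in this section: at least 150 responders,
# and a typical telemarketing response rate of about 3%:
n_min = minimum_sample_size(150, 3)    # 5,000 cardholders at bare minimum

# The sample of 10,000 actually chosen leaves headroom (about 300
# expected responders) for the other analyses mentioned in the text.
expected_responders = 10000 * 3 / 100
```

Sampling well above the bare minimum reflects the advice in this section: size the test for the other analyses that will be run against it, not just for the model.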
Because the responses will be binary (i.e. signed up or didn't), the expected response rate is, outside of cost, the only major factor to consider in selecting a sample size. The total size of the sample is important, but it is just as important to have a sufficient number of responders on which to build a model. A good rule of thumb is that at least 150 responders are necessary to build a basic model. When selecting a sample size, consider other analyses that might be conducted in addition to modeling (e.g. financial, telemarketing script) and adjust the size accordingly. A typical response rate to telemarketing is about 3%, so in our example of an extended warranty product we selected 10,000 cardholders for our sample.

Sometimes it may be more sensible not to build a model at all, and these instances should be identified during the experiment design phase. Examples include when the population is expected to change significantly, when the majority of customers have already been solicited, or when a mass mailing strategy is already in place.

Data Collection & Screening

The wait for the results of the test mailing is an ideal time to collect the data on the independent variables. These values will either be purchased or be found in your data warehouse. Experience and knowledge of your customer base and product are invaluable here (e.g. don't sell refrigerators to people north of 60°!). Don't ignore experience with similar products or the input of others. Table 1 lists the data we collected for our example. Normally many more pieces of data will be collected and analyzed; a good rule of thumb is to start with roughly 20 to 30 variables and narrow them down to 6 to 12.

Table 1
Variable Name   Description                            Valid Values
Avg_Bal         Average card balance over              -$250 to $4,500
                past 12 months
Card_tenure     Age of account in months               0 to 505
CtacMemb        Member of roadside assistance          1 = Member,
                program                                0 = Non-member
Geo_Code        Geodemographic code                    N, R, T, U, S
Risk_Scr        Current credit risk score              500 to 750
WPPromo         Number of times promoted with          0 to 2
                similar products in the past

Once the data has been collected, be on the lookout for data that doesn't make sense (e.g. year of birth 1850) and try to find the source of the problem. At this stage it is important to consult with the parties that have the most detailed knowledge of the data in question, whether that be the data analysis group, marketing, finance or credit risk. Check also for outliers to determine reasonableness. A good rule of thumb for normally distributed data is to look closely at any data points more than 1.5 times the interquartile range from either the 25th or 75th percentile, or more than 3 standard deviations from the mean. In our example, we ran the UNIVARIATE procedure on each variable. The code below runs a summary data analysis for the risk score; the output follows.
PROC UNIVARIATE DATA=VARIABLE_SELECTION;
   VAR RISK_SCR;
   TITLE 'Analysis of Risk Score';
RUN;

Analysis of Risk Score
The UNIVARIATE Procedure
Variable: RISK_SCR (RISK_SCR)

Moments
N                10000        Sum Weights         10000
Mean             624.6158     Sum Observations    6246158
Std Deviation    60.47539     Variance            3657.27292
Skewness         -0.1341447   Kurtosis            1.14597041
Uncorrected SS   3938018048   Corrected SS        36569071.9
Coeff Variation  9.6820143    Std Error Mean      0.6047539

Basic Statistical Measures
Location                     Variability
Mean     624.6158            Std Deviation         60.47539
Median   625.0000            Variance              3657
Mode     619.0000            Range                 956.00000
                             Interquartile Range   80.00000

Tests for Location: Mu0=0
Test          Statistic       p Value
Student's t   t  1032.843     Pr > |t|   <.0001
Sign          M  4999.5       Pr >= |M|  <.0001
Signed Rank   S  24997500     Pr >= |S|  <.0001

Quantiles (Definition 5)
Quantile     Estimate
100% Max     956
99%          761
95%          724
90%          703
75% Q3       665
50% Median   625
25% Q1       585
10%          548
5%           526
1%           482
0% Min       0

Extreme Observations
----Lowest----        ----Highest---
Value    Obs          Value    Obs
  0      5621          832     5206
 384     4200          841     7410
 384      559          845     7019
 415     4180          855     8572
 415     1813          956     1233

From the interquartile range (80) and the 25% and 75% quartiles (585 and 665 respectively), we calculate an acceptable minimum and maximum of 465 and 785. Calculating three standard deviations from the mean, we compute a minimum and maximum of 443 and 806. There are data points that fall outside of our two acceptable ranges, but after speaking with the credit risk department, we realize that all but the risk scores of 0 and 956 are acceptable. Looking more closely at those two points, we determine that the 0 was miscoded and should be 600, and the 956 should be 656. Similar analyses of the other potential variables were conducted, and we determined the data was acceptable.

Preliminary Data Analysis & Characteristic Selection

Once the response file is back from the telemarketing firm, it's a simple matter to merge the responses with the data file collected during the previous phase. It's good practice to merge such data on both an account identifier and a tracking code:

DATA TOTAL_RESPONSE_FILE;
   MERGE VARIABLE_SELECTION (IN=A)
         RESPONSE_FILE (IN=B);
   BY ACCOUNT_ID TRACK_CODE;
   IF A AND B;
RUN;

It is standard practice to hold back a portion of the completed file to validate the model once it's built. Typically, this will be 10 to 20% of the total base. For the purposes of our example, however, all the observations were included.

Selection of characteristics for the model is informed not just by numerical analysis but also by knowledge of the product and customer base.

The FREQ procedure output below (produced by the PROC FREQ code shown later) tests the predictiveness of CTAC membership:

Testing Predictiveness of CTAC
The FREQ Procedure
Table of RESPONSE by CTACMEMB

Frequency
Col Pct        CTACMEMB=0   CTACMEMB=1   Total
RESPONSE=0     8034         1516         9550
               95.84        93.75
RESPONSE=1     349          101          450
               4.16         6.25
Total          8383         1617         10000

Statistics for Table of RESPONSE by CTACMEMB
Statistic                      DF   Value
Chi-Square                      1   13.6852
Likelihood Ratio Chi-Square     1   12.5499
Continuity Adj. Chi-Square      1   13.2048
Mantel-Haenszel Chi-Square      1   13.6838
Phi Coefficient                     0.0370
Contingency Coefficient             0.0370
Cramer's V                          0.0370

Fisher's Exact Test
Cell (1,1) Frequency (F)   8034
Left-sided Pr <= F         0.9998
Right-sided Pr >= F        2.375E-04
Table Probability (P)      8.742E-05
Two-sided Pr <= P          3.855E-04

Sample Size = 10000
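The Pearson chi-square of 13.6852 reported above can be reproduced directly from the four cell counts of the RESPONSE by CTACMEMB table. A short Python sketch (illustrative only; the paper's computation is done by PROC FREQ):

```python
def pearson_chi2(table):
    """Pearson chi-square statistic for a two-way table of counts:
    sum over cells of (observed - expected)^2 / expected, where
    expected = row total * column total / grand total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# RESPONSE by CTACMEMB cell counts from the FREQ output above:
chi2 = pearson_chi2([[8034, 1516],
                     [349, 101]])   # ~13.685, matching PROC FREQ
```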
It is important to select characteristics that are predictive, independent, stable with respect to each other and to the responses, and that make logical sense. Predictive variables can be identified using the FREQ procedure for discrete data and the TTEST procedure for continuous variables. These are not the most exhaustive tests, nor even the most statistically appropriate, but at this stage we are interested only in a rough idea of the data. We will refine these variables further once we get to the modeling stage.

In our example, the variable CTACMEMB is clearly predictive, based on the results generated by the following SAS code, particularly the chi-square statistic for the table.

PROC FREQ DATA=TOTAL_RESPONSE_FILE;
   TABLES RESPONSE*CTACMEMB/NOPERCENT NOROW CHISQ;
   TITLE 'Testing Predictiveness of CTAC';
RUN;

PROC TTEST applied to a continuous variable such as AVG_BAL yields an indication of its quality as a predictor. From the example below, we can see that there is strong evidence to suggest AVG_BAL is also a good indicator of responsiveness.

PROC TTEST;
   CLASS RESPONSE;
   VAR AVG_BAL;
   TITLE 'Testing Variance of Avg_Bal wrt response';
RUN;

Testing Variance of Avg_Bal wrt response
The TTEST Procedure

Statistics
                          Lower CL             Upper CL
Variable   Class    N     Mean       Mean      Mean
AVG_BAL    0        9550  573.56     579.91    586.25
AVG_BAL    1        450   618.01     651.01    684
AVG_BAL    Diff (1-2)     -101.2     -71.1     -41.01

                    Lower CL              Upper CL
Variable   Class    Std Dev    Std Dev    Std Dev   Std Err
AVG_BAL    0        311.93     316.36     320.91    3.2372
AVG_BAL    1        334.31     356.16     381.09    16.79
AVG_BAL    Diff (1-2)  313.9   318.25     322.72    15.352

T-Tests

Variable   Method          Variances   DF     t Value
AVG_BAL    Pooled          Equal       9998   -4.63
AVG_BAL    Satterthwaite   Unequal     483    -4.16

Equality of Variances
Variable   Method     Num DF   Den DF   F Value
AVG_BAL    Folded F   449      9549     1.27

After conducting similar analyses for the six variables examined, and through discussions with Finance, Marketing and Credit Risk, we arrived at the following groupings of the variables (Table 2). The new variables in each group are mutually exclusive, so each observation will have one and only one variable in each group set to 1, with all the others set to 0.

Table 2
Variable Name   Source Variable   Range
ABAL01          Avg_bal           <= $250
ABAL02          Avg_bal           ($250, $750]
ABAL03          Avg_bal           > $750
CTEN01          Card_ten          <= 12
CTEN02          Card_ten          (12, 24]
CTEN03          Card_ten          > 24
CTMEM01         Ctac_memb         0
CTMEM02         Ctac_memb         1
GEO01           Geo_Code          R, S, N
GEO02           Geo_Code          T, U
RS01            Risk_scr          <= 600
RS02            Risk_scr          (600, 650]
RS03            Risk_scr          > 650
WP01            WPPromo           0
WP02            WPPromo           > 0

The final step in characteristic selection involves screening to make sure collinearity will not overly influence the model. In the final model output this may appear as:

1. Variables that have a negative coefficient when the preliminary analysis suggests they should be positive
2. Coefficients that change dramatically with the addition of an x variable
3. Variables that come in as non-significant despite preliminary analysis that suggests otherwise
4. A number of variables that do not pass significance tests but yield a model with excellent fit statistics

The quickest way to screen for this is by running the CORR procedure on the variables and looking more closely at values higher than 0.5.

PROC CORR;
   VAR RESPONSE ABAL01-ABAL02 RS01-RS03 CTMEM02 WP01
       CTEN01-CTEN02 GEO01;
   TITLE 'Printing Correlations for Logistic Variables';
RUN;

Model Selection

Once the variables to input to the model have been selected, the process moves quite quickly.
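Before moving on: the pooled t statistic of -4.63 reported in the TTEST output earlier can be recovered from the group summary statistics alone. A minimal Python sketch (illustrative, outside the paper's SAS workflow):

```python
import math

def pooled_t(n1, mean1, sd1, n2, mean2, sd2):
    """Equal-variance (pooled) two-sample t statistic and its degrees
    of freedom, computed from summary statistics only."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    std_err = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / std_err, df

# AVG_BAL summaries for non-responders (class 0) and responders
# (class 1), taken from the TTEST output:
t_stat, df = pooled_t(9550, 579.91, 316.36, 450, 651.01, 356.16)
# t_stat ~ -4.63 on 9998 degrees of freedom
```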
To begin with, we use forward, backward, stepwise and no selection on all the variables to get a general sense of how the variables are working together. PROC LOGISTIC is the sensible choice for building this sort of model because binary data like these violate several of the assumptions of linear regression, most significantly the assumption of constant variance.

PROC LOGISTIC;
   MODEL RESPONSE = ABAL01-ABAL02 RS01-RS03 CTMEM02 WP01
                    CTEN01-CTEN02 GEO01 / SELECTION=STEPWISE;
   OUTPUT OUT=LOGOUT PRED=PRED;
RUN;

The LOGISTIC Procedure

Model Fit Statistics
                           Intercept
              Intercept    and
Criterion     Only         Covariates
AIC           3672.423     3394.489
SC            3679.633     3466.592
-2 Log L      3670.423     3374.489

Testing Global Null Hypothesis: BETA=0
Test               Chi-Square   DF   Pr > ChiSq
Likelihood Ratio   295.9339     9    <.0001
Score              334.2510     9    <.0001
Wald               296.8750     9    <.0001

Residual Chi-Square Test
Chi-Square   DF   Pr > ChiSq
1.3344       1    0.2480

NOTE: No (additional) effects met the 0.05 significance level for entry into the model.

Summary of Stepwise Selection
       Effect                   Number   Score
Step   Entered   Removed   DF   In       Chi-Square   Pr > ChiSq
1      WP01                1    1        178.6441     <.0001
2      RS01                1    2        39.7511      <.0001
3      CTEN01              1    3        35.2942      <.0001
4      ABAL02              1    4        21.7952      <.0001
5      CTMEM02             1    5        14.4824      0.0001
6      ABAL01              1    6        14.2742      0.0002
7      CTEN02              1    7        12.0260      0.0005
8      RS02                1    8        10.0222      0.0015
9      GEO01               1    9        10.0667      0.0015
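The AIC and SC figures above follow mechanically from -2 Log L: AIC adds 2 per estimated parameter, while SC adds ln(n) per parameter. A quick Python check (illustrative; the fitted model has 10 parameters, the intercept plus nine effects, on 10,000 observations):

```python
import math

def aic_sc(neg2_log_l, n_params, n_obs):
    """AIC and SC (Schwarz criterion) from -2 Log L, the number of
    estimated parameters, and the number of observations."""
    aic = neg2_log_l + 2 * n_params
    sc = neg2_log_l + n_params * math.log(n_obs)
    return aic, sc

# Intercept-and-covariates model from the fit statistics above:
aic, sc = aic_sc(3374.489, 10, 10000)  # ~3394.489 and ~3466.592
```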

The LOGISTIC Procedure

Analysis of Maximum Likelihood Estimates
                              Standard
Parameter   DF   Estimate     Error     Chi-Square   Pr > ChiSq
Intercept   1    -3.5819      0.1482    583.7802     <.0001
ABAL01      1    -0.6569      0.1758    13.9570      0.0002
ABAL02      1    -0.6918      0.1105    39.1748      <.0001
RS01        1    0.8445       0.1268    44.3401      <.0001
RS02        1    0.4413       0.1371    10.3668      0.0013
CTMEM02     1    0.4544       0.1193    14.5180      0.0001
WP01        1    1.2718       0.1002    161.0498     <.0001
CTEN01      1    0.7481       0.1169    40.9149      <.0001
CTEN02      1    0.4464       0.1309    11.6271      0.0006
GEO01       1    -0.3144      0.0994    10.0012      0.0016

Odds Ratio Estimates
           Point       95% Wald
Effect     Estimate    Confidence Limits
ABAL01     0.518       0.367   0.732
ABAL02     0.501       0.403   0.622
RS01       2.327       1.815   2.983
RS02       1.555       1.188   2.034
CTMEM02    1.575       1.247   1.990
WP01       3.567       2.931   4.342
CTEN01     2.113       1.680   2.657
CTEN02     1.563       1.209   2.020
GEO01      0.730       0.601   0.887

Association of Predicted Probabilities and Observed Responses
Percent Concordant   71.6      Somers' D   0.458
Percent Discordant   25.8      Gamma       0.470
Percent Tied         2.5       Tau-a       0.039
Pairs                4297500   c           0.729

Often the automated runs will yield identical results. In our example above, for instance, variables were only added to the model; none were removed despite it being a stepwise procedure, so the results of the forward selection are identical. When comparing the models, the AIC (Akaike's Information Criterion) and SC (Schwarz Criterion) weigh both fit and the number of characteristics: the lower the number, the better. The section Testing Global Null Hypothesis considers whether the model has any predictive value at all; that is, it tests the hypothesis that all the coefficients are zero. The chi-square values are all highly significant, so the model has at least some predictive value. The listings for individual characteristics give each coefficient and its related chi-square statistic. The Pr > ChiSq column tests the null hypothesis that the coefficient is zero, and all of these variables pass well. The coefficients should all agree with the preliminary analysis.
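Each odds ratio above is simply the exponential of the corresponding coefficient, and a scored probability is the inverse logit of the linear predictor. A Python sketch using a few of the reported estimates (illustrative only; the hypothetical cardholder below is not from the paper):

```python
import math

# Selected maximum likelihood estimates from the output above:
estimates = {
    "Intercept": -3.5819,
    "WP01": 1.2718,
    "RS01": 0.8445,
    "CTEN01": 0.7481,
}

# Odds ratio for each effect = exp(coefficient):
odds_ratios = {name: math.exp(b) for name, b in estimates.items()
               if name != "Intercept"}

# Predicted response probability for a hypothetical cardholder with
# WP01 = RS01 = CTEN01 = 1 and every other indicator 0:
xb = (estimates["Intercept"] + estimates["WP01"]
      + estimates["RS01"] + estimates["CTEN01"])
prob = 1 / (1 + math.exp(-xb))   # inverse logit of the linear predictor
```

The computed ratios match the Odds Ratio Estimates table (e.g. WP01 at about 3.567 and RS01 at about 2.327), which is a useful check when transcribing coefficients into a production scoring program.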
Finally, the Association of Predicted Probabilities and Observed Responses gives more summary statistics. The Percent Discordant, Somers' D and Gamma statistics all suggest that the model is strong.

In determining which variables to add, delete or merge, consider the chi-square tests. Variables with very close coefficients may need to be merged; the degree of differentiation between the two can be tested with the CONTRAST statement. Another thing to consider when looking at this output is the order in which the variables entered the model and the effect they had on it. Dramatic changes in the score chi-square with the addition of a single variable could be a sign of instability in the model. In the case of our example, we are looking at an optimal solution to the problem.

Validation

The simplest way to validate the model results is to create a lift table by dividing the population into deciles based on their predicted values (from LOGOUT), then comparing these to the actual observed responses. If a validation sample has been held back, it can also be scored and a similar table created for comparison purposes. The following SAS code creates our deciles and lift chart data:

PROC SORT;
   BY DESCENDING PRED;
DATA LOGOUT;
   SET LOGOUT;
   DEC = INT(_N_/((10000-10+1)/10+1)+1);
PROC TABULATE F=COMMA8.4;
   CLASS DEC;
   VAR PRED RESP;
   TABLE DEC, MEAN*PRED RESP*N RESP*SUM;
   TITLE 'Creating a Lift Table for Model';
RUN;

From Table 3 we see the model fairly accurately predicts the response rate and is very successful in rank-ordering the responders.

Table 3
Decile   N      Resp   Act     Pred
1        1000   129    12.9%   13.1%
2        1000   84     8.4%    7.6%
3        1000   61     6.1%    5.6%
4        1000   43     4.3%    4.3%
5        1000   38     3.8%    3.9%
6        1000   34     3.4%    3.3%
7        1000   28     2.8%    2.9%
8        1000   18     1.8%    1.7%
9        1000   15     1.5%    1.4%
10       1000   0      0.0%    0.3%
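The DATA step's decile formula can be mirrored to see how it buckets the sorted file; a Python sketch of the same arithmetic (illustrative only):

```python
def decile(obs_number, n_obs=10000, n_groups=10):
    """Mirror of the DATA step assignment
    DEC = INT(_N_/((10000-10+1)/10+1)+1) for observation _N_,
    applied after sorting by descending predicted probability."""
    width = (n_obs - n_groups + 1) / n_groups + 1
    return int(obs_number / width + 1)

# Observation 1 (highest predicted probability) lands in decile 1;
# observation 10,000 (lowest) lands in decile 10, with exactly
# 1,000 observations per decile:
deciles = [decile(i) for i in range(1, 10001)]
counts = [deciles.count(d) for d in range(1, 11)]
```

Because the file is sorted by descending PRED first, decile 1 holds the 1,000 best prospects, which is what makes the resulting table a lift table.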

Implementation and Tracking

The greatest challenges in implementing a model are ensuring that the SAS programs are coded properly and that the population against which the model is used is analogous to the population on which it was built. The only guaranteed method of ensuring that the code is accurate is to score a sample of observations by hand. It is also sensible to use the same data retrieval programs in development and production, if possible.

Tracking is most easily accomplished by using the lift table which, when combined with a graph, clearly shows the lift a model provides over a random selection. That is, it clearly demonstrates the gain in efficiency of selecting marketing leads with a model rather than at random.

CONCLUSION

This paper has demonstrated that it is possible to build and implement a response model using basic SAS procedures in a very short period of time. Further refinements can be made to the resulting model, such as modeling response net of cancellations or factoring in profit.

AUTHOR CONTACT

David Marsh
Canadian Tire Financial Services
555 Prince Charles Dr.
Welland, Ontario L3C 6B5
CANADA
Voice: (905) 735-3131 ext. 3860
Fax: (905) 714-2701
Internet: david.marsh@ctal.com

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are registered trademarks or trademarks of their respective companies.