Statistics, Data Analysis & Econometrics
Using the LOGISTIC Procedure to Model Responses to Financial Services Direct Marketing

David Marsh, Senior Credit Risk Modeler, Canadian Tire Financial Services, Welland, Ontario

ABSTRACT

It is more important than ever to accurately select receptive consumers for direct solicitation. Increasingly saturated markets, the growing costs of direct mail and telemarketing, decreasing consumer responsiveness and increasing dissatisfaction with direct marketing methods combine to require a more finely tuned approach. With a database of customer information and the LOGISTIC procedure, a predictive model can be produced and put into operation very quickly. This paper addresses aspects of building a marketing response model using the SAS system in a financial services setting. Topics discussed include experiment design, data screening, preliminary data analysis and characteristic selection, model selection, and validation and tracking issues.

INTRODUCTION

Today it is vital for marketers to make the most of every customer contact opportunity. In addition to the direct costs associated with each contact, there is the less tangible harm to the company name from junk mail or harassing phone calls. The long-term goal of any marketing modeler should be a world in which no consumer receives junk mail; that is, every contact they have with a company is just the sort of thing they would be interested in. There would seem to be only two ways of accomplishing this: get to know each customer as an individual, or develop your predictive modeling skills to the point where you can, with near certainty, target only customers interested in an unsolicited promotion. This paper gives an overview of how the SAS system can be used to help build response models in the context of a financial services organization. It does not discuss in detail the statistical algorithms behind the SAS procedures.
PROC LOGISTIC, when coupled with careful planning and a knowledge of both your customer and your product, can help you create marketing response models that will reduce your costs and increase consumer response rates.

METHOD

For demonstration purposes, this paper uses examples from the solicitation by telemarketing of retail credit card holders for an extended warranty product. This offer doubles the manufacturer's warranty on any product purchased with the card. Any cardholder who agrees to sign up for the product and who pays the first month's premium on their card is considered a positive response; everyone else is a negative.

Experiment Design

Experiment design is arguably the most important step in the development of a marketing response model. If the design is flawed, no amount of data manipulation or clever statistical technique will be able to compensate for it. Experiment design seeks to obtain results for a sample of accounts such that a logistic regression model can be built that is applicable to the entire target market. In other words, to build a proper response model, you need to test the marketing offer on a random (or pseudorandom) sample of accounts, build the model on that sample, and then apply the model to the entire population. To that end, two questions must be answered: who should be solicited, and how many should be included in the sample?

When selecting a sample for model-building purposes, it is important to make sure the sample is homogeneous and drawn from the same population that will be targeted with the promotion. Cardholders who will never be selected for logical reasons (e.g. closed accounts, accounts not in good standing) should be excluded from the sample, as should those who don't fit the marketing profile (e.g. in a region that's not targeted, or outside a specific demographic). If cardholders will not be solicited more than once, this should be reflected in the sample as well.
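The exclusions above are simple to encode as a filter over the account base. A minimal Python sketch (used here as a scratchpad alongside the SAS workflow; all field names are hypothetical, not from the paper's data warehouse):

```python
def eligible(account, targeted_regions):
    """Apply the logical and marketing-profile exclusions before sampling."""
    return (account["status"] == "open"
            and account["good_standing"]
            and account["region"] in targeted_regions
            and not account["previously_solicited"])

# Four toy accounts: only the first passes every screen.
accounts = [
    {"id": 1, "status": "open",   "good_standing": True,  "region": "ON", "previously_solicited": False},
    {"id": 2, "status": "closed", "good_standing": True,  "region": "ON", "previously_solicited": False},
    {"id": 3, "status": "open",   "good_standing": False, "region": "ON", "previously_solicited": False},
    {"id": 4, "status": "open",   "good_standing": True,  "region": "NT", "previously_solicited": False},
]
frame = [a["id"] for a in accounts if eligible(a, {"ON", "QC"})]
print(frame)  # [1]
```

The random sample is then drawn from `frame`, not from the raw account base, so the modeling population matches the population that will actually be promoted.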
Because the responses will be binary (i.e. signed up or didn't), the expected response rate is the only major factor to consider, outside of cost, in selecting a sample size. The total size of the sample is important, but it is just as important to have a sufficient number of responders to build a model. A good rule of thumb is that at least 150 responders are necessary to build a basic model. When selecting a sample size, consider other analyses that might be conducted in addition to modeling (e.g. financial, telemarketing script) and adjust the size accordingly. A typical response rate to telemarketing is about 3%, so in our example of an extended warranty product we selected 10,000 cardholders for our sample.

Sometimes it may be more sensible not to build a model at all, and these instances should be identified during the experiment design phase. Examples include when the population is expected to change significantly, when the
majority of customers have already been solicited, or when a mass mailing strategy is already in place.

Data Collection & Screening

While waiting for the results of the test mailing is an ideal time to collect the data on the independent variables. These values will either be purchased or be found in your data warehouse. Experience and knowledge of your customer base and product are invaluable here (e.g. don't sell refrigerators to people north of 60°!). Don't ignore experience with similar products and the input of others. Table 1 lists the data we collected for our example. Normally many more pieces of data will be collected and analyzed; a good rule of thumb is to start with roughly 20 to 30 variables and narrow it down to 6 to 12.

Table 1

  Variable Name   Description                                Valid Values
  Avg_Bal         Average card balance over past 12 months   -$250 to $4,500
  Card_tenure     Age of account in months                   0 to 505
  CtacMemb        Member of roadside assistance program      1 = Member, 0 = Non-member
  Geo_Code        Geodemographic code                        N, R, T, U, S
  Risk_Scr        Current credit risk score                  500 to …
  WPPromo         Number of times promoted with similar      0 to 2
                  products in the past

Once the data has been collected, be on the lookout for data that doesn't make sense (e.g. year of birth 1850) and try to find the source of the problem. At this stage it is important to consult the parties with the most detailed knowledge of the data in question, whether that is the data analysis group, marketing, finance or credit risk. Check also for outliers to determine reasonableness. A good rule of thumb for normally distributed data is to look closely at any data points more than 1.5 times the interquartile range from either the 25th or 75th percentile, or more than 3 standard deviations from the mean. In our example, we ran the UNIVARIATE procedure on each variable. The code below runs a summary data analysis for the risk score. The output follows.
PROC UNIVARIATE DATA=VARIABLE_SELECTION;
   VAR RISK_SCR;
   TITLE 'Analysis of Risk Score';
RUN;

[PROC UNIVARIATE output for RISK_SCR: moments, basic statistical measures, tests for location (Student's t, sign, signed rank, all Pr < .0001), quantiles and extreme observations. Most numeric values were lost in extraction; surviving figures include a variance of 3657 and lower quantiles of 548 (10%), 526 (5%) and 482 (1%).]

From the interquartile range (80) and the 25% and 75% quartiles (585 and 665 respectively), we calculate an acceptable minimum and maximum of 465 and 785. Calculating three standard deviations from the mean, we compute a minimum and maximum of 443 and 806.
There are data points that fall outside of our two acceptable ranges, but after speaking with the credit risk department, we realize that all but the risk scores of 0 and 956 are acceptable. Looking more closely at those two points, we determine that the 0 was miscoded and should be 600, and the 956 should be 656. Similar analyses of the other potential variables were conducted, and we determined the data was acceptable.

Preliminary Data Analysis & Characteristic Selection

Once the response file is back from the telemarketing firm, it is a simple matter to merge the responses with the data file collected during the previous phase. It is good practice to merge such data on both an account identifier and a tracking code:

DATA TOTAL_RESPONSE_FILE;
   MERGE VARIABLE_SELECTION (IN=A)
         RESPONSE_FILE (IN=B);
   BY ACCOUNT_ID TRACK_CODE;
   IF A AND B;
RUN;

It is standard practice to hold back a portion of the completed file to validate the model once it is built, typically 10 to 20% of the total base. For the purposes of our example, however, all the observations were included.

Selection of characteristics for the model is informed not just by numerical analysis but also by knowledge of the product and customer base. It is important to select characteristics that are predictive, independent, stable with respect to the others and to the responses, and that make logical sense. Predictive variables can be identified using the FREQ procedure for discrete data and the TTEST procedure for continuous variables. These are not the most exhaustive tests, nor even the most statistically appropriate, but at this stage we are interested only in a rough idea of the data. We will refine these variables further once we get to the modeling stage.

In our example, the variable CTACMEMB is clearly predictive, based on the results generated by the following SAS code, particularly the chi-square statistic for the table.

PROC FREQ DATA=TOTAL_RESPONSE_FILE;
   TABLES RESPONSE*CTACMEMB / NOPERCENT NOROW CHISQ;
   TITLE 'Testing Predictiveness of CTAC';
RUN;

[PROC FREQ output: cross-tabulation of RESPONSE by CTACMEMB with chi-square, likelihood ratio, continuity-adjusted and Mantel-Haenszel chi-square statistics, phi coefficient, contingency coefficient, Cramer's V, and Fisher's exact test. Most values were lost in extraction; the cell (1,1) frequency was 8,034.]

PROC TTEST applied to a continuous variable such as AVG_BAL yields an indication of its quality as a predictor. From the example below, we can see that there is strong evidence to suggest AVG_BAL is also a good indicator of responsiveness.

PROC TTEST DATA=TOTAL_RESPONSE_FILE;
   CLASS RESPONSE;
   VAR AVG_BAL;
   TITLE 'Testing Variance of Avg_Bal wrt response';
RUN;

[PROC TTEST output: group means with confidence limits, standard deviations and standard errors, pooled and Satterthwaite t-tests, and the folded-F test for equality of variances. The numeric values were lost in extraction.]

After conducting similar analysis for the six variables examined, and through discussions with Finance, Marketing and Credit Risk, we arrived at the following groupings of the variables (Table 2). The new variables in each group are mutually exclusive, so each observation will have one and only one variable in each group set to 1, with all the others set to 0.

Table 2

  New Variable   Source Variable   Range
  ABAL01         Avg_bal           <= $250
  ABAL02         Avg_bal           ($250, $750]
  ABAL03         Avg_bal           > $750
  CTEN01         Card_tenure       <= 12
  CTEN02         Card_tenure       (12, 24]
  CTEN03         Card_tenure       > 24
  CTMEM01        CtacMemb          0
  CTMEM02        CtacMemb          1
  GEO01          Geo_Code          R, S, N
  GEO02          Geo_Code          T, U
  RS01           Risk_Scr          <= 600
  RS02           Risk_Scr          (600, 650]
  RS03           Risk_Scr          > 650
  WP01           WPPromo           0
  WP02           WPPromo           > 0

The final step in characteristic selection involves screening to make sure collinearity will not overly influence the model. In the final model output this may appear as:

1. Variables that have a negative coefficient when the preliminary analysis suggests it should be positive
2. Coefficients that change dramatically with the addition of an x variable
3. Variables that come in as non-significant despite preliminary analysis that suggests otherwise
4. A number of variables that do not pass significance tests but yield a model with excellent fit statistics

The quickest way to screen for this is by running the CORR procedure on the variables and looking more closely at values higher than 0.5.

PROC CORR DATA=TOTAL_RESPONSE_FILE;
   VAR RESPONSE ABAL01-ABAL02 RS01-RS03 CTMEM02 WP01
       CTEN01-CTEN02 GEO01;
   TITLE 'Printing Correlations for Logistic Variables';
RUN;

Model Selection

Once the variables to input to the model have been selected, the process moves quite quickly. To begin with, we use forward, backward, stepwise and no selection on all the variables to get a general sense of how the variables are working together.
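The inputs to these selection runs are the mutually exclusive indicators defined in Table 2. As a quick illustration of that coding outside SAS, here is a minimal Python sketch of the Avg_Bal group, assuming the cut points shown in the table (the same pattern applies to the other groups):

```python
def code_avg_bal(avg_bal):
    """Map a raw average balance to the mutually exclusive ABAL indicators."""
    return {
        "ABAL01": int(avg_bal <= 250),
        "ABAL02": int(250 < avg_bal <= 750),
        "ABAL03": int(avg_bal > 750),
    }

row = code_avg_bal(512.40)
print(row)                     # {'ABAL01': 0, 'ABAL02': 1, 'ABAL03': 0}
assert sum(row.values()) == 1  # exactly one indicator set, as the text requires
```

Because the intervals cover the whole valid range and do not overlap, every observation gets exactly one indicator per group, which is what makes the reference-coded dummies interpretable in the model.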
PROC LOGISTIC is the sensible choice for building this sort of model because binary data like these violate several of the assumptions of linear regression, most significantly the assumption of constant variance.

PROC LOGISTIC DATA=TOTAL_RESPONSE_FILE;
   MODEL RESPONSE = ABAL01-ABAL02 RS01-RS03 CTMEM02 WP01
                    CTEN01-CTEN02 GEO01 / SELECTION=STEPWISE;
   OUTPUT OUT=LOGOUT PRED=PRED;
RUN;

[PROC LOGISTIC output: model fit statistics (AIC, SC and -2 Log L for the intercept-only and full models); tests of the global null hypothesis BETA=0 (likelihood ratio, score and Wald, all with Pr > ChiSq < .0001); the residual chi-square test; and the summary of stepwise selection, in which the WP, RS, CTEN, ABAL, CTMEM and GEO effects entered step by step, closing with the note "No (additional) effects met the 0.05 significance level for entry into the model." The numeric values were lost in extraction.]
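The constant-variance point above is easy to check numerically: a 0/1 response with success probability p has variance p(1 - p), which changes with the predicted value, so the homoscedasticity assumed by ordinary least squares cannot hold. A short Python illustration (not part of the paper's SAS program):

```python
def bernoulli_variance(p):
    """Variance of a 0/1 response with success probability p."""
    return p * (1 - p)

for p in (0.03, 0.10, 0.50):
    print(p, bernoulli_variance(p))
# The variance grows from 0.0291 at a 3% response rate to 0.25 at 50%,
# so the error variance differs across fitted values.
```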
[PROC LOGISTIC output, continued: the Analysis of Maximum Likelihood Estimates and Odds Ratio Estimates for the intercept and the retained ABAL, RS, CTMEM, WP, CTEN and GEO effects, several with Pr > ChiSq < .0001, followed by the Association of Predicted Probabilities and Observed Responses: Percent Concordant 71.6, Percent Discordant 25.8, Percent Tied 2.5, with Somers' D, Gamma, Tau-a and c. The remaining numeric values were lost in extraction.]

Often the automated runs will yield identical results. In our example above, for instance, variables were only added to the model; none were removed despite it being a stepwise procedure, so the results of the forward selection are identical. When comparing the models, the AIC (Akaike's Information Criterion) and SC (Schwarz Criterion) consider both fit and number of characteristics: the lower the number, the better. The section "Testing Global Null Hypothesis" considers whether the model has any predictive value at all; that is, it tests the hypothesis that all the coefficients are zero. The chi-square values are all significantly high, so the model has at least some predictive value. The listings for individual characteristics give each coefficient and its related chi-square statistic; the Pr > ChiSq column tests the null hypothesis for each, and all of these variables pass well. The coefficients should all agree with the preliminary analysis. Finally, the Association of Predicted Probabilities and Observed Responses gives more summary statistics. The Percent Discordant, Somers' D and Gamma values all suggest that the model is strong.

In determining which variables to add, delete or merge, consider the chi-square tests. Variables with very close coefficients may need to be merged; the degree of differentiation between two coefficients can be tested with the CONTRAST statement. Another thing to consider when looking at this output is the order in which the variables entered the model and the effect they had on it.
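The maximum likelihood estimates translate directly into response probabilities via the inverse-logit transform: for an account's linear predictor xb (the intercept plus the coefficients of whichever indicators are set to 1), the predicted probability is p = 1/(1 + e^(-xb)). A Python sketch with hypothetical coefficient values, since the actual estimates did not survive in the output above:

```python
import math

def score(intercept, coefs, indicators):
    """Inverse-logit of the linear predictor for one account."""
    xb = intercept + sum(coefs[name] for name, on in indicators.items() if on)
    return 1.0 / (1.0 + math.exp(-xb))

# Hypothetical estimates, for illustration only (not the paper's fitted values):
coefs = {"ABAL02": 0.45, "RS01": -0.30, "WP01": 0.80}
p = score(-3.2, coefs, {"ABAL02": 1, "RS01": 1, "WP01": 1})
print(round(p, 4))  # predicted response probability for this account
```

Hand calculations like this are also the surest way to verify a scoring implementation, a point the paper returns to under Implementation and Tracking.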
Dramatic changes in the score chi-square with the addition of a single variable could be a sign of instability in the model. In the case of our example, we appear to have arrived at an optimal solution to the problem.

Validation

The simplest way to validate the model results is to create a lift table by dividing the population into deciles based on their predicted values (from LOGOUT), then comparing these to the actual observed responses. If a validation sample has been held back, it can also be scored and a similar table created for comparison purposes. The following SAS code creates our deciles and lift chart data:

PROC SORT DATA=LOGOUT;
   BY DESCENDING PRED;
RUN;

DATA LOGOUT;
   SET LOGOUT;
   /* The sample-size constant in this expression was lost in extraction */
   DEC = INT(_N_ / (( ) / 10 + 1) + 1);
RUN;

PROC TABULATE F=COMMA8.4;
   CLASS DEC;
   VAR PRED RESP;
   TABLE DEC, MEAN*PRED RESP*N RESP*SUM;
   TITLE 'Creating a Lift Table for Model';
RUN;

From Table 3 we see the model fairly accurately predicts the response rate and is very successful in rank-ordering the responders.

Table 3

  Decile   N   Resp   Act   Pred
  1        …   …      …     13.1%
  2        …   …      …     7.6%
  3        …   …      …     5.6%
  4        …   …      …     4.3%
  5        …   …      …     3.9%
  6        …   …      …     3.3%
  7        …   …      …     2.9%
  8        …   …      …     1.7%
  9        …   …      …     1.4%
  10       …   …      …     0.3%

  (The N, Resp and actual-rate values were lost in extraction; the surviving
  rate column is shown under Pred.)
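The decile construction above can be cross-checked outside SAS. A Python sketch of the same lift-table logic: sort accounts by descending predicted probability, split into equal groups, and compare the mean predicted rate with the actual response rate per group (the data here are illustrative, not the paper's):

```python
def lift_table(preds, responses, n_groups=10):
    """Mean predicted vs. actual response rate by predicted-probability group."""
    ranked = sorted(zip(preds, responses), key=lambda pr: -pr[0])
    size = len(ranked) // n_groups
    rows = []
    for g in range(n_groups):
        chunk = ranked[g * size:(g + 1) * size]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        actual = sum(r for _, r in chunk) / len(chunk)
        rows.append((g + 1, round(mean_pred, 4), round(actual, 4)))
    return rows

# Tiny illustrative example: 20 accounts split into 2 groups of 10.
preds = [i / 20 for i in range(20)]
resp = [1 if p > 0.6 else 0 for p in preds]
table = lift_table(preds, resp, n_groups=2)
for row in table:
    print(row)
# → (1, 0.725, 0.7) then (2, 0.225, 0.0)
```

A well-ranked model shows actual response rates falling monotonically down the groups, as Table 3 does across its deciles.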
Implementation and Tracking

The greatest challenges in implementing a model are ensuring the SAS programs are coded properly and that the population against which the model is used is analogous to the population upon which it was built. The only guaranteed method of ensuring that the code is accurate is to score a sample of observations by hand. It is also sensible to use the same data retrieval programs in development and production, if possible.

Tracking is most easily accomplished by using the lift table which, when combined with a graph, clearly shows the lift a model provides over a random selection. That is, it clearly demonstrates the gain in efficiency of selecting marketing leads using a model rather than at random.

CONCLUSION

This paper has demonstrated that it is possible to build and implement a response model using basic SAS procedures in a very short period of time. Further refinements can be made to the resulting model, such as modeling response net of cancellations or with profit factored in.

AUTHOR CONTACT

David Marsh
Canadian Tire Financial Services
555 Prince Charles Dr.
Welland, Ontario L3C 6B5
CANADA
Voice: (905) ext
Fax: (905)
Internet: [email protected]

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are registered trademarks or trademarks of their respective companies.
Overview Classes 12-3 Logistic regression (5) 19-3 Building and applying logistic regression (6) 26-3 Generalizations of logistic regression (7) 2-4 Loglinear models (8) 5-4 15-17 hrs; 5B02 Building and
Stepwise Regression. Chapter 311. Introduction. Variable Selection Procedures. Forward (Step-Up) Selection
Chapter 311 Introduction Often, theory and experience give only general direction as to which of a pool of candidate variables (including transformed variables) should be included in the regression model.
Cool Tools for PROC LOGISTIC
Cool Tools for PROC LOGISTIC Paul D. Allison Statistical Horizons LLC and the University of Pennsylvania March 2013 www.statisticalhorizons.com 1 New Features in LOGISTIC ODDSRATIO statement EFFECTPLOT
Multiple Linear Regression in Data Mining
Multiple Linear Regression in Data Mining Contents 2.1. A Review of Multiple Linear Regression 2.2. Illustration of the Regression Process 2.3. Subset Selection in Linear Regression 1 2 Chap. 2 Multiple
A LOGISTIC REGRESSION MODEL TO PREDICT FRESHMEN ENROLLMENTS Vijayalakshmi Sampath, Andrew Flagel, Carolina Figueroa
A LOGISTIC REGRESSION MODEL TO PREDICT FRESHMEN ENROLLMENTS Vijayalakshmi Sampath, Andrew Flagel, Carolina Figueroa ABSTRACT Predictive modeling is the technique of using historical information on a certain
Non-life insurance mathematics. Nils F. Haavardsson, University of Oslo and DNB Skadeforsikring
Non-life insurance mathematics Nils F. Haavardsson, University of Oslo and DNB Skadeforsikring Overview Important issues Models treated Curriculum Duration (in lectures) What is driving the result of a
Lecture 14: GLM Estimation and Logistic Regression
Lecture 14: GLM Estimation and Logistic Regression Dipankar Bandyopadhyay, Ph.D. BMTRY 711: Analysis of Categorical Data Spring 2011 Division of Biostatistics and Epidemiology Medical University of South
Chapter 13 Introduction to Linear Regression and Correlation Analysis
Chapter 3 Student Lecture Notes 3- Chapter 3 Introduction to Linear Regression and Correlation Analsis Fall 2006 Fundamentals of Business Statistics Chapter Goals To understand the methods for displaing
Course Text. Required Computing Software. Course Description. Course Objectives. StraighterLine. Business Statistics
Course Text Business Statistics Lind, Douglas A., Marchal, William A. and Samuel A. Wathen. Basic Statistics for Business and Economics, 7th edition, McGraw-Hill/Irwin, 2010, ISBN: 9780077384470 [This
Ordinal Regression. Chapter
Ordinal Regression Chapter 4 Many variables of interest are ordinal. That is, you can rank the values, but the real distance between categories is unknown. Diseases are graded on scales from least severe
KSTAT MINI-MANUAL. Decision Sciences 434 Kellogg Graduate School of Management
KSTAT MINI-MANUAL Decision Sciences 434 Kellogg Graduate School of Management Kstat is a set of macros added to Excel and it will enable you to do the statistics required for this course very easily. To
Electronic Thesis and Dissertations UCLA
Electronic Thesis and Dissertations UCLA Peer Reviewed Title: A Multilevel Longitudinal Analysis of Teaching Effectiveness Across Five Years Author: Wang, Kairong Acceptance Date: 2013 Series: UCLA Electronic
Data Analysis Methodology 1
Data Analysis Methodology 1 Suppose you inherited the database in Table 1.1 and needed to find out what could be learned from it fast. Say your boss entered your office and said, Here s some software project
Statistical Functions in Excel
Statistical Functions in Excel There are many statistical functions in Excel. Moreover, there are other functions that are not specified as statistical functions that are helpful in some statistical analyses.
Statistical Models in R
Statistical Models in R Some Examples Steven Buechler Department of Mathematics 276B Hurley Hall; 1-6233 Fall, 2007 Outline Statistical Models Structure of models in R Model Assessment (Part IA) Anova
Role of Customer Response Models in Customer Solicitation Center s Direct Marketing Campaign
Role of Customer Response Models in Customer Solicitation Center s Direct Marketing Campaign Arun K Mandapaka, Amit Singh Kushwah, Dr.Goutam Chakraborty Oklahoma State University, OK, USA ABSTRACT Direct
Failure to take the sampling scheme into account can lead to inaccurate point estimates and/or flawed estimates of the standard errors.
Analyzing Complex Survey Data: Some key issues to be aware of Richard Williams, University of Notre Dame, http://www3.nd.edu/~rwilliam/ Last revised January 24, 2015 Rather than repeat material that is
Alex Vidras, David Tysinger. Merkle Inc.
Using PROC LOGISTIC, SAS MACROS and ODS Output to evaluate the consistency of independent variables during the development of logistic regression models. An example from the retail banking industry ABSTRACT
Part 3. Comparing Groups. Chapter 7 Comparing Paired Groups 189. Chapter 8 Comparing Two Independent Groups 217
Part 3 Comparing Groups Chapter 7 Comparing Paired Groups 189 Chapter 8 Comparing Two Independent Groups 217 Chapter 9 Comparing More Than Two Groups 257 188 Elementary Statistics Using SAS Chapter 7 Comparing
SAS Syntax and Output for Data Manipulation:
Psyc 944 Example 5 page 1 Practice with Fixed and Random Effects of Time in Modeling Within-Person Change The models for this example come from Hoffman (in preparation) chapter 5. We will be examining
Instructions for SPSS 21
1 Instructions for SPSS 21 1 Introduction... 2 1.1 Opening the SPSS program... 2 1.2 General... 2 2 Data inputting and processing... 2 2.1 Manual input and data processing... 2 2.2 Saving data... 3 2.3
Bill Burton Albert Einstein College of Medicine [email protected] April 28, 2014 EERS: Managing the Tension Between Rigor and Resources 1
Bill Burton Albert Einstein College of Medicine [email protected] April 28, 2014 EERS: Managing the Tension Between Rigor and Resources 1 Calculate counts, means, and standard deviations Produce
Profile analysis is the multivariate equivalent of repeated measures or mixed ANOVA. Profile analysis is most commonly used in two cases:
Profile Analysis Introduction Profile analysis is the multivariate equivalent of repeated measures or mixed ANOVA. Profile analysis is most commonly used in two cases: ) Comparing the same dependent variables
SPSS Tests for Versions 9 to 13
SPSS Tests for Versions 9 to 13 Chapter 2 Descriptive Statistic (including median) Choose Analyze Descriptive statistics Frequencies... Click on variable(s) then press to move to into Variable(s): list
Week TSX Index 1 8480 2 8470 3 8475 4 8510 5 8500 6 8480
1) The S & P/TSX Composite Index is based on common stock prices of a group of Canadian stocks. The weekly close level of the TSX for 6 weeks are shown: Week TSX Index 1 8480 2 8470 3 8475 4 8510 5 8500
STATS8: Introduction to Biostatistics. Data Exploration. Babak Shahbaba Department of Statistics, UCI
STATS8: Introduction to Biostatistics Data Exploration Babak Shahbaba Department of Statistics, UCI Introduction After clearly defining the scientific problem, selecting a set of representative members
Chapter 5 Analysis of variance SPSS Analysis of variance
Chapter 5 Analysis of variance SPSS Analysis of variance Data file used: gss.sav How to get there: Analyze Compare Means One-way ANOVA To test the null hypothesis that several population means are equal,
NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( )
Chapter 340 Principal Components Regression Introduction is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates
Study Guide for the Final Exam
Study Guide for the Final Exam When studying, remember that the computational portion of the exam will only involve new material (covered after the second midterm), that material from Exam 1 will make
2. Filling Data Gaps, Data validation & Descriptive Statistics
2. Filling Data Gaps, Data validation & Descriptive Statistics Dr. Prasad Modak Background Data collected from field may suffer from these problems Data may contain gaps ( = no readings during this period)
ADVANCED FORECASTING MODELS USING SAS SOFTWARE
ADVANCED FORECASTING MODELS USING SAS SOFTWARE Girish Kumar Jha IARI, Pusa, New Delhi 110 012 [email protected] 1. Transfer Function Model Univariate ARIMA models are useful for analysis and forecasting
BNG 202 Biomechanics Lab. Descriptive statistics and probability distributions I
BNG 202 Biomechanics Lab Descriptive statistics and probability distributions I Overview The overall goal of this short course in statistics is to provide an introduction to descriptive and inferential
Predictor Coef StDev T P Constant 970667056 616256122 1.58 0.154 X 0.00293 0.06163 0.05 0.963. S = 0.5597 R-Sq = 0.0% R-Sq(adj) = 0.
Statistical analysis using Microsoft Excel Microsoft Excel spreadsheets have become somewhat of a standard for data storage, at least for smaller data sets. This, along with the program often being packaged
Univariate Regression
Univariate Regression Correlation and Regression The regression line summarizes the linear relationship between 2 variables Correlation coefficient, r, measures strength of relationship: the closer r is
An introduction to using Microsoft Excel for quantitative data analysis
Contents An introduction to using Microsoft Excel for quantitative data analysis 1 Introduction... 1 2 Why use Excel?... 2 3 Quantitative data analysis tools in Excel... 3 4 Entering your data... 6 5 Preparing
Predicting the US Presidential Approval and Applying the Model to Foreign Countries.
Predicting the US Presidential Approval and Applying the Model to Foreign Countries. Author: Daniel Mariani Date: May 01, 2013 Thesis Advisor: Dr. Samanta ABSTRACT Economic factors play a significant,
MISSING DATA TECHNIQUES WITH SAS. IDRE Statistical Consulting Group
MISSING DATA TECHNIQUES WITH SAS IDRE Statistical Consulting Group ROAD MAP FOR TODAY To discuss: 1. Commonly used techniques for handling missing data, focusing on multiple imputation 2. Issues that could
data visualization and regression
data visualization and regression Sepal.Length 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 8.0 I. setosa I. versicolor I. virginica I. setosa I. versicolor I. virginica Species Species
Risk Management : Using SAS to Model Portfolio Drawdown, Recovery, and Value at Risk Haftan Eckholdt, DayTrends, Brooklyn, New York
Paper 199-29 Risk Management : Using SAS to Model Portfolio Drawdown, Recovery, and Value at Risk Haftan Eckholdt, DayTrends, Brooklyn, New York ABSTRACT Portfolio risk management is an art and a science
2 Sample t-test (unequal sample sizes and unequal variances)
Variations of the t-test: Sample tail Sample t-test (unequal sample sizes and unequal variances) Like the last example, below we have ceramic sherd thickness measurements (in cm) of two samples representing
ADD-INS: ENHANCING EXCEL
CHAPTER 9 ADD-INS: ENHANCING EXCEL This chapter discusses the following topics: WHAT CAN AN ADD-IN DO? WHY USE AN ADD-IN (AND NOT JUST EXCEL MACROS/PROGRAMS)? ADD INS INSTALLED WITH EXCEL OTHER ADD-INS
Categorical Data Analysis
Richard L. Scheaffer University of Florida The reference material and many examples for this section are based on Chapter 8, Analyzing Association Between Categorical Variables, from Statistical Methods
LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING
LAB 4 INSTRUCTIONS CONFIDENCE INTERVALS AND HYPOTHESIS TESTING In this lab you will explore the concept of a confidence interval and hypothesis testing through a simulation problem in engineering setting.
