COMPARISON OF POSTAL AND ONLINE SURVEYS: COST, SPEED, RESPONSE RATES AND RELIABILITY




Research Conducted By: Education Market Research
Robert M. Resnick, Ph.D.
February 2012
With the support of MCH Strategic Data

© 2012 Education Market Research and MCH Strategic Data. Copying prohibited.

EXECUTIVE SUMMARY

The intent of this study was to put a number of hypotheses about postal vs. online surveys to the test. Conventional wisdom has it that online surveys are superior to postal surveys in terms of both speed and cost-efficiency. It is also generally assumed that the reliability of the data is essentially equivalent when postal and online survey data are compared. The other issue investigated here was the effect of questionnaire length on response rates and data reliability.

The empirical evidence used to answer these questions came from a parallel administration of the same EMR survey, at the same time, to separate but similar samples of educators. One set of surveys was printed and mailed, with responses collected via return mail, while the second set was deployed via e-mail, with the surveys filled out and the responses collected online. The data obtained through this simple experiment sheds light on the relative advantages and possible limitations of both postal and online surveys in terms of speed, cost-efficiency, and data reliability.

Comparing the actual cost, speed, response rates, and data reliability of the two survey modes yielded the following profile of pros and cons.

Relative Merits Of Postal vs. Online Surveys

CRITERION                     POSTAL                                              ONLINE
Cost                          $23.14 per usable response (9% less than online)    $25.22 per usable response
Speed                         85% of responses within 2 weeks                     85% of responses in less than 1 day
Response rate                 4.4% (6 times higher than online)                   0.75%
Optimum number of questions   30 questions or more                                15 questions is too many
Reliability                   High split-half and test-retest reliability         Uncertain; needs more research

In this specific test, the postal mode outpaced the online mode on four of the five key criteria, with the online mode superior only in terms of speed. If more than 10 questions are needed to get the job done, the postal mode would be the only prudent option. Given that the online mode is faster, but not cheaper, than the postal mode, it must be emphasized that there is evidently a significant trade-off between higher speed on the one hand and lower response rates and reliability on the other when the online mode is utilized.

INTRODUCTION

A market or customer survey project can be divided into four parts:

Four Phases Of A Survey Project

PHASE   DESCRIPTION
1       Conceptual Phase
2       Operational Phase
3       Analysis Phase
4       Implementation Phase

1. The conceptual phase, during which it is decided to whom, to how many, and when and where the surveys will be sent, what questions will be asked, what form the questions will take, how many there will be, and in what order they will appear on the questionnaire.

2. The operational phase, during which the surveys are prepared and then sent out to the target population, completed questionnaires are retrieved, and the data is tabulated.

3. The analysis phase, during which the tabulated data is carefully examined to determine whether the responses confirm what is already known or believed to be the case, or whether there are surprises and the responses suggest an alternative to what was previously believed to be true. There is also an opportunity to pinpoint significant differences, if any, in opinions and needs by grade level, job title, geographic region, etc., which could cause product developers and marketers to adopt different strategies for different sub-groups within the overall population of educators.

4. The implementation phase, during which those who receive the survey results and analyses decide, based at least in part on the data, what next steps to take.

Surveys are sometimes printed and mailed, filled out with pens and pencils by the respondents, and then mailed back. Sometimes they are deployed via e-mail with the responses collected online. Education Market Research (EMR) has, for seventeen years, primarily (although certainly not exclusively) used the printing and mailing approach for educator surveys. Recent advances in technology, and the ready availability of e-mail addresses for educators, have made the online approach a plausible alternative to the postal mode.

The purpose of this paper is to compare the two approaches in the Operational Phase (as defined above) of a survey project, which is where they differ in a number of important ways. It is assumed that the Conceptual, Analysis, and Implementation phases would be much the same regardless of which type of survey was used to obtain the data.

Some might wonder why it is even necessary to consider the relative merits of the two approaches. They might suggest that, on the face of it, the printing/mailing approach is the slow, expensive, old-fashioned or analog way, while the online approach is the fast, inexpensive, modern or digital way to conduct survey research. It is hard to argue the issue of speed, but a careful cost comparison is certainly warranted. And what about response rates, and the closely related issue of reliability of the data? Is there a trade-off to be concerned about between speed on the one hand and response rates and reliability on the other?

If it were an open-and-shut case in favor of online surveys, one would expect that all of the major survey companies would already have switched to the online mode. But that has not happened. Putting EMR aside, J.D. Power and Associates (a division of The McGraw-Hill Companies) is a good example of the continued reliance on postal surveys. The most recent J.D. Power 2012 Vehicle Reliability and Service Survey was mailed in October 2011. The survey instrument was eight pages long with 68 numbered questions (since many questions required multiple responses, there are actually more than 68 questions). An inquiry made to J.D. Power about the survey revealed that this flagship research project is done only by mail (although there are some other surveys that J.D. Power conducts online). Is it possible that J.D. Power has tested and discovered that surveys of this length do not lend themselves to online delivery?

So the intent of this study was to put a number of hypotheses about postal vs. online surveys to the test. Conventional wisdom has it that online surveys are superior to postal surveys in terms of both speed and cost-efficiency. It is also generally assumed (or possibly not considered at all) that the reliability of the data is essentially equivalent when postal and online survey data are compared. The other issue investigated here was the effect of questionnaire length on response rates and data reliability. Do longer surveys with relatively complex questions work as well in the online mode as shorter, simpler ones?

The empirical evidence used to answer these questions came from a parallel administration of the same survey, at about the same time, to separate but similar samples of educators. One set of surveys was printed and mailed, with responses collected via return mail, while the second set was deployed via e-mail, with the surveys filled out and the responses collected online. The data obtained through this simple experiment sheds light on the relative advantages and possible limitations of both postal and online surveys in terms of speed, cost-efficiency, and data reliability.

METHOD

In order to provide information on current trends in the elementary Reading market segment, EMR designed a detailed survey and then mailed it on October 17, 2011 to 18,000 randomly selected educators, segmented as follows.

Sampling Matrix

JOB TITLE               ELEMENTARY   MIDDLE/JHS   TOTAL
Classroom teacher       6,000        - -          6,000
Reading teacher         3,000        3,000        6,000
Curriculum supervisor   - -          - -          6,000
TOTAL                   9,000        3,000        18,000

The survey consisted of four printed pages with a total of 30 numbered questions, of which 26 were multiple choice and 4 required a write-in response. Of the 26 multiple choice questions, 6 offered the option of writing in an "other" response.

Three weeks later, on November 8, 2011, the same 30-question Reading Market survey was replicated using the online method. This survey contained identical questions, but due to the formatting necessary for on-screen viewing, its length expanded to 16 pages or screens, compared to 4 pages for the postal version. The invitation to participate in the survey was e-mailed to 90,000 randomly selected elementary teachers (excluding the 18,000 chosen for the postal survey).

In terms of a response deadline, the postal survey was mailed (via first class mail) on October 17, 2011 with a response deadline of November 11, 2011, a 4-week window. The online survey was deployed on November 8, 2011 with a November 20, 2011 response deadline, a 12-day window.

MCH Strategic Data

For both the postal and the online survey, the educator names were provided by MCH Strategic Data (www.mchdata.com), a leading compiler of education market data. MCH also handled the configuration and deployment of the online survey, and the collection and tabulation of responses. MCH's database features comprehensive coverage of public, private, and parochial schools, 15,000 school districts, and nearly 5 million educators. In 2010, MCH Strategic Data acquired a fully updated version of the QED Education Database and merged it with its own comprehensive K-12 database.

MCH Strategic Data has an 83-year commitment to accurate, complete, and timely marketing databases. The company compiles the foremost databases available for business-to-institution marketing, including education, health care, government, and religion, with the education database making up its largest, most comprehensive segment. The compiling operation is centered in the MCH Research Department in Sweet Springs, Missouri. The compiling team is staffed with full-time, year-round professionals who use a wide variety of techniques to update, verify, and enhance the database. Many of the Research staff members have first-hand subject area expertise, in addition to decades of MCH compiling experience. MCH employs a wide variety of compilation methods and sources to develop the foremost education marketing data available.

* Annual telephone surveys verify school and district telephone numbers, names and addresses, fundamental attributes like enrollment and grade span, and key personnel names, and identify new and closed schools. MCH telephone-verifies information on every school district and 99% of schools.
* School rosters and class schedules are used to add, delete, or verify millions of educator names and job functions.
* Published and web-based sources are used to validate information, rectify discrepancies, and verify teacher names and job functions.
* State directories and licensing files assure completeness of the database and add comprehensive feature attributes.
* Federal Department of Education statistics provide ethnicity, funding, and other attributes, and verify links between schools and districts.
* Postal address verification ensures that mailing addresses are always deliverable and receive appropriate postal discounts.
* E-mail address verification assures deliverability of e-mail addresses and maintains CAN-SPAM compliance.
* Geospatial processing identifies and rectifies questions and issues arising from geo-location analysis.
* Customer-provided information from the GetThere™ Guarantee program quickly identifies questionable addresses, which allows MCH to investigate and update information in its database.
* Internal quality control audits include dozens of automated and manual checks to identify and correct errors, preventing them from being added to MCH's all-inclusive database.

The MCH compiling professionals schedule these and other compiling processes to ensure that the MCH Strategic Data education database is as comprehensive, complete, and accurate as possible.

RESULTS

Survey Costs

In most cases, cost is the primary concern of the client who requests a market survey. If alternate methods are available to achieve the same goal, it is almost inevitable that the less expensive option will be chosen. The conventional wisdom seems to be that online surveys, which avoid all printing and postage costs, are less expensive to conduct than postal surveys. In the case of EMR's two test surveys, it was just the opposite: the postal mode was actually less expensive than the online mode.

The cost of obtaining a mailing list, printing and mailing the surveys (first class mail), return postage, and data tabulation was just over $14,000. The cost of obtaining an e-mail list, formatting the survey for online viewing, deploying the survey, collecting the online responses, and tabulating the data was just over $17,000.

Of course, the postal survey was mailed to a list of 18,000 names while the online survey invitation was e-mailed to 90,000 names, so it is no surprise that the online survey cost more. If the lists had been of equal size (18,000 names in both cases), the online survey cost would have been reduced to something in the neighborhood of $5,000. However, the goal was to produce a similar number of responses from each survey mode. Since postal surveys generally yield higher response rates than online surveys, it was necessary to increase the size of the online sample in order to ensure a number of responses roughly equal to what the mail survey was expected to deliver. With a goal of 600-900 completed surveys in mind, it was estimated going in that a mailing of 18,000 (with a 3%-5% response rate) would be sufficient to produce that result. On the other side of the coin, since online survey response rates are typically between 0.5% and 1.5%, it was determined that a deployment of 90,000 would be necessary to match the postal survey result. As it turned out, that forecast was right on target for both modes.

Given that the postal survey actually produced 605 usable returns and the online survey produced 674, it is fair to consider them equivalent in terms of effectiveness despite the wide difference in initial sample size. That being the case, it is also fair to say that the postal survey was less expensive than the online survey, both in total cost and in cost per usable response. The cost per usable response computes to $23.14 for the postal survey ($14,000/605 responses) and $25.22 for the online survey ($17,000/674 responses). On a percentage basis, the online survey's total cost was 21% higher, and its cost per response was 9% higher than the postal survey's. So if cost is of the essence, the postal survey gets the check mark on this score.
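For readers who want to trace the arithmetic, the short sketch below reproduces the cost-per-response comparison using the rounded totals quoted above; since only "just over" $14,000 and $17,000 are given, the computed figures are approximate.

```python
# Rough reconstruction of the cost-per-usable-response figures reported above.
# The totals are the report's rounded numbers, so the computed values only
# approximate the published $23.14 and $25.22.
postal_cost, postal_responses = 14_000, 605
online_cost, online_responses = 17_000, 674

postal_cpr = postal_cost / postal_responses   # ~= 23.14 dollars per usable response
online_cpr = online_cost / online_responses   # ~= 25.22 dollars per usable response

total_premium = (online_cost - postal_cost) / postal_cost       # ~= 0.21 (21% higher total cost)
per_response_premium = (online_cpr - postal_cpr) / postal_cpr   # ~= 0.09 (9% higher per response)

print(f"postal ${postal_cpr:.2f}, online ${online_cpr:.2f} per usable response")
print(f"online premium: {total_premium:.0%} total, {per_response_premium:.0%} per response")
```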

Elapsed Time Required To Collect Survey Data

The postal version of EMR's survey allowed a 4-week window from mail date to response deadline. The response pattern for this survey, as is typical for such projects, was relatively few responses received through the first week, the vast majority coming in at the end of the first week and into the second week, and then responses gradually trailing off in the third and fourth weeks. Around 85% were received within the first two weeks, and 15% in the last two weeks. Additional responses, one or two at a time, continued to come in after the response deadline.

The online version of EMR's identical survey allowed a 12-day window for responses. As it turned out, virtually all of the responses were received within the first 24 hours; none were received after the second day. So the same 85% completion level that took two weeks to reach with the postal survey took less than one day to reach with the online survey. Obviously, the online survey proved to be far superior in terms of speed of data collection. If speed is of the essence, clearly the online mode is the only choice.

Response Rates

EMR's postal survey sample included elementary classroom teachers, Reading teachers, Principals, and curriculum supervisors, while the online version of that same survey included only elementary classroom teachers. Comparing apples to apples, the response rate for elementary classroom teachers was 4.4% in the postal survey and 0.75% in the online survey. In other words, the response rate was six times higher in the postal survey than in the online survey. Again, the two survey modes can be considered equally effective in terms of total usable responses produced, but the postal survey was five times more efficient, producing a roughly equal number of usable responses from an initial sample one fifth the size of the online survey sample.

Are those response rates typical of postal and online surveys? Based on EMR's experience, a 4.4% response rate is actually on the low side of the continuum for a postal survey. Such job titles as classroom teacher, school librarian, and department chairperson have often produced response rates as high as 8% to 10% on prior EMR surveys of educators. In terms of online surveys, MCH Strategic Data's most recent project, a survey of Principals, obtained a 1.3% response rate, which is considerably better than EMR's Reading survey result (0.75%). Part of that difference can be attributed to the length (number of questions) of the EMR and MCH surveys. The interaction of survey length and response rate is discussed in a separate section of this report.
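The sketch below makes the six-times comparison and the sample-size planning explicit. The 650-response target is a hypothetical figure chosen from inside the 600-900 goal mentioned earlier, and the 4% mail return is an assumed value within the 3%-5% planning range.

```python
# Sketch of the response-rate comparison and the sample-size planning behind it.
postal_rate = 0.044    # elementary classroom teachers, postal survey
online_rate = 0.0075   # elementary classroom teachers, online survey
print(round(postal_rate / online_rate, 1))   # 5.9 -- roughly six times higher

# Planning backwards from a completed-response goal (650 is a hypothetical
# target inside the 600-900 range the study aimed for):
target = 650
print(round(target / 0.04))          # 16250 names by mail at an assumed 4% return
print(round(target / online_rate))   # 86667 names by e-mail at 0.75%
```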

All of the anecdotal evidence points to an expected response rate range for postal surveys of between 3% and 10%. Online surveys should be expected to yield response rates between 0.5% and 1.5%. At either extreme, comparing equivalent surveys, one would anticipate that the postal response rate will be around six times higher than the online response rate. If a high response rate is essential, the postal mode is the best choice.

Interaction of Survey Length And Response Rates

As a general rule of thumb, the longer the survey (more questions), the lower the response rate. It is possible to get people to respond to just a handful of quick questions with little or no monetary incentive. On the other hand, as the survey gets longer, it takes a stronger incentive to keep the response rate within an acceptable range. The question here is: what is the effect of survey length on response rates for the postal and the online modes? This test of the same 30-question survey via both the postal and online modes revealed some interesting answers.

When people receive a printed survey via regular first class mail, it makes sense to think that many of them toss it in the trash without even opening it, just as most people delete an e-mail invitation to participate in an online survey without even opening it. Those who do open the postal survey are able to look it over, scan the number and complexity of the questions, and decide whether they are willing to take the time and effort to fill it out. If they decide to go forward, they also have the option of answering only some of the questions and sending back an incomplete questionnaire.

Similarly, those who open the e-mail invitation can decide whether to click the link to the survey. If they opt to go to the survey, the process then becomes more opaque for them, because they cannot readily see the number and complexity of the questions, which are presented one screen at a time. [A "survey completed" percentage bar was included at the bottom of the survey, but it did not state how many questions remained to be answered.] If they start answering and, at some point, decide that it is taking too much time, or that there are too many questions, they can elect to stop, and their questionnaire will be incomplete.

There is no way to know how many people actually start filling out a postal survey and then fail to put it in the return mail; we only know how many are returned, complete or incomplete. One of the advantages of online survey technology is that we do know how many people answered some, but not all, of the questions. In fact, with EMR's online survey there were 674 completed questionnaires and another 555 partials. However, when those partials were reviewed to see how many of them had answered at least half (15) of the 30 questions, none had. In other words, of those who were motivated enough to start answering the online survey but not motivated enough to finish, none was willing to answer as many as 15 multiple choice questions.
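Those completion counts drive the hypothetical response-rate calculation discussed next; a brief sketch of the arithmetic, using the figures above:

```python
# Sketch of the completion arithmetic for the online survey discussed below.
invited = 90_000
completed = 674
partials = 555   # started but answered fewer than 15 of the 30 questions

actual_rate = completed / invited                   # ~= 0.0075 (0.75%)
potential_rate = (completed + partials) / invited   # ~= 0.0137 (about 1.3%-1.4%)
print(f"{actual_rate:.2%} actual vs {potential_rate:.2%} if every starter had finished")
```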

This gives us a simple rule of thumb for online surveys that we did not have before: evidently 15 questions is too many if one is concerned about response rate. That conclusion is based on the fact that almost as many people started EMR's 30-question online survey and quit before they were halfway through as completed it. If all who started had finished, the response rate would have been 1.3%, almost double the actual 0.75% rate. This hypothesis is also supported by the fact that MCH's recent Principal survey achieved a 1.3% response rate using a 10-question survey. So it seems that if you increase the length of your online survey from 10 to 15 questions, you risk losing half of your likely respondents. Conversely, if you limit your online survey to 10 simple questions or fewer, you are likely to get a response rate closer to the top of the online response rate range of 0.5% to 1.5%. While more research should be done to confirm this finding, it seems prudent to operate as if online surveys are most effective when limited to around 10 questions. If more than 10 questions are required, the postal mode is a better choice.

Reliability Of Survey Data

Most people in the survey business would agree that you don't know anything from a survey unless you know how reliable the survey is. A questionnaire will always produce numerical results, even if those results are meaningless. With unreliable data you run the risk of making business decisions based on survey results that don't actually mean anything. Only a test of reliability can tell you whether you should trust the results.

In simple terms, a reliable questionnaire is one that would give the same results if it were used repeatedly with the same group. In the case at hand, that would mean comparing random geographic samples of elementary school teachers and getting the same results each time. Instead of investing the time and money to do the identical survey twice, there are tests of reliability for questionnaires which indicate whether the results are meaningful. For the purpose of this investigation of postal and online surveys, we looked at split-half measures of reliability. That involved comparing the results obtained from the first half of the sample (early responders) to those of the second half (late responders). If, for example, the results from the first 300 respondents turn out to be remarkably similar to the results from the second 300, it increases our confidence that if we polled 300 more, or even 3,000 more, the results would still be the same. That is the operational definition of reliability.

EMR's Reading Market survey, in its printed/mailed form, was replicated seven times between 1999 and its latest administration in Fall 2011. Based on a review of core questions embedded in each of those surveys, it is clear that the test-retest reliability of this survey is extremely high. That is, results from year to year are strikingly similar in those areas where change would not be expected, such as educators' average years of teaching experience, the importance of Reading program alignment to standards, and the criteria driving buying decisions.

To the point of split-half reliability or consistency, results obtained from the first half of the educators responding to EMR's 2011 survey were compared to results obtained from the second half of those responding to the same postal survey. The following are some sample comparisons.

How Many Years Of Teaching Experience Do You Have? (Postal Survey)

RESPONSE           FIRST HALF   SECOND HALF   DIFFERENCE
Over 20 years      40.1%        41.1%         -1.0%
11-20 years        36.7%        37.2%         -0.5%
6-10 years         16.2%        15.5%         0.7%
3-5 years          5.4%         5.6%          -0.2%
1-2 years          1.0%         0.3%          0.7%
Less than 1 year   0.7%         0.3%          0.4%
AVERAGE            17.0 YEARS   17.3 YEARS    -0.3 YEARS

The differences range from a low of 0.2% to a high of 1.0%. How much of a difference should be flagged as meaningful? With a survey of this sample size we would usually consider a difference of 3% to 5% or more to be potentially meaningful. Using that criterion, none of the differences above is meaningful. In other words, you can operate as if the average tenure of teachers in the field is 17.0 years or 17.3 years; there is no statistical difference between the two.

Additional examples of highly consistent results on EMR's Reading survey follow.

If You Had It To Do Over, Would You Choose The Same Core Program Again? (Postal Survey)

RESPONSE   FIRST HALF   SECOND HALF   DIFFERENCE
Yes        70.2%        69.7%         0.5%
No         29.8%        30.3%         -0.5%
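The report applies a simple rule of thumb, flagging differences of 3% to 5% or more as potentially meaningful. A more formal check, not used in the report, is a two-proportion comparison; the sketch below assumes the 605 usable postal responses split into halves of roughly 300 each and applies it to the teaching-experience table above.

```python
# Sketch (not the report's method) of a more formal split-half check:
# a two-proportion z statistic, assuming half-samples of roughly 300 each
# out of the 605 usable postal responses.
from math import sqrt

def split_half_z(p1, p2, n1=300, n2=300):
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# "Over 20 years" of experience: 40.1% (first half) vs 41.1% (second half).
print(round(split_half_z(0.401, 0.411), 2))   # about -0.25, nowhere near the +/-1.96 cutoff
```

A z value this small says the first-half and second-half results are statistically indistinguishable, which is consistent with the report's simpler 3%-5% criterion.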

How Important Is It That Your Reading Program Is Aligned To Common Core Standards? (Postal Survey)

RESPONSE                      FIRST HALF   SECOND HALF   DIFFERENCE
Very important                82.7%        83.9%         -1.2%
Somewhat important            14.3%        15.1%         -0.8%
Not important                 1.0%         0.7%          0.3%
Not familiar with standards   2.0%         0.3%          1.7%

Apart From Fit With Students' Needs, What Are The Most Important Criteria Driving Your Buying Decision? (Postal Survey)

RESPONSE                           FIRST HALF   SECOND HALF   DIFFERENCE
Proof it works                     85.1%        84.2%         0.9%
Price                              42.0%        43.5%         -1.5%
Peer recommendations               25.3%        22.9%         2.4%
Reputation of brand or publisher   16.3%        16.1%         0.2%
Positive reviews/awards            11.1%        14.7%         -3.6%
Online/digital delivery            10.1%        7.2%          2.9%
Other                              6.6%         7.2%          -0.6%

Based on these question-by-question comparisons, we would say that the postal survey is highly reliable, which means the results can and should be trusted.

On the online survey side of the fence, because of the way the survey data was originally coded, it was not possible to re-run the question-by-question results comparing the first half to the second half, and thus to obtain a direct measure of split-half reliability. However, since the postal survey data proved to be highly reliable, it was possible to compare some of the key data from the postal survey to the same data collected with the online survey. Presumably, if the postal results are reliable, and the online results match closely with the postal results, then the online results should also be considered reliable. A comparison of the years of teaching experience question, postal survey vs. online survey, follows.

How Many Years Of Teaching Experience Do You Have? (Postal vs. Online)

RESPONSE        POSTAL   ONLINE   DIFFERENCE
Over 20 years   40.6%    40.8%    -0.2%
11-20 years     36.9%    35.5%    1.4%
6-10 years      15.8%    17.4%    -1.6%
3-5 years       5.5%     5.6%     -0.1%
1-2 years       0.7%     0.7%     0.0%

On this standard question the postal and online results match up very closely. Thus if the postal results for this question are reliable, the online results are similarly reliable. Additional question-by-question comparisons follow.

If You Had It To Do Over, Would You Choose The Same Core Program Again? (Postal vs. Online)

RESPONSE   POSTAL   ONLINE   DIFFERENCE
Yes        69.9%    59.1%    10.8%
No         30.1%    40.9%    -10.8%

How Important Is It That Your Reading Program Is Aligned To Common Core Standards? (Postal vs. Online)

RESPONSE                      POSTAL   ONLINE   DIFFERENCE
Very important                83.3%    87.4%    -4.1%
Somewhat important            14.7%    10.2%    4.5%
Not important                 0.8%     0.3%     0.5%
Not familiar with standards   1.2%     2.1%     -0.9%

Apart From Fit With Students' Needs, What Are The Most Important Criteria Driving Your Buying Decision? (Postal vs. Online)

RESPONSE                           POSTAL   ONLINE   DIFFERENCE
Proof it works                     84.7%    80.7%    4.0%
Price                              42.8%    43.5%    -0.7%
Peer recommendations               24.1%    24.3%    -0.2%
Reputation of brand or publisher   16.2%    16.9%    -0.7%
Positive reviews/awards            12.9%    20.8%    -7.9%
Online/digital delivery            8.6%     6.8%     1.8%
Other                              6.9%     8.5%     -1.6%

While the first comparison yielded evidence of equal reliability for the two survey modes, the second, third, and fourth comparisons showed significant differences. How satisfied are educators with their currently adopted Reading programs? The postal survey (69.9%) indicates they are, on average, very satisfied, but the online survey (59.1%) says not so much. Another example is the motivating power of positive reviews/awards when it comes to influencing purchasing decisions. The postal survey (12.9%) indicates that, on average, positive reviews/awards are not very influential, but the online survey (20.8%) says they are significantly more influential.

Of course, the two surveys reflect the opinions of two separate sets of respondents, so shouldn't we expect and tolerate differences between the two? The answer is a resounding no. In both cases the sample is intended to fairly and accurately represent the universe of elementary classroom teachers, so we have every right to expect the results to be, effectively, the same. Does it make a difference which number we trust? It certainly could, which is why we strive for reliable results. The answer to the question "How reliable does a survey need to be?" is: as reliable as possible.

More research should be done in this area, but the preliminary evidence at hand suggests that the postal survey produces more reliable results than the online version of the same survey. How much this has to do with the typically higher response rates of postal surveys is not known, but it appears to contribute to the contrast. So the postal survey gets the check mark on the reliability score.
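To show that gaps of this size exceed ordinary sampling noise, the same kind of two-proportion check used earlier can be applied across modes. The sketch below assumes essentially all 605 postal and 674 online respondents answered the core-program question, which the report does not state explicitly.

```python
# Sketch (not part of the report's analysis): is the postal vs. online gap on the
# "choose the same core program again" question larger than sampling noise?
# Assumes roughly all usable respondents answered the item (605 postal, 674 online).
from math import sqrt

p_postal, n_postal = 0.699, 605
p_online, n_online = 0.591, 674

pooled = (p_postal * n_postal + p_online * n_online) / (n_postal + n_online)
se = sqrt(pooled * (1 - pooled) * (1 / n_postal + 1 / n_online))
z = (p_postal - p_online) / se
print(round(z, 1))   # about 4.0 -- far beyond the +/-1.96 threshold for chance variation
```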

DISCUSSION

Need empirical data from the field to confirm (or disconfirm) what you believe to be true? Why not do a quick, cost-effective online survey? The lesson here is: look before you leap. How you do that survey should depend on the importance to you of cost, speed, response rate, number of questions and, ultimately, reliability of the survey results. Postal and online surveys have different profiles with respect to each of those critical variables.

The purpose of the experiment described herein was to compare the relative merits of the postal and online survey approaches. Some might suggest that, on the face of it, the printing/mailing approach is the slow, expensive, old-fashioned way, while the online approach is the fast, inexpensive, modern way to conduct survey research. It is hard to argue the issue of speed, but what about a cost comparison? And what about response rates, and the closely related issue of reliability of the data? Is there a trade-off to be concerned about between speed on the one hand and response rates and reliability on the other?

In order to provide information on current trends in the elementary Reading market segment, EMR designed a detailed survey and then mailed it on October 17, 2011 to 18,000 randomly selected educators, segmented as follows.

Sampling Matrix

JOB TITLE               ELEMENTARY   MIDDLE/JHS   TOTAL
Classroom teacher       6,000        - -          6,000
Reading teacher         3,000        3,000        6,000
Curriculum supervisor   - -          - -          6,000
TOTAL                   9,000        3,000        18,000

The survey consisted of four printed pages with a total of 30 numbered questions, of which 26 were multiple choice and 4 required a write-in response. Of the 26 multiple choice questions, 6 offered the option of writing in an "other" response.

Three weeks later, on November 8, 2011, the same 30-question Reading Market survey was replicated using the online method. This survey contained identical questions, but due to the formatting necessary for on-screen viewing, its length expanded to 16 pages or screens, compared to 4 pages for the postal version. The invitation to participate in the survey was e-mailed to 90,000 randomly selected elementary teachers (excluding the 18,000 chosen for the postal survey).

Comparing the actual cost, speed, response rates, and data reliability of the two survey modes yielded the following profile of pros and cons.

Relative Merits Of Postal vs. Online Surveys

CRITERION                     POSTAL                                              ONLINE
Cost                          $23.14 per usable response (9% less than online)    $25.22 per usable response
Speed                         85% of responses within 2 weeks                     85% of responses in less than 1 day
Response rate                 4.4% (6 times higher than online)                   0.75%
Optimum number of questions   30 questions or more                                15 questions is too many
Reliability                   High split-half and test-retest reliability         Uncertain; needs more research

In this specific test, the postal mode outpaced the online mode on four of the five key criteria, with the online mode superior only in terms of speed. If more than 10 questions are needed to get the job done, the postal mode would be the only prudent option. Given that the online mode is faster, but not cheaper, than the postal mode, it must be emphasized that there is evidently a significant trade-off between higher speed on the one hand and lower response rates and reliability on the other when the online mode is used. Since reliability is the ultimate criterion of success for any survey, an online survey may not always be the right way to go, particularly now that the process of designing and implementing online surveys has become a do-it-yourself activity.

Suppose that technology continues to evolve and, as a consequence, the cost of online surveys inevitably drops below the cost of postal surveys, making the online mode superior in terms of both cost and speed. Will that rule out the use of postal surveys in the future? Putting the all-important issue of reliability aside for the moment, the answer should still be no, because postal surveys, when properly done, allow for a richness of data analysis that is not possible when the limit is ten questions. To get the sharpest possible portrait of the target market it is necessary to look both at total responses and at many cross-tabulations of the data, elucidating important differences in responses by job title, grade level, years of experience, size of district, type of district, state, geographic region, and many other key demographic variables. To accomplish that, questions relating to those demographics must be added to the survey. If that higher level of analysis is the goal, many more than ten questions will be required.

Special Limitation Of Online Surveys In The K-12 Market

Some of EMR's most interesting and useful survey results have come from comparing the responses to the same questions made by classroom teachers, Principals, and various district-level personnel. Teachers tend to be realists about what is or is not happening in their classrooms, while Principals tend to be cheerleaders who may exaggerate the positives and minimize the negatives. Knowing those differences should cause product developers and marketers to adopt different strategies for different sub-groups within the population of educators.

In order to make those types of comparisons, EMR's rule of thumb is to have between 100 and 200 responses to analyze within each sub-group, such as teachers or Principals. For example, if the plan is to look at teachers in grades Pre-K-2, grades 3-5, and grades 6-8, as well as elementary and middle/junior high school Principals, a minimum of 100 responses (ideally 150 responses) in each of those cross-sections of job title and grade level would be required to perform a meaningful analysis. Assuming a 5% response rate for a typical postal survey of educators, 3,000 of each sub-group in the initial sample would likely ensure the desired number of responses. So EMR's sampling matrix would be as follows.

Hypothetical Sampling Matrix - Postal Survey

JOB TITLE                      GRADES PRE-K-2   GRADES 3-5   GRADES 6-8   TOTAL
Classroom or subject teacher   3,000            3,000        3,000        9,000
Elementary Principal           - -              - -          - -          3,000
Middle/Junior High Principal   - -              - -          - -          3,000
TOTAL                          3,000            3,000        3,000        15,000

Those starting numbers are likely to yield the desired 150 responses per cell within this matrix when the postal mode is employed. On the online survey side, if we assume a 1% response rate (the mid-point of the expected range of online survey response rates), the starting numbers shown in the matrix below would be needed to achieve the same results.
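The arithmetic behind these starting numbers, and behind the online matrix that follows, reduces to dividing the per-cell response goal by the assumed response rate. A minimal sketch, using the report's 150-response goal and its 5% postal and 1% online planning rates:

```python
# Illustrative sketch of the sub-group sample-size planning described above.
def names_needed(target_responses, response_rate):
    """Initial sample required to expect a given number of completed responses."""
    return round(target_responses / response_rate)

print(names_needed(150, 0.05))   # 3000 names per cell for a postal survey at 5%
print(names_needed(150, 0.01))   # 15000 names per cell for an online survey at 1%
```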

Hypothetical Sampling Matrix - Online Survey

JOB TITLE                      GRADES PRE-K-2   GRADES 3-5   GRADES 6-8   TOTAL
Classroom or subject teacher   15,000           15,000       15,000       45,000
Elementary Principal           - -              - -          - -          15,000
Middle/Junior High Principal   - -              - -          - -          15,000
TOTAL                          15,000           15,000       15,000       75,000

As previously discussed, since the typical postal survey delivers five or six times the response rate of the typical online survey, it takes five or six times as many names in the initial online survey sample to ensure the same number of total responses as the postal survey obtains. Putting aside the increased cost of a survey starting with 75,000 names rather than 15,000, there is another problem: there are barely enough middle/junior high Principals in the universe of K-12 educators from which to pull an initial sample of 15,000. [There are only around 14,000 middle and junior high schools in the entire U.S. public school market.] The situation is especially problematic at the district level, where such key job titles as federal program directors, Bilingual/ELL directors, technology coordinators, special education directors, and K-12 curriculum/instruction directors all number far fewer than 15,000. To illustrate one example of this issue, there are approximately 7,500 Bilingual/ELL directors in total. Thus an online survey of such directors, including every single one of them in the initial sample and producing a 1% response rate, is likely to yield 75 total responses (7,500 x 1%). Is that number of responses high enough to produce reliable data? Should critical business decisions be made based on such limited results? Probably not. That being the case, if 150 or more responses from Bilingual/ELL directors is the goal, a postal survey would be the best choice.

In summary, reliability is always the most important consideration when thinking about a market survey. An unreliable set of results is far worse than no empirical data at all. Trusting false positives or false negatives could lead to terrible financial consequences. If you are talking to a survey provider, always ask about sample size and the minimum number of responses needed for sound results. Ask about average response rates, and the related issue of reliability and how it will be measured. This is how you will know whether your results can be trusted. And save the questions about speed and cost for last, where they really belong.