COMPARISON OF POSTAL AND ONLINE SURVEYS: COST, SPEED, RESPONSE RATES AND RELIABILITY

Research Conducted By: Education Market Research
Robert M. Resnick, Ph.D.
February 2012
With the support of MCH Strategic Data

2012 Education Market Research and MCH Strategic Data. Copying prohibited.
EXECUTIVE SUMMARY

The intent of this study was to put a number of hypotheses about postal vs. online surveys to the test. Conventional wisdom has it that online surveys are superior to postal surveys in terms of both speed and cost-efficiency. It is also generally assumed that the reliability of the data is essentially equivalent when postal and online survey results are compared. The other issue investigated here was the effect of questionnaire length on response rates and data reliability.

The empirical evidence used to answer these questions came from a parallel administration of the same EMR survey, at the same time, to separate but similar samples of educators. One set of surveys was printed and mailed, with responses collected via return mail, while the second set was deployed via e-mail, with the surveys filled out and the responses collected online. The data obtained through this simple experiment sheds light on the relative advantages and possible limitations of both postal and online surveys in terms of speed, cost-efficiency, and data reliability.

Comparing the actual cost, speed, response rates, and data reliability of the two survey modes yielded the following profile of pros and cons.

Relative Merits Of Postal vs. Online Surveys

CRITERION | POSTAL | ONLINE
Cost | $23.14 per usable response (9% less than online) | $25.22 per usable response
Speed | 85% of responses within 2 weeks | 85% of responses in less than 1 day
Response rate | 4.4% (6 times higher than online) | 0.75%
Optimum number of questions | 30 questions or more | 15 questions is too many
Reliability | High split-half and test-retest reliability | Uncertain; needs more research

In this specific test, the postal mode outpaced the online mode on four of the five key criteria, with the online mode superior only in terms of speed. If more than 10 questions are needed to get the job done, the postal mode would be the only prudent option.
Given that the online mode is faster, but not cheaper, than the postal mode, it must be emphasized that there is evidently a significant trade-off between higher speed on the one hand and lower response rates and reliability on the other when the online mode is utilized.
INTRODUCTION

A market or customer survey project can be divided into four parts:

Four Phases Of A Survey Project

PHASE | DESCRIPTION
1 | Conceptual Phase
2 | Operational Phase
3 | Analysis Phase
4 | Implementation Phase

1. The conceptual phase, during which it is decided to whom, to how many, and when and where the surveys will be sent out; what questions will be asked; what form the questions will take; how many there will be; and in what order they will be displayed on the questionnaire.

2. The operational phase, during which the surveys are prepared and sent out to the target population, completed questionnaires are retrieved, and the data is tabulated.

3. The analysis phase, during which the tabulated data is carefully examined to determine whether the responses confirm what is already known or believed to be the case, or whether there are surprises that suggest an alternative to what was previously believed to be true. There is also an opportunity to pinpoint significant differences, if any, in opinions and needs by grade level, job title, geographic region, etc., which could cause product developers and marketers to adopt different strategies for different sub-groups within the overall population of educators.

4. The implementation phase, during which those who receive the survey results and analyses decide, based at least in part on the data, what next steps to take.

Surveys are sometimes printed and mailed, filled out with pens and pencils by the respondents, and then mailed back. Sometimes they are deployed via e-mail, with the responses collected online. Education Market Research (EMR) has, for seventeen years, primarily (although certainly not exclusively) used the printing and mailing approach to conduct educator surveys. Recent advances in technology, and the ready availability of e-mail addresses for educators, have made the online approach a plausible alternative to the postal mode.
The purpose of this paper is to compare the two approaches in the Operational Phase (as defined above) of a survey project, which is where they differ in a number of important ways. It is assumed that the Conceptual, Analysis, and Implementation phases would be much the same regardless of which type of survey was used to obtain the data.
Some might wonder why it is even necessary to consider the relative merits of the two approaches. They might suggest that, on the face of it, the printing/mailing approach is the slow, expensive, old-fashioned or analog way, while the online approach is the fast, inexpensive, modern or digital way to conduct survey research. It is hard to argue the issue of speed, but a careful cost comparison is certainly warranted. And what about response rates, and the closely related issue of reliability of the data? Is there a trade-off to be concerned about with regard to speed vs. response rates and reliability?

If it were an open-and-shut case in favor of online surveys, one would expect that all of the major survey companies would have already switched to the online mode. But that has not happened. Putting EMR aside, J.D. Power and Associates (a division of The McGraw-Hill Companies) is a good example of the continued reliance on postal surveys. The most recent J.D. Power 2012 Vehicle Reliability and Service Survey was mailed in October. The survey instrument was eight pages long with 68 numbered questions (since many questions required multiple responses, there are actually more than 68 questions). An inquiry made to J.D. Power about the survey revealed that this flagship research project is done only by mail (although there are some other surveys that J.D. Power conducts online). Is it possible that J.D. Power has tested and discovered that surveys of this extensive length do not lend themselves to online delivery?

So the intent of this study was to put a number of hypotheses about postal vs. online surveys to the test. Conventional wisdom has it that online surveys are superior to postal surveys in terms of both speed and cost-efficiency. It is also generally assumed (or possibly not considered at all) that the reliability of the data is essentially equivalent when postal and online survey results are compared.
The other issue investigated here was the effect of questionnaire length on response rates and data reliability. Do longer surveys with relatively complex questions work as well in the online mode as shorter, simpler ones? The empirical evidence used to answer these questions came from a parallel administration of the same survey, at about the same time, to separate but similar samples of educators. One set of surveys was printed and mailed, with responses collected via return mail, while the second set was deployed via e-mail, with the surveys filled out and the responses collected online. The data obtained through this simple experiment sheds light on the relative advantages and possible limitations of both postal and online surveys in terms of speed, cost-efficiency, and data reliability.
METHOD

In order to provide information on current trends in the elementary Reading market segment, EMR designed a detailed survey and mailed it on October 17, 2011 to 18,000 randomly selected educators, segmented as follows.

Sampling Matrix

JOB TITLE | ELEMENTARY | MIDDLE/JHS | TOTAL
Classroom teacher | 6,000 | … | …
Reading teacher | 3,000 | 3,000 | 6,000
Curriculum supervisor | … | … | …
TOTAL | 9,000 | 3,000 | 18,000

The survey consisted of four printed pages with a total of 30 numbered questions, of which 26 were multiple choice and 4 required a write-in response. Of the 26 multiple choice questions, 6 offered the option of writing in an "other" response.

Three weeks later, on November 8, 2011, the same 30-question Reading Market survey was replicated using the online method. This survey contained identical questions, but due to the formatting necessary for on-screen viewing, the length expanded to 16 pages (screens), compared to 4 pages for the postal version. The invitation to participate in the survey was e-mailed to 90,000 randomly selected elementary teachers (excluding the 18,000 chosen for the postal survey).

In terms of a response deadline, the postal survey was mailed (via first class mail) on October 17, 2011 with a response deadline of November 11, 2011, a 4-week window. The online survey was deployed on November 8, 2011 with a November 20, 2011 response deadline, a 12-day window.

MCH Strategic Data

Both for the postal and for the online survey, the educator names were provided by MCH Strategic Data (www.mchdata.com), a leading compiler of education market data. MCH also handled the configuration and deployment of the online survey, and the collection and tabulation of responses. MCH's database features comprehensive coverage of public, private, and parochial schools, 15,000 school districts, and nearly 5 million educators. In 2010, MCH Strategic Data acquired a fully updated version of the QED Education Database and merged it with its own comprehensive K-12 database.
MCH Strategic Data has an 83-year commitment to accurate, complete, and timely marketing databases. The company compiles the foremost databases available for business-to-institution marketing, including education, health care, government, and religion, with the education database making up its largest, most comprehensive segment. The compiling operation is centered in the MCH Research Department in Sweet Springs, Missouri. The compiling team is staffed with full-time, year-round professionals who use a wide variety of techniques to update, verify, and enhance the database. Many of the Research staff members have first-hand subject area expertise, in addition to decades of MCH compiling experience. MCH employs a wide variety of compilation methods and sources to develop the foremost education marketing data available.

* Annual telephone surveys verify school and district telephone numbers, names and addresses, fundamental attributes like enrollment and grade span, and key personnel names, and identify new and closed schools. MCH telephone-verifies information on every school district and 99% of schools.
* School rosters and class schedules are used to add, delete, or verify millions of educator names and job functions.
* Published and web-based sources are used to validate information, rectify discrepancies, and verify teacher names and job functions.
* State directories and licensing files assure completeness of the database and add comprehensive feature attributes.
* Federal Department of Education statistics provide ethnicity, funding, and other attributes, and verify links between schools and districts.
* Postal address verification ensures that mailing addresses are always deliverable and receive appropriate postal discounts.
* E-mail address verification assures deliverability of e-mail addresses and maintains CAN-SPAM compliance.
* Geospatial processing identifies and rectifies questions and issues arising from geo-location analysis.
* Customer-provided information from the GetThere™ Guarantee program quickly identifies questionable addresses, which allows MCH to investigate and update information in its database.
* Internal Quality Control Audits include dozens of automated and manual checks to identify and correct errors, preventing them from being added to MCH's all-inclusive database.

The MCH compiling professionals schedule these and other compiling processes to ensure that the MCH Strategic Data education database is as comprehensive, complete, and accurate as possible.
RESULTS

Survey Costs

In most cases, cost is the primary concern of the client who requests a market survey. If alternate methods are available to achieve the same goal, it is almost inevitable that the less expensive option will be chosen. The conventional wisdom seems to be that online surveys, which avoid all printing and postage costs, are less expensive to conduct than postal surveys. In the case of EMR's two test surveys, it was just the opposite: the postal mode was actually less expensive than the online mode.

The cost for obtaining a mailing list, printing and mailing the surveys (first class mail), return postage, and data tabulation was just over $14,000. The cost for obtaining an e-mail list, formatting the survey for online viewing, deploying the survey, collecting the online responses, and tabulating the data was just over $17,000. Of course, the postal survey was mailed to a list of 18,000 names while the online survey invitation was e-mailed to 90,000 names. It is no surprise, then, that the online survey cost was higher. If the lists had been of equal size (18,000 names in both cases), the online survey cost would have been reduced to something in the neighborhood of $5,000.

However, the goal was to produce a similar number of responses from each survey mode. Since postal surveys generally yield higher response rates than online surveys, it was necessary to increase the size of the online sample in order to ensure a number of responses roughly equal to that which the mail survey was expected to deliver. With a target number of completed surveys in mind, it was estimated going in that a mailing of 18,000 (with a 3%-5% response rate) would be sufficient to produce that result. On the other side of the coin, since online survey response rates are typically between 0.5% and 1.5%, it was determined that a deployment of 90,000 would be necessary to match the postal survey result. As it turned out, that forecast was right on target for both modes.
Given that the postal survey actually produced 605 usable returns and the online survey produced 674, it is fair to consider them equivalent in terms of effectiveness despite the wide difference in initial sample size. That being the case, it is also fair to say that the postal survey was less expensive than the online survey, both in terms of total cost and in terms of cost per usable response. The cost per usable response computes to $23.14 for the postal survey ($14,000/605 responses) and $25.22 for the online survey ($17,000/674 responses). On a percentage basis, the online survey's total cost was 21% higher, and on a per-response basis it was 9% higher, compared to the postal survey. So if cost is of the essence, the postal survey gets the check mark on this score.
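The cost arithmetic above can be reproduced with a short script. The dollar figures are the "just over" amounts reported in the text, so the results are estimates rounded to the cent:

```python
# Cost per usable response and the online cost premium, using the
# approximate figures reported above ($14,000 and $17,000 are "just
# over" amounts, so the results are estimates).
postal_cost, postal_responses = 14_000, 605
online_cost, online_responses = 17_000, 674

postal_per_response = postal_cost / postal_responses  # ~$23.14
online_per_response = online_cost / online_responses  # ~$25.22

# Premiums paid for the online mode, in total and per usable response.
total_premium = (online_cost - postal_cost) / postal_cost  # ~21%
per_response_premium = (online_per_response - postal_per_response) / postal_per_response  # ~9%

print(f"Postal: ${postal_per_response:.2f} per usable response")
print(f"Online: ${online_per_response:.2f} per usable response")
print(f"Online total-cost premium: {total_premium:.0%}")
print(f"Online per-response premium: {per_response_premium:.0%}")
```

Note that the per-response premium (9%) is much smaller than the total-cost premium (21%) because the larger online sample also produced more usable responses.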
Elapsed Time Required To Collect Survey Data

The postal version of EMR's survey allowed a 4-week window from mail date to response deadline. The response pattern for this survey, as is typical for such projects, was relatively few responses received through the first week, the vast majority coming in at the end of the first week and into the second week, and then responses gradually trailing off in the third and fourth weeks. Around 85% were received within the first two weeks, and 15% in the last two weeks. Additional responses, one or two at a time, continued to come in after the response deadline.

The online version of EMR's identical survey allowed a 12-day window for responses. As it turned out, virtually all of the responses were received within the first 24 hours. None were received after the second day. So the same 85% completion level which took two weeks to reach with the postal survey took less than one day to reach with the online survey. Obviously, the online survey proved to be far superior in terms of speed of data collection. If speed is of the essence, clearly the online mode is the only choice.

Response Rates

EMR's postal survey sample included elementary classroom teachers, Reading teachers, Principals, and curriculum supervisors, while the online version of that same survey included only elementary classroom teachers. Comparing apples to apples, the response rate for elementary classroom teachers was 4.4% in the postal survey and 0.75% in the online survey. In other words, the response rate was six times higher in the postal survey than in the online survey. Again, the two survey modes can be considered equally effective in terms of total usable responses produced, but the postal survey was five times more efficient, producing a roughly equal number of usable responses from an initial sample one fifth the size of the online survey sample. Are those response rates typical of postal and online surveys?
Based on EMR's experience, a 4.4% response rate is actually on the low side of the continuum for a postal survey. Job titles such as classroom teacher, school librarian, and department chairperson have often produced response rates as high as 8% to 10% on prior EMR surveys of educators. In terms of online surveys, MCH Strategic Data's most recent project, a survey of Principals, obtained a 1.3% response rate, which is considerably better than EMR's Reading survey result (0.75%). Part of the reason for that difference can be attributed to the difference in length (number of questions) between the EMR and MCH surveys. The interaction of survey length and response rate is discussed in a separate section of this report.
All of the anecdotal evidence points to an expected response rate range for postal surveys of between 3% and 10%. Online surveys should be expected to yield response rates between 0.5% and 1.5%. At either extreme, comparing equivalent surveys, one would anticipate that the postal response rate will be around six times higher than the online response rate. If a high response rate is essential, the postal mode is the best choice.

Interaction of Survey Length And Response Rates

As a general rule of thumb, the longer the survey (more questions), the lower the response rate. It is possible to get people to respond to just a handful of quick questions with little or no monetary incentive. On the other hand, as the survey gets longer, it takes a stronger incentive to ensure that the response rate will be within an acceptable range. The question here is, what is the effect of survey length on response rates for the postal and online modes? This test of the same 30-question survey via both modes revealed some interesting answers.

When people receive a printed survey via regular first class mail, it makes sense to think that many of them toss it in the trash without even opening it, just as most people delete an e-mail invitation to participate in an online survey without even opening it. Those who do open the postal survey are able to look it over, scan the number and complexity of the questions, and decide if they are willing to take the time and effort to fill it out. If they decide to go forward, they also have the option of answering only some of the questions and sending back an incomplete questionnaire. Similarly, those who open the e-mail invitation can decide whether or not to click the link to the survey. If they opt to go to the survey, the process then becomes more opaque for them, because they cannot readily see the number and complexity of the questions, which are presented one screen at a time.
[A "survey completed" percentage bar was included at the bottom of the survey, but it did not state how many questions remained to be answered.] If they start answering and, at some point, decide that it is taking too much time, or that there are too many questions, they can elect to stop, and their questionnaire will be incomplete.

There is no way to know how many people actually start filling out a postal survey and then fail to put it in the return mail. We only know how many are returned, complete or incomplete. One of the advantages of online survey technology is that we do know how many people answered some, but not all, of the questions. In fact, with EMR's online survey there were 674 completed questionnaires, and another 555 partials. However, when those partials were reviewed to see how many of them had answered at least half (15) of the 30 questions, none had. In other words, of those who were motivated enough to start answering the online survey, but not motivated enough to finish, none was willing to answer as many as 15 multiple choice questions.
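The completion figures above imply the following arithmetic (a sketch using the reported counts; the computed hypothetical rate of roughly 1.37% is what the report rounds to 1.3%):

```python
# Completion arithmetic for the online survey, using the figures above:
# 90,000 invitations, 674 completes, and 555 partials (none of whom
# answered even 15 of the 30 questions).
invitations = 90_000
completes = 674
partials = 555

actual_rate = completes / invitations                  # ~0.75%
potential_rate = (completes + partials) / invitations  # ~1.37% if every starter had finished

print(f"Actual response rate:          {actual_rate:.2%}")
print(f"If every starter had finished: {potential_rate:.2%}")
```

The gap between the two rates is the cost, in respondents, of losing everyone who abandoned the 30-question survey partway through.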
This gives us a simple rule of thumb for online surveys that we did not have before: evidently 15 questions is too many if one is concerned about response rate. That is based on the fact that almost as many people started EMR's 30-question online survey and quit before they were halfway through as completed it. If all who started had finished, the response rate would have been 1.3%, almost double the actual 0.75% rate. This hypothesis is also supported by the fact that MCH's recent Principal survey achieved a 1.3% response rate utilizing a 10-question survey. So it seems that if you increase the length of your online survey from 10 to 15 questions, you risk losing half of your likely respondents. Conversely, if you limit your online survey to 10 simple questions or fewer, you are likely to get a response rate closer to the top of the online response rate range of 0.5% to 1.5%. While more research should be done to confirm this finding, it seems prudent to operate as if online surveys are most effective when limited to around 10 questions. If more than 10 questions are required, the postal mode is a better choice.

Reliability Of Survey Data

Most people in the survey business would agree that you don't know anything from a survey unless you know how reliable the survey is. A questionnaire will always produce numerical results, even if those results are meaningless. With unreliable data you run the risk of making business decisions based on survey results that don't actually mean anything. Only a test of reliability can tell you if you should trust the results. In simple terms, a reliable questionnaire is one that would give the same results if it were used repeatedly with the same group. In the case at hand, that would mean comparing random geographic samples of elementary school teachers and getting the same results each time.
Instead of investing the time and money to do the identical survey twice, there are tests of reliability for questionnaires which indicate whether the results are meaningful. For the purpose of this investigation of postal and online surveys, we looked at split-half measures of reliability. That involved comparing the results obtained from the first half of the sample (early responders) to those of the second half (late responders). If, for example, the results from the first 300 respondents turn out to be remarkably similar to the results from the second 300, it increases our confidence that if we polled 300 more, or even 3,000 more, the results would still be the same. That is the operational definition of reliability.

EMR's Reading Market survey, in its printed/mailed form, was replicated seven times between 1999 and its latest administration in the Fall of 2011. Based on a review of core questions embedded in each of those surveys, it is clear that the test-retest reliability of this survey is extremely high. That is, results from year to year are strikingly similar in those areas where change would not be expected, such as educators' average years of teaching experience, the importance of Reading program alignment to standards, and criteria driving buying decisions.
To the point of split-half reliability, or consistency: results obtained from the first half of the educators responding to EMR's 2011 survey were compared to results obtained from the second half of those responding to the same postal survey. The following are some sample comparisons.

How Many Years Of Teaching Experience Do You Have? (Postal Survey)

RESPONSE | FIRST HALF | SECOND HALF | DIFFERENCE
Over 20 years | 40.1% | 41.1% | -1.0%
11-20 years | 36.7% | 37.2% | -0.5%
6-10 years | 16.2% | 15.5% | 0.7%
3-5 years | 5.4% | 5.6% | -0.2%
1-2 years | 1.0% | 0.3% | 0.7%
Less than 1 year | 0.7% | 0.3% | 0.4%
AVERAGE | 17.0 YEARS | 17.3 YEARS | -0.3 YEARS

The differences range from a low of 0.2% to a high of 1.1%. How much of a difference should be flagged as meaningful? With a survey of this sample size we would usually consider a difference of 3% to 5% or more to be potentially meaningful. Using that criterion, none of the differences above are meaningful. In other words, you can operate as if the average tenure of teachers in the field is 17.0 years or 17.3 years; there is no statistical difference between the two.

Additional examples of highly consistent results on EMR's Reading survey are as follows.

If You Had It To Do Over, Would You Choose The Same Core Program Again? (Postal Survey)

RESPONSE | FIRST HALF | SECOND HALF | DIFFERENCE
Yes | 70.2% | 69.7% | 0.5%
No | 29.8% | 30.3% | -0.5%
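The split-half comparison above can be mechanized as a short check. In this sketch, the function name and the 3-point cutoff are illustrative (the report treats differences of 3 to 5 percentage points as the threshold of potential meaningfulness), and the "11-20 years" bracket label is inferred from the adjacent brackets:

```python
# Split-half reliability check: compare percentage distributions from
# early responders (first half) vs. late responders (second half) and
# flag any option whose difference reaches a threshold.
def split_half_differences(first_half, second_half, threshold=3.0):
    """Return per-option differences (in percentage points) and the
    options whose absolute difference meets or exceeds `threshold`."""
    diffs = {opt: first_half[opt] - second_half[opt] for opt in first_half}
    flagged = [opt for opt, d in diffs.items() if abs(d) >= threshold]
    return diffs, flagged

# Figures from the teaching-experience comparison in the postal survey.
first = {"Over 20 years": 40.1, "11-20 years": 36.7, "6-10 years": 16.2,
         "3-5 years": 5.4, "1-2 years": 1.0, "Less than 1 year": 0.7}
second = {"Over 20 years": 41.1, "11-20 years": 37.2, "6-10 years": 15.5,
          "3-5 years": 5.6, "1-2 years": 0.3, "Less than 1 year": 0.3}

diffs, flagged = split_half_differences(first, second)
print(flagged)  # prints [] -- no option reaches the 3-point threshold
```

An empty flagged list is the programmatic equivalent of the conclusion drawn in the text: none of the split-half differences on this question is meaningful.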
How Important Is It That Your Reading Program Is Aligned To Common Core Standards? (Postal Survey)

RESPONSE | FIRST HALF | SECOND HALF | DIFFERENCE
Very important | 82.7% | 83.9% | -1.2%
Somewhat important | 14.3% | 15.1% | -0.8%
Not important | 1.0% | 0.7% | 0.3%
Not familiar with standards | 2.0% | 0.3% | 1.7%

Apart From Fit With Students' Needs, What Are The Most Important Criteria Driving Your Buying Decision? (Postal Survey)

RESPONSE | FIRST HALF | SECOND HALF | DIFFERENCE
Proof it works | 85.1% | 84.2% | 0.9%
Price | 42.0% | 43.5% | -1.5%
Peer recommendations | 25.3% | 22.9% | 2.4%
Reputation of brand or publisher | 16.3% | 16.1% | 0.2%
Positive reviews/awards | 11.1% | 14.7% | -3.6%
Online/digital delivery | 10.1% | 7.2% | 2.9%
Other | 6.6% | 7.2% | -0.6%

Based on these question-by-question comparisons, we would say that the postal survey is highly reliable, which means the results can and should be trusted.

On the online survey side of the fence, because of the way the survey data was originally coded, it was not possible to re-run the question-by-question results comparing the first half to the second half, and thus obtain a direct measure of split-half reliability. However, since the postal survey data proved to be highly reliable, it was possible to compare some of the key data from the postal survey to the same data collected with the online survey. Presumably, if the postal results are reliable, and the online results match closely with the postal results, then the online results should also be considered reliable. A comparison of the years of teaching experience question, postal survey vs. online survey, follows.
How Many Years Of Teaching Experience Do You Have? (Postal vs. Online)

RESPONSE | POSTAL | ONLINE | DIFFERENCE
Over 20 years | 40.6% | 40.8% | -0.2%
11-20 years | 36.9% | 35.5% | 1.4%
6-10 years | 15.8% | 17.4% | -1.6%
3-5 years | 5.5% | 5.6% | -0.1%
1-2 years | 0.7% | 0.7% | 0.0%

On this standard question the postal and online results match up almost perfectly. Thus, if the postal results for this question are reliable, the online results are similarly reliable. Additional question-by-question comparisons follow.

If You Had It To Do Over, Would You Choose The Same Core Program Again? (Postal vs. Online)

RESPONSE | POSTAL | ONLINE | DIFFERENCE
Yes | 69.9% | 59.1% | 10.8%
No | 30.1% | 40.9% | -10.8%

How Important Is It That Your Reading Program Is Aligned To Common Core Standards? (Postal vs. Online)

RESPONSE | POSTAL | ONLINE | DIFFERENCE
Very important | 83.3% | 87.4% | -4.1%
Somewhat important | 14.7% | 10.2% | 4.5%
Not important | 0.8% | 0.3% | 0.5%
Not familiar with standards | 1.2% | 2.1% | -0.9%
Apart From Fit With Students' Needs, What Are The Most Important Criteria Driving Your Buying Decision? (Postal vs. Online)

RESPONSE | POSTAL | ONLINE | DIFFERENCE
Proof it works | 84.7% | 80.7% | 4.0%
Price | 42.8% | 43.5% | -0.7%
Peer recommendations | 24.1% | 24.3% | -0.2%
Reputation of brand or publisher | 16.2% | 16.9% | -0.7%
Positive reviews/awards | 12.9% | 20.8% | -7.9%
Online/digital delivery | 8.6% | 6.8% | 1.8%
Other | 6.9% | 8.5% | -1.6%

While the first comparison yielded evidence of equal reliability for the two survey modes, the second, third, and fourth comparisons showed significant differences. How satisfied are educators with their currently adopted Reading programs? The postal survey (69.9%) indicates they are, on average, very satisfied, but the online survey (59.1%) says not so much. Another example is the motivating power of positive reviews/awards when it comes to influencing purchasing decisions. The postal survey (12.9%) indicates that, on average, positive reviews/awards are not very influential, but the online survey (20.8%) says they are significantly more influential.

Of course, the two surveys reflect the opinions of two separate sets of respondents, so shouldn't we expect and tolerate differences between the two? The answer is a resounding no. In both cases the sample is intended to fairly and accurately represent the universe of elementary classroom teachers, so we have every right to expect the results to be, effectively, the same. Does it make a difference which number we trust? It certainly could, which is why we strive for reliable results. The answer to the question "how reliable does a survey need to be?" is: as reliable as possible. More research should be done in this area, but the preliminary evidence at hand suggests that the postal survey produces more reliable results than the online version of the same survey. How much this has to do with the typically higher response rates on postal surveys is not known, but it appears to contribute to the contrast.
So the postal survey gets the check mark on the reliability score.
DISCUSSION

Need empirical data from the field to confirm (or disconfirm) what you believe to be true? Why not do a quick, cost-effective online survey? The lesson here is: look before you leap. How you do that survey should depend on the importance to you of cost, speed, response rate, number of questions and, ultimately, reliability of survey results. Postal and online surveys have different profiles with respect to each of those critical variables.

The purpose of the experiment described herein was to compare the relative merits of the postal and online survey approaches. Some might suggest that, on the face of it, the printing/mailing approach is the slow, expensive, old-fashioned way, while the online approach is the fast, inexpensive, modern way to conduct survey research. It is hard to argue the issue of speed, but what about a cost comparison? And what about response rates, and the closely related issue of reliability of the data? Is there a trade-off to be concerned about with regard to speed vs. response rates and reliability?

In order to provide information on current trends in the elementary Reading market segment, EMR designed a detailed survey and mailed it on October 17, 2011 to 18,000 randomly selected educators, segmented as follows.

Sampling Matrix

JOB TITLE | ELEMENTARY | MIDDLE/JHS | TOTAL
Classroom teacher | 6,000 | … | …
Reading teacher | 3,000 | 3,000 | 6,000
Curriculum supervisor | … | … | …
TOTAL | 9,000 | 3,000 | 18,000

The survey consisted of four printed pages with a total of 30 numbered questions, of which 26 were multiple choice and 4 required a write-in response. Of the 26 multiple choice questions, 6 offered the option of writing in an "other" response. Three weeks later, on November 8, 2011, the same 30-question Reading Market survey was replicated using the online method.
This survey contained identical questions, but due to the formatting necessary for on-screen viewing, the length expanded to 16 pages (screens), compared to 4 pages for the postal version. The invitation to participate in the survey was e-mailed to 90,000 randomly selected elementary teachers (excluding the 18,000 chosen for the postal survey).
Comparing the actual cost, speed, response rates, and data reliability of the two survey modes yielded the following profile of pros and cons.

Relative Merits Of Postal vs. Online Surveys

CRITERION | POSTAL | ONLINE
Cost | $23.14 per usable response (9% less than online) | $25.22 per usable response
Speed | 85% of responses within 2 weeks | 85% of responses in less than 1 day
Response rate | 4.4% (6 times higher than online) | 0.75%
Optimum number of questions | 30 questions or more | 15 questions is too many
Reliability | High split-half and test-retest reliability | Uncertain; needs more research

In this specific test, the postal mode outpaced the online mode on four of the five key criteria, with the online mode superior only in terms of speed. If more than 10 questions are needed to get the job done, the postal mode would be the only prudent option. Given that the online mode is faster, but not cheaper, than the postal mode, it must be emphasized that there is evidently a significant trade-off between higher speed and lower response rates and reliability when the online mode is used.

Since reliability is the ultimate criterion of success for any survey, an online survey may not always be the right way to go, particularly now that the process of designing and implementing online surveys has become a do-it-yourself activity. If we suppose that technology will continue to evolve, and that as a consequence the cost of online surveys will inevitably drop below the cost of postal surveys, making online superior both in terms of cost and speed, will that rule out the use of postal surveys in the future? Putting the all-important issue of reliability aside for the moment, the answer should still be no, because postal surveys, when properly done, allow for a richness of data analysis that is not possible when the limit is ten questions.
To get the sharpest possible portrait of the target market, it is necessary to look both at total responses and at many cross-tabulations of the data, elucidating important differences in responses by job title, grade level, years of experience, size of district, type of district, state, geographic region, and many other key demographic variables. To accomplish that, questions covering those demographics must be added to the survey. If that higher level of analysis is the goal, many more than ten questions will be required.
Special Limitation of Online Surveys in the K-12 Market

Some of EMR's most interesting and useful survey results have come from comparing the responses, to the same questions, made by classroom teachers, Principals, and various district-level personnel. Teachers tend to be realists as far as what is or is not happening in their classrooms, while Principals tend to be cheerleaders who may exaggerate the positives and minimize the negatives. Knowing those differences should cause product developers and marketers to adopt different strategies for different sub-groups within the population of educators. In order to make those types of comparisons, EMR's rule of thumb is to have between 100 and 200 responses to analyze within each sub-group, such as teachers or Principals. For example, if the plan is to look at teachers in grades Pre-K-2, grades 3-5, and grades 6-8, as well as elementary and middle/junior high school Principals, a minimum of 100 responses (ideally 150 responses) in each of those cross-sections of job title and grade level would be required to perform a meaningful analysis. Assuming a 5% response rate for a typical postal survey of educators, 3,000 of each sub-group in the initial sample would likely ensure the desired number of responses. So EMR's sampling matrix would be as follows.

Hypothetical Sampling Matrix - Postal Survey

JOB TITLE                      GRADES PRE-K-2   GRADES 3-5   GRADES 6-8    TOTAL
Classroom or subject teacher            3,000        3,000        3,000    9,000
Elementary Principal                        -            -            -    3,000
Middle/Junior High Principal                -            -            -    3,000
TOTAL                                   3,000        3,000        3,000   15,000

Those starting numbers are likely to yield the desired 150 responses per cell within this matrix when the postal mode is employed. On the online survey side, we could assume a 1% response rate (the mid-point of the expected range of online survey response rates), so the following starting numbers would be needed to achieve the same results.
Hypothetical Sampling Matrix - Online Survey

JOB TITLE                      GRADES PRE-K-2   GRADES 3-5   GRADES 6-8    TOTAL
Classroom or subject teacher           15,000       15,000       15,000   45,000
Elementary Principal                        -            -            -   15,000
Middle/Junior High Principal                -            -            -   15,000
TOTAL                                  15,000       15,000       15,000   75,000

As previously discussed, since the typical postal survey delivers five or six times the response rate of the typical online survey, it takes five or six times as many names in the initial online survey sample to ensure the same number of total responses as obtained in the postal survey. Putting aside the increased cost of a survey starting with 75,000 names rather than 15,000, there is another problem. There are barely enough middle/junior high Principals in the universe of K-12 educators from which to pull an initial sample of 15,000. [There are only around 14,000 middle and junior high schools in the entire U.S. public school market.] The situation is especially problematic at the district level, where such key job titles as federal program directors, Bilingual/ELL directors, technology coordinators, special education directors, and K-12 curriculum/instruction directors all number far fewer than 15,000. To illustrate, there are approximately 7,500 Bilingual/ELL directors in total. Thus an online survey of such directors, even one including every single one of them in the initial sample, would at a 1% response rate be likely to yield only about 75 total responses (7,500 x 1%). Is that number of responses high enough to produce reliable data? Should critical business decisions be made based on such limited results? Probably not. That being the case, if the goal is 150 or more responses from Bilingual/ELL directors, a postal survey would be the best choice. In summary, reliability is always the most important consideration when thinking about a market survey. An unreliable set of results will be far worse than no empirical data at all.
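The arithmetic behind both sampling matrices reduces to a one-line calculation. A minimal sketch, using the response rates assumed in the text (5% postal, 1% online) and EMR's rule of thumb of roughly 150 responses per cell:

```python
import math

def required_sample(target_responses: int, response_rate: float) -> int:
    """Initial names needed in a cell to expect `target_responses` back."""
    return math.ceil(target_responses / response_rate)

# Postal at 5% vs. online at 1%, targeting 150 responses per cell
print(required_sample(150, 0.05))   # 3,000 names per postal cell
print(required_sample(150, 0.01))   # 15,000 names per online cell

def expected_responses(universe_size: int, response_rate: float) -> int:
    """Expected responses if every name in the universe is invited."""
    return round(universe_size * response_rate)

# With only ~7,500 Bilingual/ELL directors in existence, an online survey
# inviting every one of them at a 1% response rate yields about 75
# responses, half the 150-per-cell target.
print(expected_responses(7_500, 0.01))
```

This makes the scarce-universe problem concrete: whenever the required sample exceeds the number of people who hold the job title, no amount of additional names can close the gap, and only a higher-response-rate mode can.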
Trusting false positives or false negatives could lead to terrible financial consequences. If you are talking to a survey provider, always ask about sample size and the minimum number of responses needed for sound results. Ask about average response rates, and the related issue of reliability and how it will be measured. This is how you will know if your results can be trusted. And save the questions about speed and cost for last, where they really belong.