Likelihood of Participating in Mail Survey Research: Business Respondents' Perspectives

Thomas V. Greer, Nuchai Chuchinprakarn, and Sudhindra Seshadri

This research used a survey-on-surveys approach to study business respondents' perspectives on mail surveys. Two studies were conducted. The results of the first study showed that day of the week had no effect on the likelihood of responding to a mail survey. Short questionnaires were preferred over long ones. The content of the study was the most important factor in stimulating response rates, followed by sponsorship of the study and postage-paid reply envelopes. Prenotification and follow-up appeared to be the least important factors. These results were supported by the results of the second study, which also indicated that researchers should use noncomparative scales or open-ended questions when asking respondents for facts and use comparative scales or fixed-alternative responses when asking respondents for opinions or numbers. © 2000 Elsevier Science Inc. All rights reserved.

Address correspondence to Dr. Thomas V. Greer, Department of Marketing, Van Munching Hall, School of Business, University of Maryland, College Park, MD 20742-1815. The authors thank Bashar Hijazi and Sachith Wijetunga for their assistance in conducting this research.

Industrial Marketing Management 29, 97-109 (2000). PII S0019-8501(98)00038-8

INTRODUCTION

The mail survey is a popular research method among academic and commercial researchers. This popularity is due to the technique's various advantages. The mail survey is considered an essential tool for industrial researchers to gather information from busy business executives. Its cost is relatively low. The method also allows researchers to obtain a large amount of information from a large sample, gives respondents time to answer, allows respondents to remain anonymous, helps reduce interviewer bias, and has geographic flexibility [1-3].

However substantial these advantages may be, researchers are often concerned with potential bias in survey results due to low response rates. Over the years, a large number of studies have used different methods to examine response inducement factors. These include reviews of past studies on methods of improving response rates [2, 4-6], meta-analyses [7-9], qualitative recommendations on how to improve response rates [10, 11], and experimental research on the effects of key inducement factors [1, 12-16]. Experimental research appears to be the most popular technique for examining response inducement factors, which include survey sponsorship, cover letter, color of questionnaire paper, anonymity, prenotification, follow-up, monetary and nonmonetary incentives, type of postage, and personalization. However, the limitation of experimental research is that it is difficult to investigate several inducement factors simultaneously in a single study. In addition, most of these investigations were embedded in other, larger studies.

THOMAS V. GREER is Professor of Marketing at the University of Maryland at College Park, College Park, Maryland. NUCHAI CHUCHINPRAKARN is a doctoral candidate in marketing at the University of Maryland at College Park. SUDHINDRA SESHADRI is a management consultant in Silver Spring, Maryland.
Another type of study uses the survey-on-surveys approach to examine respondents' attitudes and preferences toward mail survey participation. Survey-on-surveys research can provide insights into industrial respondents' behavior. It can be used to probe what respondents like and dislike about mail surveys and which inducement factors are most important and most likely to influence them to respond. In addition, a large amount of information can be obtained easily and directly from respondents. Moreover, this technique can be used to investigate simultaneously respondents' attitudes and preferences toward a large number of inducement factors and their relative importance. However, the method is likely to generate a low response rate because the topic of the study (i.e., response behavior) may not interest the recipients of the questionnaires, especially when the recipients are business executives. A few past studies have used survey-on-surveys research directed at consumer respondents. For example, studies have examined consumers' willingness toward survey methods and participation [17], attitudes toward issues of informed consent in research [18], and personal characteristics of refusers and their motives for refusal [19]. Diamantopoulos and Schlegelmilch [20] appear to be the first to have used the survey-on-surveys approach to report the likelihood of mail survey participation among industrial respondents. The purpose of this research was to use the survey-on-surveys approach to study business respondents' perceptions of mail surveys. Two separate studies were conducted. The first study was designed to examine the effects of the day of the week on which the respondent received the questionnaire, as well as the length of the questionnaire, on the likelihood of participating in a mail survey. We also examined the relative importance of various response inducement factors.
The second study was carried out to investigate respondents' preferences toward various aspects of questionnaire design as well as to check some of the findings of the first study.
LITERATURE REVIEW

Theoretical View of Mail Surveys

Mail surveys can be viewed as a social exchange between the researchers and the recipients of the questionnaires. The decision to cooperate is based on an evaluation of the ratio between the perceived rewards and the perceived costs of participating in a mail survey [21, 22]. Researchers must try to maximize the recipients' rewards and minimize their costs, while establishing trust that the rewards will be received. A reward can be either intangible or tangible. Intangible rewards may include stressing the importance of the recipient, emphasizing the benefits the recipient would receive, using sponsorship as an appeal, showing positive regard for the recipient, giving written appreciation, positioning the recipient as an expert, making the questionnaire interesting, expending effort through preliminary notification and follow-up, using personalization with real signatures, and using individual salutations. Tangible rewards may include monetary or nonmonetary incentives offered as a symbol of trust. The cost component may include time, energy, effort, and out-of-pocket money. Related factors are the length of the questionnaire, the sensitivity of the subject, subordination to the researcher, the use of deadlines to put pressure on recipients, and monetary costs related to furnishing envelopes and stamps [21, 22]. Apart from perceived rewards and costs, recipients' cooperative behavior will also depend on other situational factors, such as characteristics of the recipient's physical and social surroundings, time, task definition, or antecedent states [21, 23].

Industrial Populations

Industrial populations refer to respondents who receive questionnaires at their place of employment [24].
Because of factors such as their preoccupation with work, the confidentiality of information, and company rules and policies, industrial populations are less likely than consumer populations to respond to survey questionnaires. The likelihood of low response rates led to a surge in studies of response inducement factors directed at industrial respondents in the 1990s. Despite differences between the two populations, past research has shown similarities in the effects of most response inducement factors. Both audiences were found to be positively affected by the use of follow-up, monetary and nonmonetary incentives, and survey sponsorship, but not by the use of deadlines, appeals, and colored paper. Questionnaire length was also found to be ineffective for both audiences; however, only one study on length was directed at industrial populations. Inconsistent results were found in both segments for the effects of prenotification, personalization, and delivery method. The difference between the two segments was that anonymity was found to be effective among industrial populations but not among consumer populations [2, 4-6, 9]. Researchers have tried to stimulate industrial respondents' cooperative behavior by increasing the level of perceived benefit and decreasing the level of cost. Numerous experimental studies of inducement factors have been reported in the literature. Support was found for anonymity [25, 26], university sponsorship [1, 14, 15], stamped return envelopes [27], monetary and some nonmonetary incentives [14, 16, 28-31], and follow-up [32-34]. Other factors, such as appeals [31, 35, 36], handwritten postscripts [37, 38], personalization [39, 40], offering survey results [16, 37], and cover letters [41-43], were found to be ineffective. The effect of prenotification appears to be highly inconsistent [13, 41, 44-46]. Table 1 presents key experimental studies of response inducement factors directed at industrial populations.
TABLE 1. Industrial Mail Surveys: Effects of Response Inducement Factors

Questionnaire length:
- Jobber (1989) [54]: nine-page vs. five-page questionnaires. Not significant.

Cover letter:
- Albaum and Strandskov (1989) [41]: describing the project vs. control group. Not significant.
- Jobber, Birro, and Sanderson (1988) [42]: describing the project vs. control group. Not significant.
- Parasuraman (1981) [43]: brief vs. detailed cover letter. Not significant.

Anonymity:
- Futrell and Hise (1982) [25]: anonymity vs. control group. Significant.
- Tyagi (1989) [26]: anonymity vs. control group. Significant.

Personalization:
- Clark and Kaminski (1990) [39]: handwritten cover letter vs. form cover letter. Significant.
- Kimball (1961) [40]: personal salutation vs. "Dear Sir". Not significant.

Egoistic appeals:
- Childers, Pride, and Ferrell (1980) [21]: egoistic appeal vs. social utility appeal vs. help-the-sponsor appeal vs. control. Not significant.
- Kerin and Harvey (1976) [36]: egoistic appeal vs. altruistic appeal. Not significant.
- Schneider and Johnson (1995) [31]: egoistic appeal vs. social utility appeal vs. help-the-sponsor appeal. Not significant.
- Tyagi (1989) [26]: egoistic appeal vs. altruistic appeal. Significant.

Handwritten postscripts:
- Jobber and Sanderson (1985) [33]: handwritten postscript (offer of results) vs. typed postscript (offer of results) vs. no postscript (offer in body of letter). Not significant.
- Pressley (1978) [38]: handwritten postscript (personal thanks) vs. control group. Not significant.

University sponsorship:
- Faria and Dickinson (1992) [14]: university sponsorship vs. commercial sponsorship. Significant.
- Faria and Dickinson (1996) [1]: university sponsorship vs. commercial sponsorship. Significant.
- Greer and Lohtia (1994) [15]: university sponsorship vs. commercial sponsorship vs. honor society vs. control. Significant.

Stamped return envelopes:
- Veiga (1984) [27]: stamped return envelopes vs. business reply envelopes. Significant.

Monetary incentives:
- Angur and Nataraajan (1995) [28]: inclusion of $1 bill vs. control group. Significant.
- Armstrong and Yokum (1994) [29]: inclusion of $1 bill vs. control group. Significant.
- Chawla, Balakrishnan, and Smith (1992) [30]: inclusion of $1 bill vs. control group. Significant.
- Schneider and Johnson (1995) [31]: inclusion of $1 bill vs. control group. Significant.

Nonmonetary incentives:
- Angur and Nataraajan (1995) [28]: lottery prize giveaway vs. control group. Significant.
- Faria and Dickinson (1992) [1]: offering a donation to charity vs. control group. Significant.
- Jobber and Sanderson (1985) [37]: offering of survey results vs. control group. Significant in the opposite direction.
- Kalafatis and Tsogas (1994) [16]: inclusion of a trade or academic article vs. control group. Significant. Offering of survey results vs. control group. Not significant.

Prenotification:
- Albaum and Strandskov (1989) [41]: prior letter vs. control group. Not significant.
- Duhan and Wilson (1990): prior letter vs. control group. Significant.
- Mitchell and Nugent (1991) [44]: prior telephone calls vs. control group. Significant.
- Murphy, Dalenberg, and Daley (1990) [45]: prior postcard vs. control group. Not significant.
- Murphy, Daley, and Dalenberg (1991) [46]: prior postcard vs. control group. Significant.

Follow-up:
- Jobber, Allen, and Oakland (1985) [32]: letter and questionnaire, telephone reminder. Significant.
- Jobber and Sanderson (1983) [33]: letter and questionnaire. Significant.
- Swan, Epley, and Burns (1980) [34]: letter. Significant. Letter and questionnaire. Significant.

Questionnaire color:
- Greer and Lohtia (1994) [15]: white vs. yellow vs. green vs. pink. Not significant.
- Jobber and Sanderson (1983) [33]: blue vs. white. Not significant.
- LaGarce and Kuhn (1995) [56]: blue-and-yellow version vs. black-and-white version. Significant.
- Pressley and Tullar (1977) [24]: yellow vs. blue vs. green vs. white. Not significant.

Questionnaire format:
- LaGarce and Kuhn (1995) [56]: user-friendly version vs. regular version. Significant.

Day of the Week

There is a dearth of research concerning the effect of the day of the week on which the respondent receives the questionnaire. Nötzel [47] conceptually suggested that, in obtaining responses from consumers, questionnaires that arrive at the weekend should elicit a higher response rate than questionnaires that arrive during the week. He argued that consumers are more willing to complete and return questionnaires that arrive at the weekend because they have more free time to do so. However, other reports suggest otherwise. In their research directed at consumer respondents, Blythe and Essex [48] found no significant difference between the response rate of questionnaires mailed on Thursday, before the weekend, and that of questionnaires mailed on Monday, after the weekend. Using survey-on-surveys research, Diamantopoulos and Schlegelmilch [20] appear to be the first to have examined industrial respondents' perception of the effect of day of the week on their willingness to participate in a mail survey. Their findings indicated that day of the week had no effect.

In the present research, we argue that day of the week is likely to have an effect on recipients' willingness to respond. Day of the week can be viewed as a situational factor associated with the time at which potential respondents receive the questionnaires. In this research, two categories of day of the week are proposed: early week and late week. Early week consists of Monday, Tuesday, and Wednesday (see footnote 1), whereas late week consists of Thursday and Friday. It is probable that the interval between the day the potential respondents receive the questionnaires and the weekend has a psychological effect on respondents' willingness to participate in a mail survey. Weekends are likely to act as an implicit deadline. We believe that when questionnaires arrive during the early week, recipients should be more willing to fill them out, since there is still ample time to complete them before the end of the week. In contrast, when questionnaires arrive during the late week, recipients may be less willing to fill them out, since the weekend is approaching and other work must receive higher priority. The time between the day the questionnaires arrive and the weekend is likely to act as a cushion affecting respondents' willingness to participate in a mail survey.
Hence, business respondents may be more willing to complete questionnaires that arrive during the early week than those that arrive during the late week. (Footnote 1: We consider Wednesday part of the early week because our pilot study strongly suggested that people regard Wednesday as part of the early week rather than the late week.)

Questionnaire Length

Past research on the effects of questionnaire length was mainly directed at nonindustrial respondents, with mixed results. Sletto [49], Scott [50], and Mason, Dressel, and Bain [51] found no significant difference between short and long questionnaires, whereas Stanton [52] found a difference between the two forms. Adams and Gale [53] found no difference in response rates between one and three pages, but they found a lower response rate for five pages. Mangione [3] suggested that even though the issue of questionnaire length had received a fair amount of attention, the results seemed inconclusive, and more research was needed to clarify the issue. Almost none of this work considers the effect of length on industrial response rates. In his review of the literature, Jobber [5, p. 193] states that there is an urgent need to test the conclusion of Kanuk and Berenson (1975), based entirely on evidence from nonindustrial populations, that "evidence does not indicate that short questionnaires are more likely to receive higher response rates than long questionnaires." Later, Jobber [54] found no significant difference between a nine-page questionnaire and a five-page questionnaire. Still more research is needed to establish the effect of questionnaire length on industrial response cooperation. Intuition suggests that respondents are more willing to fill out a short questionnaire than a long one. On average, shorter questionnaires are likely to produce better response rates than longer ones [55].
Even though the length of the questionnaire may not be important to consumer respondents, it is important to business respondents, because the questionnaires are likely to be completed on company time. Time is perceived as a cost or investment that business executives have to expend [22]. Industrial respondents may be more willing to fill out a short questionnaire than a long one because less time, energy, and effort are likely to be consumed.

Relative Importance of Response Inducement Factors

Experimental studies in the past have examined various inducement factors. Some were found to be effective; others were found to be ineffective or to yield inconsistent results. However, the relative importance of these factors has been left unexplored. It is important to set priorities among the various inducement factors; hence, our attempt here is to explore their relative importance. Ten major inducement factors were selected from a larger set. We also provided an "other" category for respondents who might be motivated by some other factor. The ten selected inducement factors were prenotification, follow-up, sponsorship of the study, content of the study, sensitivity of the study, incentives for responding, postage-paid reply envelope, cover letter, time of day of delivery, and set-up time to answer.

Questionnaire Design

The appearance of a questionnaire is perceived as a form of cost that recipients evaluate against perceived benefits [22]. Appearance factors may project an image of professionalism that could result in greater trust on the part of the recipient [21, p. 196]. Less effort is expended when a questionnaire is easy to follow, easy to answer, and pleasing to the eye. Researchers of industrial mail surveys have paid more attention to the effect of the color of questionnaire paper than to other aspects of questionnaire design. Studies on color generally suggest that single-color paper has no effect on response rates [15, 33]; however, a combination of two colors was found to be effective in inducing response [56].
Evidence on other aspects of questionnaire design is limited. LaGarce and Kuhn [56] appear to be the only ones to have tested the effect of a user-friendly questionnaire format (i.e., less technical in appearance and easier to read). They found support for the use of a user-friendly format. This study attempts to investigate respondents' preferences toward multiple aspects of questionnaire design. A combination of five aspects will be examined: structure of questions (fixed-alternative questions vs. open-ended questions), measurement scales (comparative vs. noncomparative scales), the nature of response (qualitative vs. quantitative), information sought (opinions vs. facts), and color of questionnaire paper (important vs. unimportant). The purpose is to find out which combination of design features is most effective.

METHODS

Study 1

SAMPLE AND QUESTIONNAIRE. Study 1 was conducted to examine the effects of the day of the week on which the respondent received the questionnaire, as well as the length of the questionnaire, on the likelihood of mail survey participation. The study also examined the relative importance of multiple inducement factors. The population of interest was small and medium-sized firms in one state. We confined our population to firms located in two major industrial counties. Our sampling frame was based on the most recent Harris Manufacturers Directory. The informant technique was used to select potential respondents: if only one executive was listed in the directory, that person was selected as our informant; otherwise, the second name on the list was selected. A cluster sampling technique was deemed most appropriate because our sampling frame was organized by counties and cities. The sampling procedure involved several steps. We first randomly selected a city from each county. Fifteen business firms were then randomly selected from each selected city as our sample units; however, if the selected city contained 15 firms or fewer, all of its firms were selected. This approach was used across the two counties. The total sample size drawn was 334.

In the questionnaire, we asked respondents which arrival day (i.e., the day the questionnaire is received) would produce the greatest probability of their filling out the questionnaire. We then provided respondents with various numbers of pages and questions and asked them to use a number between 0 and 100 to indicate the likelihood of their filling out a questionnaire of each length. Lastly, we asked respondents to divide 100 points among the 10 factors to reflect the relative importance of each factor in their willingness to participate in a mail survey. We pretested the questionnaire among faculty members and graduate students to make sure the questions were understandable. The questionnaire, accompanied by a cover letter on university letterhead, was sent to the company executives in the sample. The salutation of the cover letter was customized to the individual recipient, asking for cooperation in completing the questionnaire and returning it in a postage-paid return envelope. The letter contained a personalized signature. Of the 334 questionnaires sent, 304 were successfully delivered and 30 could not be delivered. Seventy-eight responses were received; two were excluded because they were severely incomplete. The total number of usable responses was 76, a response rate of 25%. The majority of respondents were either CEOs or vice presidents of their firms.
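One pass of the two-stage cluster selection described above can be sketched as follows. This is only an illustration: the county, city, and firm identifiers are hypothetical placeholders (the actual frame was the Harris Manufacturers Directory), and the sketch assumes the authors repeated city draws until the target sample size was reached, a detail the text does not spell out.

```python
import random

# Hypothetical sampling frame: counties -> cities -> firms.
# All identifiers below are placeholders for illustration only.
frame = {
    "County A": {"City A1": [f"firm_a{i}" for i in range(40)],
                 "City A2": [f"firm_b{i}" for i in range(12)]},
    "County B": {"City B1": [f"firm_c{i}" for i in range(25)]},
}

def draw_sample(frame, firms_per_city=15, seed=1):
    """Two-stage cluster sample: pick one random city per county,
    then up to `firms_per_city` random firms from that city (all
    firms if the city has that many or fewer)."""
    rng = random.Random(seed)
    sample = []
    for county, cities in frame.items():
        city = rng.choice(sorted(cities))   # stage 1: random city
        firms = cities[city]
        if len(firms) <= firms_per_city:
            sample.extend(firms)            # small city: take every firm
        else:                               # stage 2: random firms
            sample.extend(rng.sample(firms, firms_per_city))
    return sample

sample = draw_sample(frame)
print(len(sample))  # total across the two hypothetical counties
```

With this frame, a single pass yields at most 15 firms per selected city; reaching the paper's total of 334 would require repeating the procedure over additional city draws.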
Results

DAY OF THE WEEK. Twenty-three respondents (29.49%) said that the likelihood of their participation would increase if they received the questionnaire during the early week, whereas 15 respondents (19.23%) said that they were more likely to respond if they received the questionnaire during the late week. Forty respondents (51.28%) said that day of the week was not a factor in their decision to respond to a mail survey. Thus, about half of the respondents did not think that day of the week would affect their mail survey participation. We also tested the frequency distributions of respondents who preferred to receive the questionnaire during the early week against those who preferred the late week. The chi-square test for goodness of fit suggested no significant difference between the two groups (χ² = 1.68, p > .05). Day of the week appears to have no effect on respondents' willingness to participate in a mail survey.

QUESTIONNAIRE LENGTH. The likelihoods of responding to questionnaires of different lengths are reported in Table 2. The findings indicated that the likelihood of participation was higher when the respondent received a questionnaire with fewer pages and questions. The results suggest that if researchers are satisfied with a 30% response rate, then four pages is the optimal page length and 25 questions is the optimal number of questions. However, an inconsistency between the effect of page length and that of the number of questions appears if one assumes that eight to ten major questions (see footnote 2) are normally placed on each page. This would suggest that page length is possibly a more important factor than the number of questions. (Footnote 2: It is assumed that one question contains multiple scaled response items.)

TABLE 2. Likelihood of Mail Survey Participation: Questionnaire Length (likelihood, %)

Pages:
- One page: 84.92
- Two pages: 69.23
- Three pages: 42.68
- Four pages: 30.14
- Five pages: 22.04
- Six or more pages: 8.92

Questions:
- Five questions: 86.23
- Ten questions: 77.21
- Fifteen questions: 60.72
- Twenty questions: 49.03
- Twenty-five questions: 36.21
- Thirty questions: 24.89
- Thirty-five questions: 15.61

RELATIVE IMPORTANCE OF RESPONSE INDUCEMENT FACTORS. Respondents' evaluations of the relative importance of response inducement factors are reported in Table 3. The most important factor was the content of the study, followed by survey sponsorship and postage-paid return envelopes. Privacy/sensitivity of survey questions, cover letter, incentive in responding, and set-up time to answer (see footnote 3) were considered less important. The least important factors in stimulating response participation were follow-up and prenotification. (Footnote 3: Set-up time to answer is an uncontrollable factor. However, business respondents' set-up time may be directly related to the day of the week they receive the questionnaire. Future researchers may want to look at the interaction between set-up time to answer and day of the week.)

TABLE 3. Relative Importance of Response Inducement Factors (average importance score, total of 100 points)

- Content of survey questionnaire: 26.52
- Organization sponsoring study: 17.97
- Postage-paid reply envelope: 15.20
- Privacy/sensitivity of survey questions: 9.68
- Use of cover letter: 7.89
- Incentive in responding: 6.80
- Time of day of delivery: 6.23
- Set-up time to answer: 5.01
- Follow-up: 1.98
- Prenotification: 1.69
- Other (e.g., length, simplicity of questions): 1.03

Study 2

SAMPLE AND QUESTIONNAIRE.
Study 2 was carried out after Study 1 to examine respondents' preferences toward questionnaire design, as well as to check respondents' perceptions of the use of prenotification and follow-up, which were found to be the least important factors in the first study. A separate sample was used, drawn from the same population of interest defined in the first study. We used the same sampling procedures but excluded the sample units included in the first study. The total sample size drawn in this study was 355. We provided respondents with fixed-alternative responses to the questions on preferences toward the structure of questions, measurement scales, the nature of response, information sought, and color of questionnaire paper. Then, on a 7-point semantic differential scale where 1 means "greatly decreases" and 7 means "greatly increases," we asked respondents whether the use of prenotification and of a reminder letter would increase the likelihood of their participation. We pretested the questionnaire among faculty members and graduate students to make sure the questions were understandable. We used procedures similar to those of Study 1 to design the cover letter that was sent with each questionnaire to company executives. Of the 355 questionnaires sent, 317 were successfully delivered and 38 could not be delivered. Sixty-five responses were received; one was excluded because it was severely incomplete. The total number of usable responses was 64, a response rate of 20.19%.

Results

QUESTIONNAIRE DESIGN. Respondents' preferences toward questionnaire design are reported in Table 4. On the structure of questions, respondents preferred fixed-alternative questions over open-ended questions (79.69% vs. 6.25%). The chi-square test for goodness of fit indicated a significant difference in the frequency distributions between these two types of questions (χ² = 40.16, p < .01).
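The goodness-of-fit statistics reported in the two studies reduce to simple arithmetic over the published frequencies. The sketch below recomputes two of them in pure Python for the two-category case with equal expected frequencies; the helper name is ours, not the authors'.

```python
import math

def chi2_equal_freq(observed):
    """Chi-square goodness-of-fit statistic against equal expected
    frequencies, with the df = 1 p-value for the two-category case."""
    n = sum(observed)
    expected = n / len(observed)
    stat = sum((o - expected) ** 2 / expected for o in observed)
    # For df = 1: P(chi2 > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Study 1, day of the week: 23 early-week vs. 15 late-week preferences
print(chi2_equal_freq([23, 15]))   # chi2 ≈ 1.68, p ≈ 0.19 (not significant)

# Study 2, structure of questions: 51 fixed-alternative vs. 4 open-ended
print(chi2_equal_freq([51, 4]))    # chi2 ≈ 40.16, p < .01
```

The other Table 4 contrasts (9.26, 2.13, 10.00, and 29.49) follow from the same computation applied to their respective frequency pairs.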
For measurement scales, comparative scales were preferred over noncomparative scales (45.31% vs. 15.63%). The chi-square analysis showed a significant difference in the frequency distributions of respondents' preferences between these two scales (χ² = 9.26, p < .01). As to the nature of response, about half of the respondents indicated no preference (53.12%); qualitative response was slightly preferred over quantitative response (29.69% vs. 17.19%). However, the chi-square test suggested no significant difference in the frequency distributions between respondents who preferred giving a qualitative response and those who preferred giving a quantitative response (χ² = 2.13, p > .05). On the issue of information sought, respondents preferred questions asking for opinions to those asking for facts (46.88% vs. 15.62%). The chi-square test indicated a significant difference in the frequency distributions between the two types of information (χ² = 10.00, p < .01). Likewise, the frequency distributions of respondents who said that the color of questionnaire paper was not important and those who said it was important (76.56% vs. 12.50%) were significantly different (χ² = 29.49, p < .01).

TABLE 4. Preferences Toward Questionnaire Design (frequency, percentage)

Structure of questions:
- Fixed-alternative questions: 51 (79.69%)
- Open-ended questions: 4 (6.25%)
- No preference: 9 (14.06%)

Measurement scales:
- Comparative scales: 29 (45.31%)
- Noncomparative scales: 10 (15.63%)
- No preference: 25 (39.06%)

Nature of response:
- Quantitative response: 11 (17.19%)
- Qualitative response: 19 (29.69%)
- No preference: 34 (53.12%)

Information sought:
- Opinions: 30 (46.88%)
- Facts: 10 (15.62%)
- No preference: 24 (37.50%)

Color of questionnaire paper:
- Important: 8 (12.50%)
- Not important: 49 (76.56%)
- No opinion: 7 (10.94%)

A multiple correspondence analysis was then carried out to examine the associations among the five aspects of questionnaire design. The two-dimensional perceptual map is presented in Figure 1. The perceptual map clearly suggested three clusters.
One was associated with people who preferred questionnaires that used either noncomparative scales or open-ended questions when asking for facts. The second cluster was related to people who preferred questionnaires that used either comparative scales or fixed alternative responses when asking for opinions or numbers. The last cluster dealt with people who were indifferent. Color of questionnaire paper did not seem to be related to any other aspect of questionnaire design. PRENOTIFICATION AND FOLLOW-UP. The mean of the effect of prenotification was 3.92 with standard deviation of 1.31, whereas the mean of the effect of follow-up was 3.58 with standard deviation of 1.65. A one-sample z-test was performed to test whether the use of prenotification and follow-up had a positive impact on response rates. The result indicated that the use of prenotification had no significant impact on the likelihood of mail survey participation (z 0.49, p.05, two-tailed test). The result on the use of follow-up indicated a significant impact on the likelihood of mail survey participation (z 2.05, FIGURE 1. Multiple correspondence analysis: Aspects of questionnaire design. (Key: Structure of questions: A, fixed alternative questions; B, open-ended questions; C, no preference. Measurement scales: D, comparative scales; E, noncomparative scales; F, no preference. Nature of response: G, quantitative response; H, qualitative response; I, no preference. Information sought: J, opinions; K, facts; L, no preference. Color of questionnaire paper: M, important; N, not important; O, no opinion.) 105
p < .05, two-tailed test); however, the effect was in the opposite direction. Respondents perceived that if they had not filled out a questionnaire, a reminder letter would have had a negative effect. These results support those of the first study; that is, prenotification and follow-up are not important factors in stimulating response cooperation.

DISCUSSION

The results suggest that the day of the week on which the respondent receives the questionnaire has no impact on response willingness. About half of the respondents did not think that the day of the week would affect their decisions to participate in a mail survey. Although the results indicated that our expectation was in the right direction (i.e., the frequency distribution of respondents who preferred to receive the questionnaire early in the week was slightly larger than that of respondents who preferred to receive it late in the week), the difference was not statistically significant. What this might suggest is that respondents are likely to consider responding to a questionnaire when they have a lighter work load, and which day of the week brings a lighter or heavier work load is likely to be unpredictable. For some people, Mondays and Fridays may seem to carry a heavier work load than other days of the week; for others, the middle of the week may produce a heavier work load than Mondays and Fridays. The level of work load is likely to vary among people depending on the nature of their work, their bosses, subordinates, customers, suppliers, and so on. Hence, business people are likely to consider responding to a questionnaire when it arrives at a time they are not completely immersed in their work. However, we have to keep in mind that other factors, such as the content of the study or the appearance of the questionnaire, may also influence business people's willingness to participate in a mail survey. 
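The pairwise chi-square comparisons reported above for Table 4 can be reproduced from the published frequencies. The reported statistics are consistent with goodness-of-fit tests that compare the two substantive categories against equal expected frequencies, with the "no preference" responses excluded; that test structure is our inference, as the text does not spell it out. A sketch in Python:

```python
# Reproduce the chi-square statistics reported for Table 4.
# Assumption (inferred, not stated in the text): each test compares the
# two substantive categories against a uniform expected split, with the
# "no preference" respondents excluded.
from scipy.stats import chisquare

comparisons = {
    "measurement scales (comparative vs. noncomparative)": [29, 10],
    "nature of response (qualitative vs. quantitative)":   [19, 11],
    "information sought (opinions vs. facts)":             [30, 10],
    "paper color (not important vs. important)":           [49, 8],
}

for label, observed in comparisons.items():
    # chisquare() defaults to equal expected frequencies
    chi2, p = chisquare(observed)
    print(f"{label}: chi2 = {chi2:.2f}, p = {p:.4f}")
```

Running this recovers the reported values of 9.26, 2.13, 10.00, and 29.49, which supports the inferred test structure.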
It is possible that even though business people may not be preoccupied with work, they may decide not to cooperate if the questionnaire does not interest them. The investigation of questionnaire length adds to the limited body of research on the effect of length on business executives' response behavior. Business people are less likely than consumers to respond to a survey questionnaire because of the perceived costs associated with their time and effort. As a result, short questionnaires are more likely to generate higher response rates because less effort and energy are required to complete them. However, researchers must be careful when it comes to the issue of length. Care must be given to the possible interaction between page length and the number of questions. Respondents are unlikely to respond either to a questionnaire with few pages but many questions on each page (i.e., one using a crowded layout with a small font size) or to a questionnaire with few questions but many pages (i.e., one placing many items under each question). This research is, to our knowledge, the first to examine the relative importance of inducement factors that may affect survey participation. Business respondents appear to be very concerned about the content of any study in which they are asked to participate. If the content is of interest to them, the likelihood of cooperation will be enhanced. However, when studying a difficult topic dealing with sensitive issues, researchers may wish to emphasize other concurrent techniques, such as sponsorship of the study, a postage paid reply envelope, the cover letter, and incentives, in stimulating response rates. Based on the results of our first study, it appears that business respondents do not consider prenotification and follow-up to be important factors in stimulating their survey participation. 
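The one-sample z-tests reported earlier for prenotification and follow-up can be approximated from the published means and standard deviations. The sketch below assumes a sample of n = 64 (the base implied by the Table 4 percentages, e.g. 51/64 = 79.69%) and a hypothesized value of 4, a scale midpoint; both figures are our inference, not stated in this passage.

```python
# Approximate reproduction of the one-sample z-tests for
# prenotification (mean 3.92, sd 1.31) and follow-up (mean 3.58, sd 1.65).
# Assumptions (inferred, not stated in this passage): n = 64 respondents
# and a hypothesized scale-midpoint value of 4.
import math

def one_sample_z(mean, sd, mu0=4.0, n=64):
    """z = (mean - mu0) / (sd / sqrt(n))."""
    return (mean - mu0) / (sd / math.sqrt(n))

def p_two_tailed(z):
    """Two-tailed p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

z_pre = one_sample_z(3.92, 1.31)   # roughly -0.49
z_fup = one_sample_z(3.58, 1.65)   # roughly -2.04

print(f"prenotification: z = {z_pre:.2f}, p = {p_two_tailed(z_pre):.3f}")
print(f"follow-up:       z = {z_fup:.2f}, p = {p_two_tailed(z_fup):.3f}")
```

The magnitudes match the reported |z| values of 0.49 and 2.05 to within rounding, and the negative signs are consistent with the paper's observation that the follow-up effect ran in the opposite direction.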
The finding on follow-up appears to conflict with most past experimental studies, whereas the finding on prenotification provides additional support to those experimental studies suggesting nonsignificant effects of this factor on response rates. The results of the second study also support the results of the first study: prenotification and follow-up are not effective tools for generating higher response rates. The findings on these two factors are consistent with those reported by Diamantopoulos and Schlegelmilch [20]. The results seem to suggest that respondents' willingness to respond to a questionnaire may not be affected by prenotification and/or follow-up. Recipients of a questionnaire seem to use other factors to evaluate the perceived benefits and costs of participating in a mail survey. Prenotification may create some positive attitudes toward the study; however, recipients' decisions to cooperate are likely to depend on the characteristics of the questionnaire package. For example, recipients may decide not to cooperate if they find the questionnaire long and uninteresting. The same can be concluded for follow-up. Recipients may be willing to cooperate if they find the questionnaire interesting; however, they may postpone completing the questionnaire until a later time, as the task of completing a questionnaire is likely to receive low priority. In this
case, follow-up is likely to help remind the recipients to perform the task. In contrast, if the recipients do not like the questionnaire and decide not to cooperate, then a single follow-up is less likely to make a difference.⁴ The recipients' willingness to participate in a mail survey is therefore likely to be influenced more by the characteristics of the questionnaire package itself than by the use of prenotification and/or follow-up. Our multiple correspondence analysis suggests two possible designs when dealing with two different types of information. One is the use of noncomparative scales or open-ended questions when asking respondents for facts (excluding numbers). The other is the use of comparative scales or fixed-alternative responses when asking for opinions or numbers. These designs are likely to help create a positive attitude toward the questionnaire and decrease the perceived cost of responding to a mail survey. Our studies may be constrained by some limitations. Due to our budget constraint, we could not implement the study with a larger sample. Another limitation is our low response rates, which can partly be attributed to the topic of the study; the issues of mail surveys might not have interested the recipients of the questionnaires. However, we believe that nonresponse bias was not an issue in our studies, since our population of interest was a homogeneous group of small and medium-sized firms and our purpose was to study business executives' perceptions of mail surveys. Hence, our samples were deemed appropriate for meeting our objectives [57]. Various avenues of future research are also available to researchers who are interested in business respondents' behavior toward mail survey participation. 
For instance, researchers may want to utilize the survey-on-surveys approach to find out more about respondents' preferences regarding mail surveys and compare the results with those of conventional experimental research. More research is needed to establish conclusively the effect of the day of the week on which the respondent receives the questionnaire and its possible interaction with the respondent's set-up time to answer. Similarly, more empirical evidence is needed to establish the effect of questionnaire length on industrial response behavior. What should be the optimal number of pages and questions? Researchers may also want to examine the possible interaction between page length and the number of questions. The interaction between the day of the week the respondent receives the questionnaire and questionnaire length should also be investigated. Lastly, future research may also want to investigate carefully the effects of prenotification and follow-up. Is the recipients' willingness to cooperate really affected by prenotification and follow-up? Is follow-up really effective in changing respondents' willingness, or is it just a reminder that keeps willing recipients from forgetting to perform the task? These are important research questions that should be addressed in future research.

⁴ It is possible that some of those people who are not willing to respond in the first place may complete and return the questionnaires if multiple follow-up attempts are used. However, in this case the quality of the data might become a problem.

CONCLUSION

Low response rates have been a major concern for both academic and commercial researchers. This research contributes to the literature on methods of improving industrial mail survey response rates by utilizing the survey-on-surveys approach to study business executives' views on mail surveys. This approach can provide researchers with richer information than can experimental research. 
The approach also allows researchers to examine the effects of multiple inducement factors simultaneously in a single study. The results of our research suggest that day of the week does not have an effect on the likelihood of mail survey participation. However, the length of the questionnaire does have an impact on respondents' willingness to participate; short questionnaires appear to increase the likelihood that questionnaires will be completed. The results also indicate that the content of the study is the most important factor in stimulating response participation, followed by survey sponsorship and postage paid reply envelopes. Prenotification and follow-up are considered the least important factors. These results are supported by our second study, which indicates that neither prenotification nor follow-up has a positive effect on respondents' willingness to participate in a mail survey. In designing a questionnaire, researchers must pay attention to the format of the questions and the information sought. Noncomparative scales or open-ended questions should be used when asking respondents for facts, whereas comparative scales or fixed alternatives should be used when asking respondents for opinions or numbers.
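The multiple correspondence analysis behind these questionnaire-design recommendations can be sketched with the standard indicator-matrix approach: dummy-code each categorical answer and run a correspondence analysis (an SVD of the standardized residuals) on the resulting matrix. The responses below are hypothetical stand-ins, since the paper's raw data are not published.

```python
# Minimal multiple correspondence analysis via correspondence analysis of
# an indicator (dummy-coded) matrix. The six responses below are
# hypothetical stand-ins; the paper's raw data are not published.
import numpy as np

# Each row is a respondent; the two columns are categorical questions
# (e.g., measurement-scale preference and information sought).
responses = [
    ("comparative", "opinions"), ("comparative", "opinions"),
    ("noncomparative", "facts"), ("noncomparative", "facts"),
    ("comparative", "opinions"), ("no preference", "no preference"),
]

# Build the indicator matrix Z: one 0/1 column per category level.
levels = [sorted({row[j] for row in responses}) for j in range(2)]
Z = np.array([[1.0 if row[j] == lvl else 0.0
               for j in range(2) for lvl in levels[j]]
              for row in responses])

# Correspondence analysis of Z: SVD of the standardized residuals.
P = Z / Z.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of the category points for a 2-D perceptual map.
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
for name, coord in zip([l for ls in levels for l in ls], col_coords[:, :2]):
    print(f"{name:15s} {coord[0]: .3f} {coord[1]: .3f}")
```

With these toy data, category points that always co-occur (comparative with opinions, noncomparative with facts) land at identical positions on the map, mirroring the kind of clustering described for Figure 1.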
REFERENCES

1. Faria, A. J., and Dickinson, J. R.: The Effect of Reassured Anonymity and Sponsor on Mail Survey Response Rate and Speed with a Business Population. Journal of Business and Industrial Marketing 11, 66-76 (1996).
2. Kanuk, L., and Berenson, C.: Mail Surveys and Response Rates: A Literature Review. Journal of Marketing Research 12, 440-453 (1975).
3. Mangione, T. W.: Mail Surveys: Improving the Quality. Sage Publications, Thousand Oaks, CA, 1995.
4. Haggett, S., and Mitchell, V. W.: Effect of Industrial Prenotification on Response Rate, Speed, Quality, Bias, and Cost. Industrial Marketing Management 23, 101-110 (1994).
5. Jobber, D.: Improving Response Rates in Industrial Mail Surveys. Industrial Marketing Management 15, 183-195 (1986).
6. Jobber, D., and O'Reilly, D.: Industrial Mail Surveys: A Methodological Update. Industrial Marketing Management 27, 95-107 (1998).
7. Jobber, D., and Saunders, J.: The Specification and Estimation of a Robust Mail Survey Response Model. Proceedings, Annual Conference of the European Marketing Academy (Helsinki, Finland) 2, 865-879 (1986).
8. Schlegelmilch, B. B., and Diamantopoulos, A.: Prenotification and Mail Survey Response Rates: A Quantitative Integration of the Literature. Journal of the Market Research Society 33, 243-255 (1991).
9. Yu, J., and Cooper, H.: A Quantitative Review of Research Design Effects on Response Rates to Questionnaires. Journal of Marketing Research 20, 36-44 (1983).
10. Forsgren, R. A.: Increasing Mail Survey Response Rates: Methods for Small Business Researchers. Journal of Small Business Management 27, 61-66 (1989).
11. Paxson, M. C.: Follow-up Mail Surveys. Industrial Marketing Management 21, 195-201 (1992).
12. Dickinson, J. R., and Faria, A. J.: Refinements of Charitable Contribution Incentives for Mail Surveys. Journal of the Market Research Society 37, 447-453 (1995).
13. Duhan, D. F., and Wilson, R. D.: Prenotification and Industrial Survey Responses. Industrial Marketing Management 19, 95-105 (1990).
14. Faria, A. J., and Dickinson, J. R.: Mail Survey Response, Speed, and Cost. Industrial Marketing Management 21, 51-60 (1992).
15. Greer, T. V., and Lohtia, R.: Effects of Source and Paper Color on Response Rates in Mail Surveys. Industrial Marketing Management 23, 47-54 (1994).
16. Kalafatis, S. P., and Tsogas, M. H.: Impact of the Inclusion of an Article as an Incentive in Industrial Mail Surveys. Industrial Marketing Management 23, 137-143 (1994).
17. Murphy, I. P.: Surveying a Decade of Surveys in Germany. Marketing News 30, 20, 33 (1996).
18. Singer, E.: Public Reactions to Some Ethical Issues of Social Research: Attitudes and Behavior. Journal of Consumer Research 11, 501-509 (1984).
19. Demaio, T. J.: Refusals: Who, Where and Why? Public Opinion Quarterly 44, 223-233 (1980).
20. Diamantopoulos, A., and Schlegelmilch, B. B.: Determinants of Industrial Mail Survey Response: A Survey-on-Surveys Analysis of Researchers' and Managers' Views. Journal of Marketing Management 12, 505-531 (1996).
21. Childers, T. L., and Skinner, S. J.: Toward a Conceptualization of Mail Survey Response Behavior. Psychology & Marketing 13, 185-209 (1996).
22. Dillman, D. A.: Mail and Telephone Surveys: The Total Design Method. John Wiley & Sons, New York, 1978.
23. Belk, R. W.: Situational Variables and Consumer Behavior. Journal of Consumer Research 2, 157-164 (1975).
24. Pressley, M. M., and Tullar, W.: A Factor Interactive Investigation of Mail Survey Response Rates from a Commercial Population. Journal of Marketing Research 41, 108-112 (1977).
25. Futrell, C., and Hise, R. T.: The Effect of Anonymity and Same Day Deadline on the Response Rate to Mail Surveys. European Research, October, 171-175 (1982).
26. Tyagi, P. K.: The Effects of Appeals, Anonymity and Feedback on Mail Survey Response Patterns from Salespeople. Journal of the Academy of Marketing Science 17, 234-241 (1989).
27. Veiga, J. F.: Getting the Mail Questionnaire Returned: Some Practical Research Considerations. Journal of Applied Psychology 59, 217-218 (1984).
28. Angur, M. G., and Nataraajan, R.: Do Source of Mailing and Monetary Incentives Matter in International Industrial Mail Surveys? Industrial Marketing Management 24, 351-357 (1995).
29. Armstrong, J. S., and Yokum, J. T.: Effectiveness of Monetary Incentives: Mail Surveys to Members of Multinational Professional Groups. Industrial Marketing Management 23, 133-136 (1994).
30. Chawla, S. K., Balakrishnan, P. V. (Sundar), and Smith, M. F.: Mail Response Rates from Distributors. Industrial Marketing Management 21, 307-310 (1992).
31. Schneider, K. C., and Johnson, J. C.: Stimulating Response to Market Surveys of Business Professionals. Industrial Marketing Management 24, 265-276 (1995).
32. Jobber, D., Allen, N., and Oakland, J.: The Impact of Telephone Notification Strategies on Response to an Industrial Mail Survey. International Journal of Research in Marketing 2, 291-306 (1985).
33. Jobber, D., and Sanderson, S.: The Effects of a Prior Letter and Coloured Questionnaires on Mail Survey Response Rates. Journal of the Market Research Society 25, 339-349 (1983).
34. Swan, J. E., Epley, D. E., and Burns, W. L.: Can Follow-up Response Rates to a Mail Survey Be Increased by Including Another Copy of the Questionnaire? Psychological Reports 47, 103-106 (1980).
35. Childers, T. L., Pride, W. M., and Ferrell, O. C.: A Reassessment of the Effects of Appeals on Response to Mail Surveys. Journal of Marketing Research 17, 365-370 (1980).
36. Kerin, R. A., and Harvey, M. G.: Methodological Considerations in Corporate Mail Surveys: A Research Note. Journal of Business Research 4, 277-281 (1976).
37. Jobber, D., and Sanderson, S.: The Effect of Two Variables on Industrial Mail Survey Returns. Industrial Marketing Management 14, 119-121 (1985).
38. Pressley, M. M.: Care Needed When Selecting Response Inducements in Mail Surveys of Commercial Populations. Journal of the Academy of Marketing Science 6, 336-343 (1978).
39. Clark, G. L., and Kaminski, P. F.: How to Get More for Your Money in Mail Surveys. Journal of Services Marketing 4, 44-47 (1990).
40. Kimball, A. E.: Increasing the Rate of Return in Mail Surveys. Journal of Marketing 25, 63-64 (1961).
41. Albaum, G., and Strandskov, J.: Participation in a Mail Survey of International Marketers: Effects of Pre-Contact and Detailed Project Explanation. Journal of Global Marketing 2, 7-23 (1989).
42. Jobber, D., Birro, K., and Sanderson, S. M.: A Factorial Investigation of Methods of Stimulating Response to a Mail Survey. European Journal of Operational Research 37, 158-164 (1988).
43. Parasuraman, A.: Impact of Cover Letter Detail on Response Patterns in a Mail Survey. American Institute of Decision Sciences (13th meeting) 2, 289-291 (1981).
44. Mitchell, V., and Nugent, S.: Industrial Mail Surveys: The Costs and Benefits of Telephone Pre-Notification. Journal of Marketing Management 7, 257-270 (1991).
45. Murphy, P. R., Dalenberg, D. R., and Daley, J. M.: Improving Survey Response with Postcards. Industrial Marketing Management 19, 349-355 (1990).
46. Murphy, P. R., Daley, J. M., and Dalenberg, D. R.: Exploring the Effects of Postcard Prenotification on Industrial Firms' Response to Mail Surveys. Journal of the Market Research Society 33, 335-341 (1991).
47. Nötzel, R.: The Theory and Practice of the Mail Survey. The European Marketing Research Review 7, 71-106 (1972).
48. Blythe, I., and Essex, P.: Variations on a Postal Theme. Market Research Society Annual Conference Proceedings, 35-51 (1981).
49. Sletto, R. F.: Pretesting of Questionnaires. American Sociological Review 5, 193-200 (1940).
50. Scott, C.: Research on Mail Surveys. Journal of the Royal Statistical Society 124 (Series A, Part 2), 143-195 (1961).
51. Mason, W. S., Dressel, R. J., and Bain, R. K.: An Experimental Study of Factors Affecting Response to a Mail Survey of Beginning Teachers. Public Opinion Quarterly 25, 296-299 (1961).
52. Stanton, F.: Notes on the Validity of Mail Questionnaire Returns. Journal of Applied Psychology 23, 95-105 (1939).
53. Adams, L. L. M., and Gale, D.: Solving the Quandary Between Questionnaire Length and Response Rate in Educational Research. Research in Higher Education 17, 231-240 (1982).
54. Jobber, D.: An Examination of the Effects of Questionnaire Factors on Response to an Industrial Mail Survey. International Journal of Research in Marketing 6, 129-140 (1989).
55. Erdos, P. L., and Morgan, A. J.: Professional Mail Surveys. Robert E. Krieger Publishing Company, Malabar, FL, 1983.
56. LaGarce, R., and Kuhn, L. D.: The Effect of Visual Stimuli on Mail Survey Response Rates. Industrial Marketing Management 24, 11-18 (1995).
57. Hunt, S.: Commentary on an Empirical Investigation of a General Theory of Marketing Ethics. Journal of the Academy of Marketing Science 18, 173-177 (1990).