Assessment & Evaluation in Higher Education, Vol. 29, No. 5, October 2004

Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations

Curt J. Dommeyer*, Paul Baum, Robert W. Hanna & Kenneth S. Chapman
California State University, Northridge, USA

This study compares student evaluations of faculty teaching that were completed in-class with those collected online. The two methods of evaluation were compared on response rates and on evaluation scores. In addition, this study investigates whether treatments or incentives can affect the response to online evaluations. It was found that the response rate to the online survey was generally lower than that to the in-class survey. When a grade incentive was used to encourage response to the online survey, however, a response rate was achieved that was comparable with that of the in-class survey. Additionally, the study found that online evaluations do not produce significantly different mean evaluation scores from traditional in-class evaluations, even when different incentives are offered to students who are asked to complete online evaluations.

Introduction

In the spring semester of 2000, the authors conducted a study to compare student evaluations gathered via the traditional method, i.e. in-class, with those collected online. Our purpose was three-fold: (i) to determine if the method of evaluation affects the response rate; (ii) to determine if the method of evaluation affects an instructor's teaching evaluation scores; (iii) to determine if online treatments affect evaluation response rates and teaching evaluation scores.

Problems with the traditional method of faculty evaluation

Virtually every university in the USA regularly conducts student evaluations of faculty teaching performance, the majority of which are conducted in a classroom setting with paper surveys.
The results of these evaluations are often used to make promotion, tenure and merit pay decisions and, consequently, generate controversy among faculty. While most previous research has focused on the psychometric properties of the questions, i.e. their reliability and validity, or on appropriate usage of the results (Centra, 1993), some studies have concluded that there are major concerns over how the evaluation data are collected (Franklin & Theall, 1989). Complaints include instructors manipulating ratings through their comments or actions when distributing questionnaires (Simpson & Siguaw, 2000), opportunities for instructors to alter results prior to turning them in (Ory, 1990) and a lack of written comments when students fill out questionnaires at the end of class (Layne et al., 1999).

*Corresponding author: Department of Marketing, College of Business Administration and Economics, California State University, Northridge, CA, USA.

The online method of faculty evaluation

Within the past few years, several papers have reported the development and use of online faculty evaluations for students receiving their instruction over the Internet (Henderson, 2001; Reid, 2001) or in a classroom setting (Ha & Marsh, 1998; Ha et al., 1998; Dommeyer et al., 2002b; Moss & Hendry, 2002; Ku, 2002/2003). In a typical online evaluation, students are provided with a web site address where they can gain access to the survey instrument. Prior to giving their responses, students are informed that professors will not have access to any student's individual responses and that professors will receive the results of the survey in summary form only after the final grades have been posted. After students log on to the online system, typically using a student ID number, they can indicate their responses to multiple-response items and type their answers to open-ended questions. After students submit their responses, they can receive a printed document that verifies that they have completed the evaluation. Students are generally given at least two weeks in which to provide their evaluations, usually near the end of term.

The online method of gathering faculty evaluations has numerous advantages over the traditional, in-class method.
Once an online evaluation system is established, many of the costs of the traditional method can be avoided, i.e. the costs of printing, distributing, collecting, scanning and storing the paper surveys, the costs of typing students' responses to open-ended questions and the costs of delivering hard copy summary reports to faculty. Kronholm et al. (1999) compared the costs of gathering faculty evaluations online with those of gathering them in-class for a 22-item survey of 327 distance learning students across 18 locations. They concluded that the online version of the survey would cost only a small fraction of the cost of the in-class version. Of course, larger scale surveys should generate even more cost savings, since the variable costs associated with an online survey are minimal or nonexistent.

The online evaluations are also less susceptible to faculty influence than in-class evaluations. In a typical in-class evaluation, it is possible that the faculty member might perform, on the day of the evaluation, some activity designed to elicit a favorable response from students, e.g. have a pizza party, play an entertaining educational game, announce that the workload requirements have been reduced or announce that there is now a way for students to earn extra credit (Simpson & Siguaw, 2000). Just the presence of the faculty member before or during an in-class evaluation could affect a student's response, especially if the student fears that the faculty member may have some way of identifying the student's responses. The online method of evaluation is less susceptible to these influences, since the student responds to the online survey from a personal computer of his or her choosing, outside the classroom, during a time period somewhat removed from the classroom experience. Moreover, since the faculty member has no contact with the evaluation forms of an online survey, there is no opportunity for a faculty member to alter the data after they have been collected.

With paper-and-pencil evaluations conducted in the classroom, students have only one opportunity to provide their opinion of their professor, i.e. during the class period in which the surveys are distributed. With the online method of evaluation, however, students have multiple days on which to provide their evaluation. Moreover, during the period of the online evaluation, the online system can be programmed to send reminder notices to those students who have not yet responded to the survey (Ha & Marsh, 1998; Ha et al., 1998; Crawford et al., 2001; Ku, 2002/2003). Furthermore, since students are not constrained by time during an online response session, they can provide as complete a response to the questions as they wish. Studies that have compared online and in-class responses to open-ended questions reveal that students provide more information to open-ended questions in the online format (Ha & Marsh, 1998; Layne et al., 1999; Hmieleski & Champagne, 2000).

Another advantage of the online method is that it can permit professors greater flexibility in the design of the survey instrument. Some online systems allow professors to generate questions specifically designed for their courses and to use complicated skipping and branching patterns (Ha & Marsh, 1998; Ha et al., 1998; Reid, 2001; Ku, 2002/2003).
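None of the systems cited above publish their branching engines, but the idea of skip and branching patterns can be sketched in a few lines. The following is an illustrative sketch only; the question texts, IDs and routing table are hypothetical, not taken from any of the cited systems.

```python
# Minimal sketch of skip/branching logic for an online evaluation form.
# All question IDs, wordings and routes are hypothetical illustrations.

QUESTIONS = {
    "q1": "Did the course include a laboratory component? (y/n)",
    "q2": "Rate the laboratory sessions (1-5).",
    "q3": "Rate the instructor overall (1-5).",
}

# Each entry maps (question, answer) -> next question; None ends the survey.
# A (question, None) entry is the default route for any other answer.
ROUTES = {
    ("q1", "y"): "q2",   # only lab courses see the lab question
    ("q1", "n"): "q3",   # everyone else skips straight to the overall rating
    ("q2", None): "q3",
    ("q3", None): None,
}

def next_question(current, answer):
    """Return the next question ID, honoring answer-specific skips."""
    return ROUTES.get((current, answer), ROUTES.get((current, None)))

# A student who answers "n" to q1 never sees the lab question.
assert next_question("q1", "n") == "q3"
assert next_question("q1", "y") == "q2"
assert next_question("q2", "4") == "q3"
```

Representing the routes as data rather than code is what lets a professor customize the questionnaire per course without reprogramming the survey engine.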
Polls of students who have used both the online and traditional methods of evaluation reveal that most students preferred the online method of evaluation and had little difficulty accessing and using the online system (Ha & Marsh, 1998; Ha et al., 1998; Layne et al., 1999). Those students with complaints about the online evaluations were most likely to think that the evaluations were too time consuming or to fear that their responses may not have been anonymous (Layne et al., 1999; Dommeyer et al., 2002b). Some students no doubt feel that the integrity of the online system could be compromised, causing their log-on ID number to be revealed alongside their responses. The lack of an anonymous response is also a concern of students using the traditional method of evaluation, as they sometimes fear that a professor will be able to identify their handwriting in answers to open-ended questions (Layne et al., 1999).

Faculty have been reluctant to adopt the online method of evaluation. In polls where faculty have been asked which method, online or traditional, they would prefer their students use for faculty evaluations, fewer than one-third of faculty chose the online method (Ha & Marsh, 1998; Dommeyer et al., 2002a). Faculty fear that the online method will produce a lower response rate and a less accurate response than the traditional method (Dommeyer et al., 2002a). Moreover, they fear not only that the online method may attract responses from students who rarely attend class but also that some students will be influenced by their peers during online evaluation (Ha & Marsh, 1998).
Only a few studies have investigated whether online evaluation produces biased results. Both Ha et al. (1998) and Layne et al. (1999) used experimental designs that randomly manipulated the method by which students completed the same faculty survey. Both studies concluded that the method of evaluation (online versus traditional) had no significant effect on the faculty ratings. In another study, Dommeyer et al. (2002b) compared respondents and non-respondents to an online faculty evaluation and found no evidence of a non-response bias on the following variables: gender, expected grade in the class and rating of the professor's teaching performance.

Currently, the principal problem with online evaluations is a potentially low response rate. Response rates to online faculty evaluations have ranged anywhere from 23% (Ha et al., 1998) to 92% (Ku, 2002/2003), with the higher response rates associated with surveys that used one or more reminder messages (Ha & Marsh, 1998; Ha et al., 1998; Ku, 2002/2003). When Layne et al. (1999) investigated how the method of evaluation (in-class versus online) affected the response rate to a faculty evaluation, they found that the in-class survey produced a higher response rate than the online method (60.6 versus 47.8%). To prevent low response rates to online evaluations, faculty may need to utilize techniques that will motivate students to participate in an online evaluation. Besides comparing the results of online and in-class faculty evaluations, this paper reports on the effectiveness of three techniques designed to increase the response rate to an online evaluation.

Experimental design

The study was conducted using undergraduate business majors at California State University, Northridge. A total of 16 instructors participated in the study.
Although the sample of instructors represents a convenience sample, the courses represent a cross-section of lower and upper division core courses that are required for business majors. Table 1 summarizes the distribution of instructors across the seven departments. Each instructor taught two sections of the same course: one section was evaluated online and the other was evaluated in-class using the traditional Scantron form. Each instructor was evaluated, both online and in-class, with the form approved by the instructor's department. Although the questions on the evaluation forms varied by department, all instructors in a given department, regardless of the course taught, used the same form for both the online and in-class evaluations.

The treatments and the randomized incomplete block design

Each of the instructors in this study was assigned to have one of his/her sections evaluated in-class and the other evaluated online. In the online evaluation, each instructor was assigned either to a control group or to one of the following online treatments:
1. a very modest grade incentive (one-quarter of one percent) for completing the online evaluations;
2. an in-class demonstration of how to log on to the web site and complete the form;
3. an early grade feedback incentive, in which students were told they would receive early feedback of their course grades (by postcard and/or posting of the grades online) if at least two-thirds of the class completed the online evaluations.

The treatment each professor received in the online evaluation was determined by a randomized incomplete block design. In this design, instructors represent the blocks and method of evaluation is the factor of interest. Method of evaluation is at five levels: three online treatments, an online control group that received no treatment and an in-class evaluation. Table 1 displays how the 16 instructors were assigned to the online evaluations: four instructors were assigned the grade incentive, two received the early grade feedback incentive, two were placed in the demo treatment and the remaining eight instructors were evaluated online without any treatment (the control group).

Standardizing procedures

To control the conditions under which the experiment was conducted, instructors were requested to read a prepared statement explaining the online procedure for the particular treatment assigned to the instructor's section. Instructors who gave a grade incentive informed their students that the grade incentive would be one-quarter of one percent for any student who completed the online evaluation. Instructors who provided the early grade feedback incentive told their students that at least two-thirds of the class would have to complete the online evaluations for the class to receive early feedback of their grades. Instructors who administered the demo treatment provided a live, in-class demonstration of how to log on to the online system, how to fill out the evaluation form and how to log off.
Students in the other online conditions were given written instructions on how to log on and complete the online evaluations but were not given a live demonstration. Each student doing an online evaluation had to log on to the system using a unique student ID number and a personal password. The online system prevented students from completing more than one evaluation of their instructor, and students could evaluate only classes in which they were enrolled. All students who were asked to evaluate their professor online, regardless of their treatment or control group assignment, were informed that they would have to complete the online evaluations on their own time. All students, whether they were asked to evaluate their professor online or in-class, were assured that their instructors would not have access to individual student evaluations but would be given summary reports only after the course grades were filed. The instructors were asked not to provide any additional information except for comments that pertained to the online treatment to which they were assigned.
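The two access-control rules just described, that a student may evaluate only classes in which he or she is enrolled and may evaluate each class only once, amount to a pair of lookups before a submission is accepted. The following is a hedged, in-memory sketch of that logic; the data structures and identifiers are hypothetical and do not describe the actual system used in the study.

```python
# Sketch of the two access-control rules described above: a student may
# evaluate only classes they are enrolled in, and only once per class.
# Student IDs, section IDs and storage are hypothetical illustrations.

enrollment = {
    "S001": {"MKT304-01"},
    "S002": {"MKT304-01", "FIN303-02"},
}
submitted = set()  # (student_id, section_id) pairs already evaluated

def accept_evaluation(student_id, section_id):
    """Return True and record the evaluation if both checks pass."""
    if section_id not in enrollment.get(student_id, set()):
        return False                      # not enrolled in this section
    if (student_id, section_id) in submitted:
        return False                      # already evaluated this class
    submitted.add((student_id, section_id))
    return True

assert accept_evaluation("S001", "MKT304-01") is True
assert accept_evaluation("S001", "MKT304-01") is False  # duplicate blocked
assert accept_evaluation("S001", "FIN303-02") is False  # not enrolled
```

A production system would keep the submitted-pairs record separate from the answers themselves, so that completion can be verified while the responses remain unlinked to the student.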
Table 1. Assignment of online treatments to instructors

Courses by area: Accounting/MIS, Business Law, Economics, Finance, Management, Management Science, Marketing.

Online treatment            Total instructors
Grade incentive             4
Grade feedback incentive    2
Demo                        2
None (control group)        8
Total                       16
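The random allocation summarized in Table 1 can be sketched programmatically. In the sketch below, each instructor is a block: the online condition is drawn so that the stated counts (four grade incentive, two feedback, two demo, eight control) are respected, and the order of the online section is randomized as well. This is an illustration under those assumptions, not the study's actual randomization code, and the instructor labels are hypothetical.

```python
# Sketch of the randomized incomplete block assignment behind Table 1.
# Each of 16 instructors (blocks) has one section evaluated in-class and
# the other online; the online condition is drawn in the stated counts.
import random

instructors = [chr(ord("A") + i) for i in range(16)]       # A..P, hypothetical labels
conditions = (["grade"] * 4 + ["feedback"] * 2 +
              ["demo"] * 2 + ["none"] * 8)

random.seed(0)                # fixed seed so the sketch is reproducible
random.shuffle(conditions)    # random online condition per block

design = {}
for name, cond in zip(instructors, conditions):
    online_first = random.random() < 0.5   # randomize section order too
    design[name] = {"online_treatment": cond,
                    "online_section": 1 if online_first else 2}

# Shuffling permutes but never changes the treatment counts.
counts = {c: conditions.count(c) for c in ("grade", "feedback", "demo", "none")}
assert counts == {"grade": 4, "feedback": 2, "demo": 2, "none": 8}
```

Randomizing which section of the back-to-back pair is evaluated online is what guards against the section-order (learning) effect discussed below.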
Other design considerations

A potential source of nuisance variation is differences between the characteristics of students in different sections of the same course. To minimize this source of variation, the 16 instructors selected for this study were, for the most part, ones who taught back-to-back sections of the same course. In a back-to-back teaching format, it is unlikely that students in the first section of an instructor's course are markedly different from students in the second section of the same course. Nuisance variation can also occur due to the order in which a section is taught, i.e. instructors, because of learning effects, might consistently perform better in the second section of a back-to-back assignment and, consequently, might receive better evaluations in that section. To minimize this section order effect, experimental conditions were randomly assigned to the various sections within the incomplete block design.

Research questions

One variable of interest is the evaluation response rate. For any given section of a course, the response rate is defined as the ratio of the number of students who completed an evaluation to the number of students who received a grade. When analyzing the response rates, we address the following questions. (1) What is the overall response rate when conducting evaluations in the traditional, in-class manner? (2) What is the overall online response rate? (3) Do the online treatments affect the online response rates? (4) Are there significant differences between the online and corresponding in-class response rates?

Additionally, the mean scores of each evaluation form item are of interest. In particular, we attempt to answer the following questions. (1) Does the method of evaluation (in-class versus online) affect an instructor's mean teaching evaluation scores?
(2) Does the treatment given to students to complete their online evaluations affect the mean evaluation scores?

The response rates

Table 2 indicates each instructor's online treatment, online response rate, in-class response rate and class size. Since class size is defined as the number of students who received a grade in the course, it represents the maximum number of students in a class who could participate in a faculty evaluation. The Difference column is the difference between the online and in-class response rates. To determine the statistical significance of the difference between the online and in-class response rates, a test of the difference between two proportions was applied to the in-class and online response rates of the two sections taught by each instructor. The significance levels of the test are given in the last column.

Table 2. Response rates by method of evaluation and online treatment

Instructor   Online treatment   Significance level
A            Grade              NS
B            Grade              NS
C            Grade              NS
D            Grade              NS
E            Demo               NS
F            Demo
G            Feedback
H            Feedback
I            None
J            None               NS
K            None
L            None
M            None
N            None
O            None
P            None

NS, not significant.

The method effect

To determine how the method of evaluation (online versus in-class) affected the response rate to the evaluations, one should focus in Table 2 on the results for instructors I to P, the eight instructors who did not apply a treatment when the online evaluations were conducted. All but one of these instructors received a significantly lower response rate when the data were collected online. The average response rate for these eight instructors combined was 29% for the online survey and 70% for the in-class survey (P < 0.001). These results are consistent with the Layne et al. (1999) study and clearly demonstrate that online evaluations conducted without treatments are likely to experience response rates below those of in-class surveys.

The method/treatment effect

When a treatment was applied to the online method, the treatment did not always produce a response rate that compared favorably with the response rate of the in-class method. When the early grade feedback incentive was applied to the online method, the response rates to that treatment were significantly lower than the
in-class method. Mixed results were obtained when the demo treatment was applied to the online method: in one case it obtained a response rate lower than the in-class method, while in another case there was no difference in response rates. Only in the case of the grade incentive was the online method able to achieve response rates comparable with those of the in-class method. In the four sections that gave grade incentives, the overall online response rate was 86.67%. How does this response rate compare with the overall response rate in the corresponding sections where the same instructors were evaluated in-class? It is virtually identical, at 86.99%. Thus, if one wishes to achieve online response rates that are similar to in-class response rates, a very mild grade incentive should be offered.

Table 3. Response rates by online treatment

Online treatment   Class size   Response rate
Grade
Demo
Feedback
None
Grand total

The online treatment effect

The overall response rate for each of the online treatments is displayed in Table 3. One-way ANOVA reveals that there are significant differences in response rates among the online treatments. To determine which treatment groups had response rates that were significantly different from each other, Bonferroni multiple comparison tests were conducted (see Table 4). The response rate for the grade incentive is significantly higher than the response rates for the other two treatments and the control group. The response rate for the demo treatment is significantly higher than the response rate for the control group, but it is not significantly different from the response rate for the early grade feedback incentive. Finally, the response rate for the control group is significantly lower than the response rate for the early grade feedback incentive. It is clear from these results that if one wants to achieve the highest response rate with the online method, one should use a grade incentive.

Table 4. Bonferroni tests on response rates by online treatment

Comparison                 Significance level
Grade vs. Demo
Grade vs. Feedback
Grade vs. None
Demo vs. Feedback          NS
Demo vs. None
None vs. Feedback

NS, not significant.

Mean item scores

Since there are multiple questions on each instructor's evaluation form, there is more than one response variable for each instructor. Consequently, to test the hypothesis that the method of evaluation (in-class versus online) has no effect on the mean evaluation scores, a one-way multivariate analysis of variance (MANOVA) was conducted on each instructor separately. No significant multivariate F values were obtained for the eight instructors who applied an online treatment, indicating that there were no significant differences between their online and in-class evaluations. Among the eight instructors who were evaluated online without a treatment, only one showed any significant difference between the online and in-class evaluations. This latter result could easily be a statistical anomaly due to chance. Therefore, the null hypothesis that the mean online and mean in-class evaluation scores are equal is not rejected. This finding suggests that online evaluations of teaching performance do not produce significantly different mean scores than traditional, in-class evaluations, even when different online treatments are used.

Discussion and conclusions

The online method of collecting teaching evaluations offers numerous advantages over the in-class method of evaluation: it is cheaper to administer, requires less class time, permits quick processing of the data, is less vulnerable to professorial influence, allows students as much time as they wish to evaluate faculty and allows students multiple opportunities to evaluate faculty.
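The per-instructor significance tests reported above compare two response rates as proportions. A minimal, self-contained sketch of such a two-proportion z-test follows; the section sizes and response counts are invented for illustration and are not the study's data.

```python
# Sketch of a two-proportion z-test of the kind used to compare an
# instructor's online and in-class response rates. Counts are hypothetical.
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Return (z, two-sided p) for H0: p1 == p2, using pooled variance."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical section: 12 of 40 students respond online, 28 of 40 in class.
z, p = two_proportion_ztest(12, 40, 28, 40)
print(f"z = {z:.2f}, p = {p:.4f}")   # a gap this large is significant
```

For the pairwise treatment comparisons of Table 4, multiplying each such p-value by the number of comparisons gives the standard Bonferroni adjustment.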
Though there are costs associated with managing the online web site, such as downloading the responses and preparing summary reports, these costs are far less than the material and labor costs associated with in-class evaluations. And while students may find it somewhat inconvenient to evaluate faculty online, they should have little difficulty understanding the online procedures, since students by now are quite comfortable with the Internet and web sites.

This study and its predecessors demonstrate that gathering teaching evaluations online is a viable alternative to the traditional, in-class method. There is no evidence, in this study or in previous ones, that the online method produces biased evaluations. In fact, prior research has demonstrated that online surveys may produce a higher quality and greater quantity of response to open-ended questions, since online respondents tend to provide more information in their free-form responses and are not worried that someone might identify their handwriting, as their responses are typed (Ha & Marsh, 1998; Layne et al., 1999; Hmieleski & Champagne, 2000). The only serious problem posed by the online method is a potentially low response rate. This study, however, illustrated that a mild grade incentive, i.e. one-quarter of 1%,
can be used to achieve a response rate that is comparable with that of the in-class survey.

Not all professors will be willing to use a grade incentive as an online response motivator. Some might argue that the grade incentive could bias the survey results in favor of students who are more concerned about their grades. However, this study found no bias in the evaluations when the grade incentive was used. Others might argue that it is unethical to use a grade incentive, since a student's participation in a faculty evaluation should be a voluntary event that has no bearing on the student's grade. For faculty who feel this way, other response motivators should be considered.

It appears that giving reminder messages to students is an effective means of enhancing the response rate to an online faculty evaluation. The literature offers several examples of online faculty evaluations that have achieved high response rates, and every one of these examples described the use of one or more reminder messages (Ha & Marsh, 1998; Ha et al., 1998; Ku, 2002/2003). In a web site survey of students on affirmative action, Crawford et al. (2001) compared the effect of sending two reminder messages, sent 2 and 4 days after the initial mailing, with that of a single reminder message sent 5 days after the initial mailing. The rate and speed of response were mildly higher for the condition that used two reminders. Should professors wish to use reminder messages to enhance the response to an online faculty evaluation, they can easily make reminder announcements in class and/or have the survey web site programmed to send reminder notices automatically to survey non-respondents on a regular basis.

Previous research has demonstrated that response to a web site survey can be enhanced with the use of a sweepstakes approach (Bosnjak & Tuten, 2003; Cobanoglu & Cobanoglu, 2003).
At this point, it is not clear what type of prize would be motivating to students yet not prohibitively expensive for a professor to offer. Perhaps a discount coupon from the campus bookstore or a gift certificate for a pizza would be a large enough incentive to entice the majority of students into responding.

When students log on to a web site to conduct an online evaluation, they should be required to use an access code. The access code ensures that the response is coming from a student in the class, and it prevents students from evaluating the class more than once. In many cases, a student's access code is the student's ID number. When students use their ID number to log on to the survey web site, professors and survey administrators must assure students that their identity will never be tied to their online evaluation. Even though students are told that their name or identifying number will never be associated with their response (and this promise is upheld), they may doubt that the online system can protect their identity. It may be necessary to develop strategies that increase students' perception that response to the survey is truly anonymous. One strategy that might work is to develop a set of access codes for the web site survey. In the classroom, the professor could randomly distribute the access codes to his/her students and then explain that it is impossible for an access code to be tied to a particular student. Students could then feel confident that their response to the online survey would be anonymous. Should students be required to verify that they have completed the evaluation, the online
system can be programmed to generate a proof-of-completion certificate for the evaluator.

Gathering evaluations of faculty through a web site survey is a relatively new technique, and many aspects of this survey method still need to be researched and fine-tuned. For example, when should the online faculty evaluation begin, and how much time should students be given to complete the evaluation? Most of the online evaluations in the literature started a few weeks before the final exam. However, since student evaluations of faculty are fairly stable from mid-term to the end of term (Costin, 1968; Feldman, 1979), it is conceivable that the online evaluations could start following mid-term. Also, when students respond online, should they be allowed only one attempt at the evaluation, or should they be allowed multiple visits to the same questionnaire so that they can either complete the survey or change their previous answers? There is also the question of whether students should be allowed to view the latest aggregate of the survey responses following their individual responses. The web site could be programmed to provide these data to students as a response motivator, but supplying such results could be an invasion of the professor's privacy. A final question is whether a progress indicator should be used on the survey web site. A progress indicator informs the respondent of the degree to which the survey has been completed. Previous research on progress indicators has produced mixed results. Crawford et al. (2001) found that a progress indicator dampened the response rate to a web site survey that contained both structured and open-ended items. However, the researchers provided anecdotal evidence that supported the use of a progress indicator when the survey contained only close-ended items.
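A progress indicator of the kind just discussed is computationally trivial: it reports the fraction of questionnaire items completed so far. A minimal sketch, with the label format a hypothetical choice:

```python
# Sketch of a survey progress indicator: report how much of the
# questionnaire has been completed. Label wording is illustrative.

def progress(answered: int, total: int) -> str:
    """Return a 'percent complete' label for the survey page."""
    pct = round(100 * answered / total)
    return f"{pct}% complete ({answered} of {total} items)"

assert progress(11, 22) == "50% complete (11 of 22 items)"
```

The research question is therefore not how to build the indicator but whether displaying it helps or hurts response, which, per Crawford et al. (2001), appears to depend on the mix of close-ended and open-ended items.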
Clearly, answers are needed to the above questions so that survey administrators will know the optimal strategy to use when conducting an online evaluation of faculty. The results of the present study are limited to several departments in a large college of business. Future researchers of the online method might benefit by conducting their research within other departments and campuses.

Notes on contributors

Curt J. Dommeyer is Professor and former Chairman of the Department of Marketing at California State University, Northridge. His research interests include survey methodology, health marketing, consumer privacy and pedagogy.

Paul Baum is Professor Emeritus of Systems and Operations Management and former Director of Assessment Outcomes at California State University, Northridge. He is a past president of the Southern California Chapter of the American Statistical Association and a member of the American Society for Quality. Professor Baum's research interests include statistical methods, quality management and program assessment.

Robert W. Hanna is Professor and former Chairman of the Department of Management at California State University, Northridge. His doctoral and post-doctoral work at UCLA's Anderson School of Management focused on individual and human systems development.
Kenneth S. Chapman is Professor of Economics at California State University, Northridge. He is a member of the American Economics Association and the Western Economics Association and has published in the areas of finance, law and economics, health economics and the economics of sports.

References

Bosnjak, M. & Tuten, T. L. (2003) Prepaid and promised incentives in Web surveys, Social Science Computer Review, 21(2).
Centra, J. A. (1993) Reflective faculty evaluation: enhancing teaching and determining faculty effectiveness (San Francisco, CA, Jossey-Bass).
Cobanoglu, C. & Cobanoglu, N. (2003) The effect of incentives in web surveys: application and ethical considerations, International Journal of Market Research, 45(4).
Costin, F. (1968) A graduate course in the teaching of psychology: description and evaluation, Journal of Teacher Education, 19(4).
Crawford, S. D., Couper, M. P. & Lamias, M. J. (2001) Web surveys: perceptions of burden, Social Science Computer Review, 19(2).
Dommeyer, C. J., Baum, P., Chapman, K. S. & Hanna, R. W. (2002a) Attitudes of business faculty toward two methods of collecting teaching evaluations: paper vs. online, Assessment & Evaluation in Higher Education, 27(5).
Dommeyer, C. J., Baum, P. & Hanna, R. W. (2002b) College students' attitudes towards methods of collecting teaching evaluations: in-class vs. online, Journal of Education for Business, 78(1).
Feldman, K. A. (1979) The significance of circumstances for college students' ratings of their teachers and courses, Research in Higher Education, 10(2).
Franklin, J. & Theall, M. (1989) Who reads ratings: knowledge, attitudes, and practice of users of student ratings of instruction, paper presented at the annual meeting of the American Educational Research Association, San Francisco, April.
Ha, T. S. & Marsh, J. (1998) Using the web for student evaluation of teaching: COSSET & OSTEI, paper presented at the Quality in Teaching and Learning Conference, Hong Kong, December.
Ha, T. S., Marsh, J. & Jones, J. (1998) A Web-based system for teaching evaluation, paper presented at the NCITT 98 Conference, Hong Kong, May.
Henderson, T. (2001) Classroom assessment techniques in asynchronous learning networks, The Technology Source, Sept./Oct. Available online.
Hmieleski, K. & Champagne, M. V. (2000) Plugging in to course evaluation, The Technology Source, Sept./Oct. Available online.
Kronholm, E. A., Wisher, R. A., Curnow, C. K. & Poker, F. (1999) The transformation of a distance learning enterprise to an Internet base: from advertising to evaluation, paper presented at the Northern Arizona University NAU/Web99 Conference, Flagstaff, AZ.
Ku, S. (2002/2003) Marshall online teaching evaluation system, Faculty Forum, 4(1). Available online.
Layne, B. H., DeCristoforo, J. R. & McGinty, D. (1999) Electronic versus traditional student ratings of instruction, Research in Higher Education, 40(2).
Moss, J. & Hendry, G. (2002) Use of electronic surveys in course evaluation, British Journal of Educational Technology, 33(5).
Ory, J. (1990) Student ratings of instruction: ethics and practice, in: Theall, M. & Franklin, J. (Eds) Student ratings of instruction: issues for improving practice (San Francisco, CA, Jossey-Bass).
Reid, I. C. (2001) Reflections on using the Internet for the evaluation of course delivery, The Internet and Higher Education, 4(1).
Simpson, P. M. & Siguaw (2000) Student evaluations of teaching: an exploratory study of the faculty response, Journal of Marketing Education, 22(3).