RateMyProfessors.com versus formal in-class student evaluations of teaching
A peer-reviewed electronic journal. Copyright is retained by the first or sole author, who grants right of first publication to the Practical Assessment, Research & Evaluation. Permission is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited.

Volume 12, Number 6, May 2007 ISSN

RateMyProfessors.com versus formal in-class student evaluations of teaching

Theodore Coladarci & Irv Kornfield
University of Maine

Using data for 426 instructors at the University of Maine, we examined the relationship between RateMyProfessors.com (RMP) indices and formal in-class student evaluations of teaching (SET). The two primary RMP indices correlate substantively and significantly with their respective SET items: RMP overall quality correlates r = .68 with SET item, "Overall, how would you rate the instructor?"; and RMP ease correlates r = .44 with SET item, "How did the work load for this course compare to that of others of equal credit?" Further, RMP overall quality and RMP ease each correlates with its corresponding SET factor derived from a principal components analysis of all 29 SET items: r = .57 and .51, respectively. While these RMP/SET correlations should give pause to those who are inclined to dismiss RMP indices as meaningless, the amount of variance left unexplained in SET criteria limits the utility of RMP. The ultimate implication of our results, we believe, is that higher education institutions should make their SET data publicly available online.

Student evaluations of teaching (SET) are part and parcel of the university classroom experience. This phenomenon can be traced back to the seminal work of Purdue University psychologist Herman Remmers, who, in the 1920s, pioneered the use of rating scales for evaluating instructors. Remmers and his associates arguably developed the first student evaluation form (Remmers, 1927), and they dominated SET research through the 1950s (Centra, 1993). Today, SET are used formatively to effect instructional improvement as well as summatively to inform decisions regarding promotion and tenure.

There is a wealth of empirical research on the reliability and validity of SET, a literature that has been summarized well by others (e.g., Algozzine et al., 2004; Cashin, 1995; Costin, Greenough, & Menges, 1971; d'Apollonia & Abrami, 1997; Greenwald, 1997; Marsh & Roche, 1997; McClatchy, 1997; McKeachie, 1997; Wachtel, 1998). Although faculty sentiments regarding the utility of SET are far from consentaneous (e.g., Nasser & Fresko, 2002), we agree with the position articulated by Centra (2003):

No method of evaluating college teaching has been researched more than student evaluations, with well over 2,000 studies referenced in the ERIC system. The preponderance of these study results has been positive, concluding that the evaluations are: (a) reliable and stable; (b) valid when compared with student learning and other indicators of effective teaching; (c) multidimensional in terms of what they assess; (d) useful in improving teaching; and (e) only minimally affected by various course, teacher, or student characteristics that could bias results.

Or in the more parsimonious words of McKeachie (1997, p. 1219): student ratings are "the single most valid source of data on teaching effectiveness." This is not to say that SET are unimpeachable (e.g., see Johnson, 2002). Rather, when instruments are properly constructed and the resulting data thoughtfully considered, SET can be an important source of
information for both improving teaching and informing personnel decisions.

Relative to the long history of SET, the online publication of student ratings of college instructors, such as RateMyProfessors.com (RMP), is a recent development. Established in 1999, RMP allows students to post 5-point ratings on individual instructors with respect to helpfulness ("Is this professor approachable, nice and easy to communicate with? How accessible is the professor and is he/she available during office hours or after class for additional help?"), clarity ("How well does the professor teach the course material? Were you able to understand the class topics based on the professor's teaching methods and style?"), and easiness ("Is this class an easy A? How much work do you need to do in order to get a good grade?"). Helpfulness and clarity are averaged by RMP to form overall quality. Students also can rate their prior interest in the class, indicate whether they believe the instructor is "hot" or "not hot" (in reference to instructor appearance), and provide general comments.

Averaged across student posts, these ratings are summarized by instructor in two locations on the RMP site. First, on the instructor's scorecard one finds the total number of posts, average easiness, average helpfulness, average clarity, and average overall quality. A hotness total also is provided, which is the sum of hot ratings (coded +1) and not-hot ratings (coded -1). Where a student opts not to rate the instructor on this dimension, a value of zero is figured into the hotness calculation. A hotness total greater than 0 indicates that hot ratings outnumber not-hot ratings, by a margin equal to the hotness total. For online presentation, RMP converts negative hotness totals to 0. The second location is the list of all instructors at the institution or in a particular department, where RMP reports for each instructor the total number of posts, overall quality, and ease. If the instructor's hotness total is greater than 0, this achievement is acknowledged by the display of a red chili pepper. [1]

RELATED RESEARCH

Several investigators have examined the statistical associations among the various RMP indices. [2] Felton, Mitchell, and Stinson (2004) downloaded RMP data for professors at the 25 institutions having the most student posts, resulting in a sample of 3,190 instructors representing 65,678 posts; only instructors having at least 10 posts were included. With the instructor as the unit of analysis, these researchers obtained a correlation of r = .61 between easiness and quality. This value is not dissimilar to the correlation of r = .50 reported by Davison and Price (2006) in their analysis of RMP indices for instructors at Appalachian State University. Interestingly, the easiness/quality correlation in the Felton et al. (2004) study was smaller for the 481 sexy instructors (pepper possessors) than it was for the 2,709 non-sexy instructors (r = .46 and .61, respectively), prompting these researchers to conclude that "the sexier the instructor, the more difficult his or her class can be while obtaining high marks on student evaluations" (p. 106). [3]

In an analysis involving non-sexy instructors only, Felton et al. (2004) found also that the correlation between easiness and quality grew stronger when based on increasingly more selective subsets of instructors regarding number of posts. For example, r = .61 for the 2,709 instructors having at least 10 posts, r = .68 for the 454 instructors having at least 30 posts, r = .76 for the 106 instructors having at least 50 posts, and r = .85 for the 37 instructors having at least 70 posts. Finally, these researchers reported that sexiness, a variable they constructed by dividing the hotness total by the total number of posts (J. Felton, personal communication, February 1, 2007), correlated significantly with both quality (r = .30) and easiness (r = .17). In a follow-up study (Felton, Koper, Mitchell, & Stinson, 2006), these correlations increased to r = .64 and r = .39, respectively, when based on a sexiness variable that allowed for the RMP-suppressed negative values (which these researchers obtained from the RMP president).

Riniolo, Johnson, Sherman, and Misso (2006) similarly found a relationship between hotness and quality. These researchers contrasted attractive (pepper) and unattractive (no pepper) instructors at the four universities having the most RMP posts; only instructors having at least 25 posts were included. Controlling for academic department, Riniolo et al. found that attractive instructors at each university were rated more favorably than their putatively unattractive counterparts, a finding that held for males and females alike. The corresponding effect sizes were considerable, ranging from .68 to 1.32 for males and .95 to 2.33 for females.

[1] For example, a red chili pepper would be awarded in each of these quite different scenarios (involving a class of 21 students): (a) all students rated the instructor as hot, (b) 10 students rated the instructor as not hot and 11 students rated the instructor as hot, and (c) one student rated the instructor as hot and 20 students provided no rating whatsoever.

[2] To distinguish between instructor-level and student-level RMP information available online, we use "indices" when referring to the averages reported for the instructor and "ratings" when referring to the contributions of individual students.

[3] There also may be a statistical artifact at play due to restricted variability among sexy instructors who, relative to their pepperless peers, had higher means on both quality and easiness.
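To make the arithmetic behind these indices concrete, here is a minimal sketch in Python that computes overall quality as the average of helpfulness and clarity, the hotness total with its +1/-1/0 coding and the zero floor that RMP applies for display, and the sexiness ratio used by Felton et al. (2004). The post records and field names are hypothetical illustrations, not RMP's actual data model.

```python
from statistics import mean

# Hypothetical posts for one instructor; field names are assumptions for illustration.
# "hot" is coded +1, "not hot" is coded -1, and a skipped hotness rating contributes 0.
posts = [
    {"helpfulness": 5, "clarity": 4, "easiness": 3, "hot": +1},
    {"helpfulness": 4, "clarity": 4, "easiness": 2, "hot": 0},   # student skipped hot/not hot
    {"helpfulness": 2, "clarity": 3, "easiness": 4, "hot": -1},
]

helpfulness = mean(p["helpfulness"] for p in posts)
clarity = mean(p["clarity"] for p in posts)
easiness = mean(p["easiness"] for p in posts)

# RMP's overall quality is the average of helpfulness and clarity.
overall_quality = (helpfulness + clarity) / 2

# Hotness total: sum of the +1/-1/0 codes; RMP displays negative totals as 0,
# and a chili pepper appears whenever the total exceeds 0.
hotness_total = sum(p["hot"] for p in posts)
displayed_hotness = max(hotness_total, 0)
has_pepper = hotness_total > 0

# The "sexiness" variable of Felton et al. (2004): hotness total divided by number of posts.
sexiness = hotness_total / len(posts)

print(overall_quality, easiness, displayed_hotness, has_pepper, round(sexiness, 2))
```

As the comments note, the zero floor applies only to the displayed value; the pepper scenarios in footnote 1 follow directly from the `hotness_total > 0` rule.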
As noted above, RMP fashions its measure of overall quality by averaging helpfulness and clarity. Felton et al. (2006) and Kindred and Mohammed (2005) both report a high correlation between helpfulness and clarity (r = .94 and .86, respectively), which provides warrant for the union of these two items as a composite.

Although we focus above on the intercorrelations among RMP indices, some researchers also folded student comments into the mix. In their multi-institution study, for example, Kindred and Mohammed (2005) coded posted comments according to various themes (e.g., instructor intelligence, competence, personality) and, in a student-level analysis, then correlated RMP ratings with variables derived from these codings. For instance, instructor competence correlated r = .84 and .85, respectively, with helpfulness and clarity ratings. Taking a somewhat more qualitative tack, Felton et al. (2004) categorized instructors according to the quadrant they occupied in the easiness-quality scatterplot and then extracted illustrative comments regarding such instructors. For example, a high-quality, low-easiness instructor drew the following student comment: "This class is a lot of work, but it's certainly not impossible. He really knows his stuff and if you ask for help, you'll get it. One of the best professors I've had yet." In contrast, a student posted this comment for a low-quality, high-easiness instructor: "Easy grader, lectures are boring, and you don't really learn anything."

PRESENT STUDY

From the modest RMP research literature, we see that instructors rated more highly on easiness also are rated more highly on overall quality in comparison to their more-difficult counterparts. Further, instructors deemed hot have somewhat higher ratings on both overall quality and easiness when compared to those who do not enjoy this distinction. To our knowledge, no one has examined the correspondence between RMP indices and SET, i.e., the formal, in-class student evaluations of teaching solicited at the instructor's institution. Knowing the concurrent validity of RMP indices in this regard would be valuable information indeed.

With over 6,750,000 ratings of over one million instructors across more than 6,000 schools, RMP enjoys considerable use. Anecdotal accounts as well as more systematic study (e.g., Davison & Price, 2006; Kindred & Mohammed, 2005) suggest that students consult RMP to inform course-taking decisions. Such consultation would be warranted if RMP indices were shown to correlate highly with SET. On the other hand, using RMP in this manner would be ill-advised if there were poor correspondence between RMP indices and SET. In short, the present study is a first step in throwing empirical light on this concern.

Using SET data from the University of Maine (UMaine), we address two fundamental questions: First, how do the RMP indices for an instructor correlate with the instructor's SET ratings? This speaks to the concurrent validity of RMP indices, where the institution's SET serve as the criterion variables. Second, does the general magnitude of these concurrent validity coefficients depend on the number of RMP posts an instructor has? In other words, are RMP indices equally trustworthy whether based on sentiments of the few or the many?
Our primary interest is in the overall quality and ease RMP indices, although we examine the pepper distinction as well.

METHOD

Data Sources

For each UMaine instructor present on the RMP site in mid-December 2006, we recorded the overall quality and ease indices, the presence (1) or absence (0) of a red chili pepper, and the number of posts. We included adjunct and full-time instructors alike, but excluded teaching assistants and graduate-level instructors. We did not impose an inclusion criterion regarding minimum number of RMP posts. The resulting data file was given to the Associate Director of the UMaine Office of Institutional Studies, who added SET data for each instructor.

The 29 items on the UMaine SET form each rest on a five-point scale (see Appendix A). For the present study, we selectively reversed item scales so that higher values always represent a more desirable characteristic (greater clarity, fairer grading procedures, much intellectual discipline required, etc.). The two exceptions are Item 17 ("How was the pace at which the materials in the course were covered?"), where 1 = too fast and 5 = too slow, and Item 20 ("How did the work load for this course compare to that of others of equal credit?"), where 1 = much heavier and 5 = lighter. For each instructor, the Associate Director provided each of the 29 items averaged across all undergraduate courses the instructor taught between spring 2000 and spring 2006 (to correspond roughly to the operation of RMP). He then removed identifying information from the records and forwarded the merged database to us for analysis. We eliminated one instructor whose number of postings exceeded the total enrollment, resulting in a final database comprising 426 instructors.
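As a rough illustration of this data assembly, the sketch below reverse-scores selected items and merges instructor-level SET item means with the RMP indices. The file names, column names, and the particular items reversed are hypothetical placeholders; they stand in for the actual UMaine records, which are not public.

```python
import pandas as pd

# Hypothetical input files; all column names here are assumptions for illustration.
rmp = pd.read_csv("rmp.csv")      # instructor_id, overall_quality, ease, pepper, n_posts, enrollment
set_raw = pd.read_csv("set.csv")  # instructor_id, course, semester, item_1 ... item_29

# Reverse-score selected items so that higher always means a more desirable characteristic.
# (Items 17 and 20 are left on their original scales, as in the study; this list is a placeholder.)
reverse_scored = ["item_5", "item_21"]
for col in reverse_scored:
    set_raw[col] = 6 - set_raw[col]   # swaps 1<->5 and 2<->4 on a five-point scale

# Average each item across all undergraduate courses the instructor taught, then merge with RMP.
item_cols = [f"item_{i}" for i in range(1, 30)]
set_means = set_raw.groupby("instructor_id")[item_cols].mean().reset_index()
data = rmp.merge(set_means, on="instructor_id", how="inner")

# Drop any instructor whose number of RMP posts exceeds total enrollment (one such case in the study);
# "enrollment" is an assumed column.
data = data[data["n_posts"] <= data["enrollment"]]
print(data.shape)
```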
Analyses

We conducted several analyses to examine the concurrent validity of the primary RMP indices, overall quality and ease, as well as to explore the correlates of the dichotomous pepper index. First, we correlated each of the three RMP indices with Item 13 ("Overall, how would you rate the instructor?") and Item 20 ("How did the work load for this course compare to that of others of equal credit?"), the SET items we believed were most relevant to RMP overall quality and RMP ease, respectively. Although Item 13 and Item 20 are our primary item-level interest, for their descriptive value we also report correlations between the RMP indices and the remaining 27 SET items. We also correlated the RMP indices with the three orthogonal factors that obtained from our factor analysis, described below, of the SET item data.

Second, we conducted multiple regression analyses to determine whether the strength of association between an RMP index and SET criterion variable changes when the two remaining RMP indices are statistically held constant. Third, by calculating the RMP/SET correlations separately for instructors having relatively few versus relatively many RMP posts, we determined whether the strength of association between the RMP indices and SET criteria is related to the number of RMP posts on which the indices are based.

RESULTS

We began by conducting a principal components factor analysis of the SET item data, aggregated by instructor across courses and years. A three-factor solution, using varimax rotation, accounted for 76.34% of the total variance. The first factor, which we call Instructor (47.11%), is a general factor largely capturing perceptions of the instructor and instruction. For example, the highest loading item on this factor is Item 13, "Overall, how would you rate the instructor?" (1 = below average, 5 = excellent). Loading almost equally highly is Item 22, "What is your overall rating of this course?" (1 = poor, 5 = excellent). The second factor, which we have labeled Assessment (19.56%), reflects sentiments regarding the instructor's assessment practices. Here, the highest loading item is Item 29, "Overall, how would you rate the examination procedure?" (1 = poor, 5 = excellent). Finally, we call the third factor Facile (9.67%), and it reflects the perceived ease of the course. One of the highest-loading items on this factor is Item 20, "How did the work load for this course compare to that of others of equal credit?" (1 = much heavier, 5 = much lighter); another is the negatively loading Item 21, "How much intellectual discipline was required in this course?" (1 = very little, 5 = very much).

Table 1 reports the means and standard deviations for the 29 UMaine SET items, grouped according to the primary factor on which they load and in descending order of magnitude. Correlations between the RMP indices and all SET criterion variables are presented as well.

Table 1. Means, Standard Deviations, and RMP/SET Correlations (n = 426)

Instructor (factor; factor scores M = 0, SD = 1.00)
- Item 13. Overall, how would you rate the instructor? (below average ... excellent; loading .91): 4.30 (.44)
- Item 22. What is your overall rating of this course? (poor ... excellent; .89): 4.00 (.44)
- Item 6. How concerned was the instructor for the quality of his or her teaching? (unconcerned ... very concerned; .88)
- Item 4. How clearly did the instructor present ideas and theories? (often unclear ... very clear; .88): 4.25 (.39)
- Item 16. Did you develop significant skills in the field as a result of taking this course? (very little ... very much; .86): 3.90 (.44)
- Item 2. How clearly were the objectives of the course presented? (unclear ... very clear; .85): 4.23 (.40)
- Item 3. How enthusiastic was the instructor about the subject? (very little ... very much; .84): 4.58 (.32)
- Item 12. How genuinely concerned was the instructor with students' progress? (unconcerned ... very concerned; .84): 4.18 (.41)
- Item 11. Did the instructor inspire confidence in his or her knowledge of the subject? (little ... very much; .84): 4.58 (.32)
- Item 14. Were class meetings profitable and worth attending? (not usually ... always; .82): 4.16 (.45)
- Item 7. How orderly and logical were the instructor's presentations of the material? (not at all ... very much; .81): 4.22 (.40)
- Item 15. How would you rate the subject matter of this course? (uninteresting ... very interesting; .79): 4.00 (.50)
- Item 8. How open was the instructor to other viewpoints? (often closed ... very open; .78): 4.38 (.34)
- Item 9. Did the instructor show respect for the questions and opinions of the students? (rarely ... always; .78): 4.57 (.29)
- Item 5. How much were students encouraged to think for themselves? (very little ... very much; .77): 4.25 (.36)
- Item 19. Were students required to apply concepts to demonstrate understanding? (very little ... very much; .73): 4.22 (.34)
- Item 1. How prepared was the instructor for class? (often unprepared ... well prepared; .71): 4.49 (.33)
- Item 10. How often were examples used in class? (rarely ... always; .70): 4.56 (.28)
- Item 18. What is the overall rating of the primary textbook(s)? (poor ... excellent; .54): 3.60 (.39)

Assessment (factor; factor scores M = 0, SD = 1.00)
- Item 29. Overall, how would you rate the examination procedure? (poor ... excellent; .78): 3.99 (.37)
- Item 28. How fair were the grading procedures? (unfair ... completely; .78): 4.28 (.34)
- Item 25. Did the instructor let you know what he or she expected on tests and assignments? (not clear ... very clearly; .77)
- Item 26. Did exams reflect the important aspects of the course? (very little ... very much; .76): 4.09 (.41)
- Item 27. How clear were examination questions? (unclear ... very clear; .73): 4.04 (.37)
- Item 23. How promptly were assignments and tests returned? (too slow ... very prompt; .71)
- Item 24. Could tests be completed in the allotted time? (rarely ... always; .70): 4.10 (.49)

Facile (factor; factor scores M = 0, SD = 1.00)
- Item 21. How much intellectual discipline was required in this course? (very little ... very much; -.84): 3.89 (.35)
- Item 20. How did the work load for this course compare to that of others of equal credit? (much heavier ... lighter; .83): 2.71 (.41)
- Item 17. How was the pace at which the materials in the course were covered? (too fast ... too slow; .81): 2.76 (.19)

RMP index M (SD): overall quality 3.69 (.90); ease 3.10 (.80); pepper .17 (.38)

Note. Response anchors and each item's loading on its primary factor appear in parentheses; the entry following each item is its M (SD). Correlations of r ≥ .10 are statistically significant (p < .05, two-tailed).
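A solution of this kind can be reproduced with a few lines of linear algebra. The sketch below extracts three principal components from a 426 x 29 matrix of instructor-level item means and applies a varimax rotation; the input here is random placeholder data, so it illustrates the procedure only and will not reproduce the percentages or loadings reported above.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix (Kaiser's criterion when gamma = 1)."""
    p, k = loadings.shape
    R = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        )
        R = u @ vt
        if np.sum(s) < total * (1 + tol):   # stop once the criterion no longer improves
            break
        total = np.sum(s)
    return loadings @ R

# X: assumed 426 x 29 array of instructor-level SET item means (random placeholder here).
X = np.random.default_rng(0).normal(size=(426, 29))
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 3
unrotated = eigvecs[:, :k] * np.sqrt(eigvals[:k])   # principal-component loadings
rotated = varimax(unrotated)

print("variance explained by 3 components (%):", 100 * eigvals[:k].sum() / eigvals.sum())
print("rotated loading matrix shape:", rotated.shape)
```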
Figure 1. The relationship between RMP overall quality and SET Item 13 (n = 426), with area of high concordance highlighted.

Figure 2. The relationship between RMP ease and SET Item 20 (n = 426).
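Scatterplots of this kind are straightforward to regenerate from the instructor-level file. The sketch below uses simulated stand-ins for the 426 pairs of RMP overall quality and SET Item 13 values, so the plotted points and the printed r are illustrative only, not the study's data.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Placeholder data standing in for the 426 instructor-level pairs; both measures are on 1-5 scales.
rng = np.random.default_rng(0)
rmp_quality = np.clip(rng.normal(3.7, 0.9, 426), 1, 5)
set_item13 = np.clip(0.35 * rmp_quality + rng.normal(3.0, 0.3, 426), 1, 5)

r, p = stats.pearsonr(rmp_quality, set_item13)

fig, ax = plt.subplots()
ax.scatter(rmp_quality, set_item13, s=12, alpha=0.6)
ax.set_xlabel("RMP overall quality")
ax.set_ylabel("SET Item 13: Overall, how would you rate the instructor?")
ax.set_title(f"r = {r:.2f} (n = {len(rmp_quality)})")
plt.show()
```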
8 Practical Assessment, Research & Evaluation, Vol 12, No 6 8 Correlations Among RMP Indices Before presenting the primary results of this investigation, we briefly consider the correlations among the RMP indices. The two primary indices, RMP overall quality and RMP ease, correlate r =.40 (p <.001). Consistent with the RMP findings of Felton et al. (2004) and Davison and Price (2006), instructors who are rated more highly in overall quality also are seen as being somewhat easier. Further, the dichotomous pepper index is weakly associated with ease (r =.17, p =.001) and somewhat more substantively associated with overall quality (r =.34, p <.001). Regarding this latter finding, the means on RMP overall quality for the pepper and pepperless instructors are, respectively, 4.37 (SD =.62) and 3.55 (SD =.89), a difference that is statistically significant (t[424] = 7.55, p <.001) and equivalent to an effect size of Thus, pepper instructors are roughly one standard deviation higher on overall quality compared to their pepperless peers. This general finding, too, is consistent with RMP results reported by others (Felton et al., 2004, 2006; Riniolo et al., 2006). Correlations Between RMP Indices and SET Criterion Variables How do RMP indices correlate with the UMaine SET criteria? RMP overall quality correlates r =.68 (p <.001) with its SET item counterpart, Overall, how would you rate the instructor? (Item 13). As the scatterplot in Figure 1 reveals, there is particularly high concordance for instructors at the upper end of each measure (see superimposed box). In contrast, the data points in this figure fan out markedly for remaining values of RMP overall rating. RMP ease correlates r =.44 (p <.001) with its SET item counterpart, How did the work load for this course compare to that of others of equal credit? (Item 20). Statistical significance notwithstanding, this positive association betrays considerable scatter (see Figure 2). We find further convergence when SET factors serve as the criteria: RMP overall quality correlates r =.57 (p <.001) with Instructor, and RMP ease correlates r =.51 (p <.001) with Facile. In this light, of course, it should not be surprising to observe in Table 1 that RMP overall quality demonstrates, with few exceptions, high correlations with Instructor-clustered items, as does RMP ease with respect to Facile-clustered items. There is evidence of discriminant validity as well: RMP overall quality is unrelated to Facile (r =.08, p =.115) and its association with Item 20 is negligible (r =.10, p =.041). And while RMP overall quality correlates significantly with Assessment (r =.35, p <.001), here too the strength of association is less than that with either Item 13 (.68) or Instructor (.57). Similarly, RMP ease correlates less with Item 13 (r =.22, p =.001), Instructor (r =.10, p =.035), and Assessment (r =.28, p <.001) than it does with either Item 20 (.44) or Facile (.51). As for the RMP pepper index, this dichotomous variable demonstrates a statistically significant, albeit weak, relationship with Item 13 (r =.26, p <.001) and Instructor (r =.20, p <.001); its relationship is smaller still with Item 20 (r =.10, p <.05), Facile (r =.10, p <.05), and Assessment (r =.14, p <.01). An examination of the remaining pepper-related correlations in Table 1 confirms that this distinction bears little relationship to the SET criterion variables. Multiple Regression Analyses As seen above, the three RMP indices are intercorrelated. 
Does the strength of association between an RMP index and SET criterion variable change when the two remaining RMP indices are statistically held constant? Here, we focus on the criterion variables Item 13, Item 20, Instructor, Assessment, and Facile, each of which was regressed on the three RMP indices. Intercorrelations among all variables are shown in Table 2; the standardized regression coefficients and semipartial correlations (sr) for each of the five equations appear in Table 3. RMP overall quality and RMP ease continue to demonstrate association with their respective SET criterion variables even when statistical control is exercised (Table 3). With RMP ease and the pepper index held constant, RMP overall quality is related to both Item 13 (β =.70, p <.001) and Instructor (β =.63, p <.001). By squaring the semipartial correlation for RMP overall quality, one obtains the percentage of variance in each dependent variable that is explained by RMP overall quality alone: 37% and 30%, respectively, for Item 13 and Instructor. Similarly, RMP ease is related to both Item 20 (β =.48, p <.001) and Facile (β =.56, p <.001) when RMP overall quality and the pepper index are held constant. Here, RMP ease explains 19% and 27% of the variance, respectively, in Item 20 and Facile. RMP overall quality and RMP ease each remains associated with Assessment, although the magnitude of association is rather modest: β =.28 ( p <.001) in the case of overall quality, and β =.17 (p =.001) for ease.
Table 2. Intercorrelations among variables in the multiple regression analyses (n = 426). The eight variables are (1) RMP overall quality, (2) RMP ease, (3) RMP pepper, (4) Item 13, (5) Item 20, (6) Instructor, (7) Assessment, and (8) Facile. Note. Correlations of r ≥ .10 are statistically significant (p < .05, two-tailed).

Table 3. Multiple regression results (n = 426). Each SET criterion, Item 13 (Adj. R² = .46), Item 20 (Adj. R² = .20), Instructor (Adj. R² = .35), Assessment (Adj. R² = .14), and Facile (Adj. R² = .27), was regressed on the three RMP indices (overall quality, ease, and pepper); for each index the table reports the standardized regression coefficient (β), the semipartial correlation (sr), and the p value.
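The quantities reported in Table 3, standardized coefficients and semipartial correlations, can be computed with ordinary least squares; squaring a semipartial correlation gives the share of criterion variance an index explains uniquely. The sketch below runs this analysis on simulated stand-ins for the three RMP indices and SET Item 13, so its printed output is illustrative only and does not reproduce the values in the article.

```python
import numpy as np

def standardized_betas_and_sr(X, y):
    """OLS on z-scored variables: standardized betas and semipartial correlations."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    betas, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(zy)), Z]), zy, rcond=None)
    betas = betas[1:]                                   # drop the intercept

    sr = np.empty(Z.shape[1])
    for j in range(Z.shape[1]):
        others = np.column_stack([np.ones(len(zy)), np.delete(Z, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, Z[:, j], rcond=None)
        resid = Z[:, j] - others @ coef                 # part of predictor j not shared with the others
        sr[j] = np.corrcoef(resid, zy)[0, 1]            # semipartial r of predictor j with the criterion
    return betas, sr

# Simulated stand-ins for RMP overall quality, ease, pepper, and SET Item 13 (n = 426).
rng = np.random.default_rng(1)
quality = rng.normal(3.7, 0.9, 426)
ease = 0.4 * quality + rng.normal(2.0, 0.7, 426)
pepper = (rng.random(426) < 0.17).astype(float)
item13 = 0.35 * quality + rng.normal(3.0, 0.3, 426)

betas, sr = standardized_betas_and_sr(np.column_stack([quality, ease, pepper]), item13)
for name, b, s in zip(["overall quality", "ease", "pepper"], betas, sr):
    print(f"{name:15s}  beta = {b:+.2f}   sr^2 = {s ** 2:.2f}")
```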
10 Practical Assessment, Research & Evaluation, Vol 12, No 6 10 The statistically significant, if weak, association between the pepper index and SET criterion variables, noted above, disappears altogether when RMP overall quality and RMP ease are held constant. Though negligible, two additional results warrant mention. First, RMP overall quality is inversely related to Item 20 (β = -.12, p =.021) and Facile (β = -.17, p <.001). That is, once RMP ease and RMP pepper are taken into account, lower RMP ratings of overall quality correspond to slightly higher SET ratings regarding easiness: Easier courses are perceived on RMP to be slightly lower in overall quality. Second, and in a similar vein, RMP ease is inversely related to Instructor (β = -.15, p =.001). Thus, with RMP overall quality and the pepper index held constant, higher RMP ratings of ease correspond to slightly lower SET ratings of the instructor: Superior instructors are perceived on RMP to be slightly more difficult. Number of RMP Posts For these 426 instructors, the number of posts ranges from 1 to 95, with a mean of (SD = 15.52) and a median of roughly 9. Does the magnitude of association between RMP indices and SET criterion variables depend on the number of RMP posts the instructor has? In other words, are RMP indices more trustworthy when based on more posts? We approached this question descriptively by examining RMP/SET correlations for two subgroups of instructors: those above, versus those below, the median number of RMP posts. In an exploratory spirit, we also formed subgroups in reference to the mean number of RMP posts, which, given the positive skew of this distribution, provides a more extreme upper group on this variable. We limited these analyses to RMP overall quality, RMP ease, and the SET criterion variables Item 13, Item 20, Instructor, and Facile. Although we report all 24 resulting correlations, our primary interest is in the correlation of RMP overall quality with Item 13 and Instructor, and the correlation of RMP ease with Item 20 and Facile. When subgroups are formed in reference to the median number of RMP posts, the difference in RMP/SET correlations from one group to the other is inconsistent (see Table 4). For example, the correlation between RMP overall quality and Item 13 is somewhat smaller when based on instructors having more than nine posts (.70 vs..67), the correlation between RMP ease and Item 20 is somewhat larger (.42 vs..48), and comparable results obtain with respect to RMP overall quality and Instructor (.57 vs..58) and RMP ease and Facile (.50 vs..51). However, a clearer pattern emerges when subgroups are based on the mean number of RMP posts, where we see that each RMP/SET correlation is larger for instructors having more posts. To be sure, the difference is negligible in the case of RMP overall quality and Item 13 (.68 vs..71), although somewhat larger for RMP overall quality and Instructor (.56 vs..61) and larger still for RMP ease and Item 20 (.40 vs..57). It is their consistency in direction, not absolute magnitude, that makes these differences noteworthy. DISCUSSION Before discussing these results, we briefly acknowledge the limitations of the present study. First and foremost, we used data for a single institution. Although we have no reason to believe that the import of our results is limited to the University of Maine, one nevertheless must be cautious in making generalizations to other institutions. 
On a related note, replication studies are needed: It remains to be seen whether similar results are obtained at institutions that differ from ours in size, RMP participation, SET statistics (e.g., central tendency and variability), admissions criteria, and other variables that may moderate the RMP/SET relationship. Second, because RMP indices are averaged over years and courses for each instructor, we followed suit in constructing the SET data base. Insofar as an instructor s effectiveness may vary over time and/or courses, aggregating data in this fashion likely resulted in lower RMP/SET correlations than would obtain had we created RMP indices and corresponding SET criterion variables that were time- and course-specific. Third, RMP has no quality-control mechanisms to prevent multiple posts from a student (for the same instructor and course) or, arguably worse, to prevent nonstudents from posting. The RMP slogan notwithstanding ( Where STUDENTS do the grading! ), instructors are known to make RMP contributions of their own sometimes playfully, sometimes self-servingly (e.g., Montell, 2006). Even when students in fact do the grading, their posts can be provided anytime during or after the course. Although the likely effect of these quality-control problems on RMP/SET correlations is unknown, these problems nevertheless compromise the utility of RMP for students and instructors alike.
Table 4. RMP/SET Correlations as a Function of the Number of RMP Posts. Correlations of RMP overall quality and RMP ease with the SET criteria (Item 13, Item 20, Instructor, Assessment, and Facile) are reported separately for instructors at or below the median number of posts (9 or fewer posts, n = 217) versus above it (more than 9 posts, n = 209), and for instructors below the mean number of posts (fewer than 14.4 posts, n = 283) versus above it (more than 14.4 posts, n = 143). Note. 50.9% of instructors had 9 or fewer RMP posts; the mean number of RMP posts is 14.4.

That said, the two primary RMP indices correlate substantively and significantly with their respective SET criterion variables: RMP overall quality correlates with SET Item 13 ("Overall, how would you rate the instructor?") and the SET Instructor factor, and RMP ease correlates with SET Item 20 ("How did the work load for this course compare to that of others of equal credit?") and the SET Facile factor. Moreover, these associations persist when statistical controls are in place. These results, we believe, should give pause to those who are inclined to dismiss RMP indices as meaningless.

Nevertheless, the RMP/SET correlations leave much unexplained variance in the criterion measures, which limits the utility of RMP relative to formal student evaluations of teaching. For example, the correlation of central interest (r = .68 between RMP overall quality and SET Item 13) reveals that over half (54%) of the variation in the SET measure is unrelated to variation in the RMP index. This is captured by the considerable scatter in Figure 1. The clear exception, as the superimposed box in this figure highlights, is found where instructors enjoy a very high RMP overall rating: They invariably are high on SET Item 13 as well. But among the many other instructors, there is poorer agreement between their RMP overall quality and the SET criterion. The pattern of this association suggests that when an instructor's RMP overall quality is particularly high, one can infer that the instructor truly is regarded as a laudatory teacher. However, there is considerable uncertainty about the instructor's true status, as measured by SET, when RMP overall quality is anything less than stellar. In a quick tabulation not reported above, for example, we found that all UMaine teaching award recipients had SET Item 13 means of roughly 4.0 or higher. However, only half of the awardees fell in the superimposed box in Figure 1. In short, the inevitability of many false negatives serves as an important cautionary note for RMP users.

As for the RMP pepper index, the weak association between this dichotomous variable and the primary SET criterion variables disappears altogether when the remaining RMP indices are held constant. Unlike RMP overall quality and RMP ease, then, the presence or absence of a pepper is unrelated to SET criteria: It contributes nothing of substance with respect to formal in-class student evaluations of teaching. In this light, the red chili pepper is a frivolous detraction that compromises the credibility of RMP. This conclusion echoes the findings of Kindred and Mohammed (2005, p. 11) regarding RMP use among their focus-group participants: The chili peppers are generally disregarded; students reported that they do not place importance on whether or not a teacher is regarded as sexy. One student summarized this idea by stating, "I think the hot tamale thing kind of takes away from the credibility of the site. If you're looking for a professor, obviously their level of attractiveness isn't really a top priority."

Finally, correlations between the two primary RMP indices and their respective SET criterion variables are consistently larger when based on instructors falling above, versus below, the mean number of RMP posts. Although these differences are small and were only examined descriptively, this finding suggests that RMP indices may be more trustworthy when based on many posts. Nevertheless, we confess our surprise at not obtaining much smaller RMP/SET correlations among instructors having relatively few posts, insofar as the students providing these RMP ratings should be less representative of the population of students providing the SET ratings. This is not the case with our results. It is for subsequent research to explain this counterintuitive finding.

Implications

We believe that our results carry at least two policy implications. The first we offer with some ambivalence; the second, with firm resolve.

First, and predicated on the belief that RateMyProfessors.com is not going to go away, higher education institutions should consider encouraging their students to post ratings and comments on RMP. If a large proportion of an institution's student body were to regularly and responsibly contribute to RMP, the potential value of that information to the institution would only be enhanced. The emphasis, however, must be on responsible contributions. For example, the institution could stress to its students the importance of providing RMP ratings for each course taken, and posting comments that are both constructive and respectful. Appealing to students' sense of decency and fair play, furthermore, the institution could endeavor to discourage students from rating the hotness of the instructor.

This first implication, to be sure, is complicated by the aforementioned quality control problems that beset RMP. These problems doubtless will remain (although probably to a lesser degree), even for an institution that genuinely promotes regular and responsible RMP participation of its students. Quality control notwithstanding, there also is no reason to believe that an institution's RMP participation rate could match, or even approach, the level of student participation in that institution's SET process. Further, the few RMP items on which students rate instructors typically pale in comparison to the many items, and underlying dimensions, found on SET forms. In short, RMP inevitably provides an unsatisfactory representation of both an institution's student body and the desired construct. Hence, our second policy implication: Higher education institutions should make their SET data publicly available online.
Although students doubtless would applaud this move, many faculty would oppose it because of genuine concerns about privacy and the negative consequences that published SET data may bring (e.g., see Howell & Symbaluk, 2001). But privacy is a thing of the past in the age of RMP, MySpace, and the like. Moreover, by not making SET data available to students, the negative consequence is greater still: Students will rely on what is publicly available. In light of our results, this inevitably will mischaracterize the true standing of many instructors as measured by formal student evaluations of teaching. References Algozzine, B. Beattie, J., Bray, M., Flowers, C., Gretes, J., Howley, L., et al. (2004). Student evaluation of college teaching: A practice in search of principles. College Teaching, 52, Cashin, W. E. (1995). Student ratings of teaching: the research revisited. IDEA Paper No. 32. Center for Faculty Evaluation & Development, Division of Continuing Education, Kansas State University. Retrieved February 1, 2005, from papers/idea_paper_32.pdf Centra. J. A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco: Jossey-Bass. Centra, J. A. (2003). Will teachers receive higher student evaluations by giving higher grades and less course work? Research in Higher Education, 44, Costin, F., Greenough, W. T., & Menges, R. J. (1971). Student ratings of college teaching: Reliability, validity, and usefulness. Review of Educational Research, 41, D Apollonia, S., & Abrami, P. C. (1997). Navigating student ratings of instruction. American Psychologist, 52,
13 Practical Assessment, Research & Evaluation, Vol 12, No 6 13 Davison, E., & Price, J. (2006, August). How do we rate? An evaluation of online student evaluations. Unpublished manuscript. Retrieved January 22, 2007, from methods/rmp_8_06.pdf Felton, J., Mitchell, J., & Stinson, M. (2004). Web-based student evaluations of professors: the relations between perceived quality, easiness, and sexiness. Assessment & Evaluation in Higher Education, 29(1), ). Felton, J, Koper, P. T., Mitchell, J. B., & Stinson, M. (July, 2006). Attractiveness, easiness, and other issues: Student evaluations of professors on RateMyProfessors.com. Retrieved February 1, 2007, from Greenwald, A. G. (1997). Validity concerns and usefulness of student ratings of instruction. American Psychologist, 52, Howell, A. J., & Symbaluk, D. G. (2001). Published student ratings of instruction: Revealing and reconciling the views of students and faculty. Journal of Educational Psychology, 93(4), Johnson, V. E. (2002). Teacher course evaluations and student grades: An academic tango. Chance, 15(3), Kindred, J. & Mohammed, S. N. (2005). He will crush you like an academic Ninja! : Exploring teacher ratings on Ratemyprofessors.com. Journal of Computer- Mediated Communication, 10(3), article 9. Retrieved October 12, 2006, from indiana.edu/vol10/issue3/kindred.html Marsh, H. W., & Roche, L. A. (1997). Making students evaluations of teaching effectiveness effective. American Psychologist, 52, McClatchy, W. J. (1997). Student ratings: The validity of use. American Psychologist, 52, Montell, G. A. (2006, September 27). The art of the bogus rating. The Chronicle of Higher Education. Retrieved November 22, 2006, from chronicle.com/jobs/news/2006/09/ c/c areers.html Nasser, F., & Fresko, B. (2002). Faculty views of student evaluations of college teaching. Assessment & Evaluation in Higher Education, 27(2), Remmers, H. H. (1927). The Purdue Rating Scale for Instructors. Educational Administration and Supervision, 6, Riniolo, R. C., Johnson, K. C., Sherman, T. R. & Misso, J. A. (2006). Hot or not: Do professors perceived as physically attractive receive higher evaluations? The Journal of General Psychology, 133(1), Wachtel, H. K. (1998). Student evaluation of college teaching: A brief review. Assessment & Evaluation in Higher Education, 23(2),
Appendix A

University of Maine SET form
Citation

Coladarci, Theodore & Irv Kornfield (2007). RateMyProfessors.com versus formal in-class student evaluations of teaching. Practical Assessment, Research & Evaluation, 12(6). Available online:

Note

The authors are indebted to Phil Pratt, who assembled the de-identified SET database for us and, further, provided insightful comments and suggestions throughout the investigation. We also are grateful to Jim Felton, Pete Kroper, and Janet Spector for their thoughtful feedback on an earlier draft of this article, Maryhaven for support, as well as to the anonymous reviewers for their constructive recommendations.

Authors

Authors are listed alphabetically. Coladarci ([email protected]) is Professor of Educational Psychology at the University of Maine, where Kornfield ([email protected]) is Professor of Biology and Molecular Forensics. Correspondence regarding this article can be directed to either author.
How To Determine Teaching Effectiveness
Student Course Evaluations: Key Determinants of Teaching Effectiveness Ratings in Accounting, Business and Economics Courses at a Small Private Liberal Arts College Timothy M. Diette, Washington and Lee
Journal College Teaching & Learning June 2005 Volume 2, Number 6
Analysis Of Online Student Ratings Of University Faculty James Otto, (Email: [email protected]), Towson University Douglas Sanford, (Email: [email protected]), Towson University William Wagner, (Email:
Constructive student feedback: Online vs. traditional course evaluations. Judy Donovan, Ed.D. Indiana University Northwest
www.ncolr.org/jiol Volume 5, Number 3, Winter 2006 ISSN: 1541-4914 Constructive student feedback: Online vs. traditional course evaluations Judy Donovan, Ed.D. Indiana University Northwest Cynthia E. Mader,
An Analysis of IDEA Student Ratings of Instruction in Traditional Versus Online Courses 2002-2008 Data
Technical Report No. 15 An Analysis of IDEA Student Ratings of Instruction in Traditional Versus Online Courses 2002-2008 Data Stephen L. Benton Russell Webster Amy B. Gross William H. Pallett September
Test Anxiety, Student Preferences and Performance on Different Exam Types in Introductory Psychology
Test Anxiety, Student Preferences and Performance on Different Exam Types in Introductory Psychology Afshin Gharib and William Phillips Abstract The differences between cheat sheet and open book exams
Teacher Course Evaluations and Student Grades: An Academic Tango
7/16/02 1:38 PM Page 9 Does the interplay of grading policies and teacher-course evaluations undermine our education system? Teacher Course Evaluations and Student s: An Academic Tango Valen E. Johnson
Student Response to Instruction (SRTI)
office of Academic Planning & Assessment University of Massachusetts Amherst Martha Stassen Assistant Provost, Assessment and Educational Effectiveness 545-5146 or [email protected] Student Response
CURRENT RESEARCH IN SOCIAL PSYCHOLOGY http://www.uiowa.edu/~grpproc/crisp/crisp.html
CURRENT RESEARCH IN SOCIAL PSYCHOLOGY http://www.uiowa.edu/~grpproc/crisp/crisp.html Volume 14, No. 10 Submitted: March 17, 2009 First Revision: March 25, 2009 Accepted: May 20, 2009 Published: June 10,
GMAC. Predicting Success in Graduate Management Doctoral Programs
GMAC Predicting Success in Graduate Management Doctoral Programs Kara O. Siegert GMAC Research Reports RR-07-10 July 12, 2007 Abstract An integral part of the test evaluation and improvement process involves
AN INVESTIGATION OF THE DEMAND FACTORS FOR ONLINE ACCOUNTING COURSES
AN INVESTIGATION OF THE DEMAND FACTORS FOR ONLINE ACCOUNTING COURSES Otto Chang, Department of Accounting and Finance, California State University at San Bernardino 5500 University Parkway, San Bernardino,
Online Collection of Midterm Student Feedback
9 Evaluation Online offers a variety of options to faculty who want to solicit formative feedback electronically. Researchers interviewed faculty who have used this system for midterm student evaluation.
COURSE AND TEACHER SURVEYS (CATS) AT VILLANOVA UNIVERSITY
COURSE AND TEACHER SURVEYS (CATS) AT VILLANOVA UNIVERSITY A guide for new faculty members There are probably more studies of student ratings than of all of the other data used to evaluate college teaching
GRADUATE STUDENT SATISFACTION WITH AN ONLINE DISCRETE MATHEMATICS COURSE *
GRADUATE STUDENT SATISFACTION WITH AN ONLINE DISCRETE MATHEMATICS COURSE * Amber Settle DePaul University 243 S. Wabash Avenue Chicago, IL 60604 (312) 362-5324 [email protected] Chad Settle University
Drawing Inferences about Instructors: The Inter-Class Reliability of Student Ratings of Instruction
OEA Report 00-02 Drawing Inferences about Instructors: The Inter-Class Reliability of Student Ratings of Instruction Gerald M. Gillmore February, 2000 OVERVIEW The question addressed in this report is
James E. Bartlett, II is Assistant Professor, Department of Business Education and Office Administration, Ball State University, Muncie, Indiana.
Organizational Research: Determining Appropriate Sample Size in Survey Research James E. Bartlett, II Joe W. Kotrlik Chadwick C. Higgins The determination of sample size is a common task for many organizational
Where has the Time Gone? Faculty Activities and Time Commitments in the Online Classroom
Where has the Time Gone? Faculty Activities and Time Commitments in the Online Classroom B. Jean Mandernach, Swinton Hudson, & Shanna Wise, Grand Canyon University USA Abstract While research has examined
Higher Performing High Schools
COLLEGE READINESS A First Look at Higher Performing High Schools School Qualities that Educators Believe Contribute Most to College and Career Readiness 2012 by ACT, Inc. All rights reserved. A First Look
Interpretation Guide. For. Student Opinion of Teaching Effectiveness (SOTE) Results
Interpretation Guide For Student Opinion of Teaching Effectiveness (SOTE) Results Prepared by: Office of Institutional Research and Student Evaluation Review Board May 2011 Table of Contents Introduction...
The Personal Learning Insights Profile Research Report
The Personal Learning Insights Profile Research Report The Personal Learning Insights Profile Research Report Item Number: O-22 995 by Inscape Publishing, Inc. All rights reserved. Copyright secured in
Student Ratings of College Teaching Page 1
STUDENT RATINGS OF COLLEGE TEACHING: WHAT RESEARCH HAS TO SAY Lucy C. Jacobs The use of student ratings of faculty has increased steadily over the past 25 years. Large research universities report 100
Comparison of Student and Instructor Perceptions of Best Practices in Online Technology Courses
Comparison of and Perceptions of Best Practices in Online Technology s David Batts Assistant Professor East Carolina University Greenville, NC USA [email protected] Abstract This study investigated the perception
GMAC. Which Programs Have the Highest Validity: Identifying Characteristics that Affect Prediction of Success 1
GMAC Which Programs Have the Highest Validity: Identifying Characteristics that Affect Prediction of Success 1 Eileen Talento-Miller & Lawrence M. Rudner GMAC Research Reports RR-05-03 August 23, 2005
Chapter 5: Analysis of The National Education Longitudinal Study (NELS:88)
Chapter 5: Analysis of The National Education Longitudinal Study (NELS:88) Introduction The National Educational Longitudinal Survey (NELS:88) followed students from 8 th grade in 1988 to 10 th grade in
Student Quality Perceptions and Preferences for MBA Delivery Formats: Implications for Graduate Programs in Business and Economics
International Journal of Economics and Management Sciences Vol. 2, No. 6, 2013, pp. 61-66 MANAGEMENT JOURNALS managementjournals.org Student Quality Perceptions and Preferences for MBA Delivery Formats:
Factors affecting professor facilitator and course evaluations in an online graduate program
Factors affecting professor facilitator and course evaluations in an online graduate program Amy Wong and Jason Fitzsimmons U21Global, Singapore Along with the rapid growth in Internet-based instruction
MEMORANDUM. RE: MPA Program Capstone Assessment Results - CY 2003 & 2004
MEMORANDUM TO: CC: FROM: MPA Program Faculty Mark Rosentraub, Dean Dennis Keating, Associate Dean Vera Vogelsang-Coombs Associate Professor & Director DATE: May 29, 2005 RE: MPA Program Capstone Assessment
GRADUATE FACULTY PERCEPTIONS OF ONLINE TEACHING
GRADUATE FACULTY PERCEPTIONS OF ONLINE TEACHING Sharon Santilli and Vesna Beck Nova Southeastern University The participants for this study were 47 doctoral faculty from Nova Southeastern University Fischler
Student Mood And Teaching Evaluation Ratings
Student Mood And Teaching Evaluation Ratings Mary C. LaForge, Clemson University Abstract When student ratings are used for summative evaluation purposes, there is a need to ensure that the information
Introduction to. Hypothesis Testing CHAPTER LEARNING OBJECTIVES. 1 Identify the four steps of hypothesis testing.
Introduction to Hypothesis Testing CHAPTER 8 LEARNING OBJECTIVES After reading this chapter, you should be able to: 1 Identify the four steps of hypothesis testing. 2 Define null hypothesis, alternative
Basel Committee on Banking Supervision. Working Paper No. 17
Basel Committee on Banking Supervision Working Paper No. 17 Vendor models for credit risk measurement and management Observations from a review of selected models February 2010 The Working Papers of the
How To Study The Academic Performance Of An Mba
Proceedings of the Annual Meeting of the American Statistical Association, August 5-9, 2001 WORK EXPERIENCE: DETERMINANT OF MBA ACADEMIC SUCCESS? Andrew Braunstein, Iona College Hagan School of Business,
C. Wohlin, "Is Prior Knowledge of a Programming Language Important for Software Quality?", Proceedings 1st International Symposium on Empirical
C. Wohlin, "Is Prior Knowledge of a Programming Language Important for Software Quality?", Proceedings 1st International Symposium on Empirical Software Engineering, pp. 27-36, Nara, Japan, October 2002.
Is Student Participation in an Honors Program Related to Retention and Graduation Rates? 1
Is Student Participation in an Honors Program Related to Retention and Graduation Rates? 1 Charlie Slavin, Theodore Coladarci, and Phillip A. Pratt University of Maine Introduction Do students who participate
Measuring the response of students to assessment: the Assessment Experience Questionnaire
11 th Improving Student Learning Symposium, 2003 Measuring the response of students to assessment: the Assessment Experience Questionnaire Graham Gibbs and Claire Simpson, Open University Abstract A review
Achievement and Satisfaction in a Computer-assisted Versus a Traditional Lecturing of an Introductory Statistics Course
Australian Journal of Basic and Applied Sciences, 3(3): 1875-1878, 2009 ISSN 1991-8178 2009, INSInet Publication Achievement and Satisfaction in a Computer-assisted Versus a Traditional Lecturing of an
Instructor and Learner Discourse in MBA and MA Online Programs: Whom Posts more Frequently?
Instructor and Learner Discourse in MBA and MA Online Programs: Whom Posts more Frequently? Peter Kiriakidis Founder and CEO 1387909 ONTARIO INC Ontario, Canada [email protected] Introduction Abstract: This
Concordia University Course Evaluation Survey: Summary Report
Concordia University Course Evaluation Survey: Summary Report Introduction In fall 2010, the Vice-Provost, Teaching and Learning engaged the Institutional Planning Office to conduct a survey of student
" Y. Notation and Equations for Regression Lecture 11/4. Notation:
Notation: Notation and Equations for Regression Lecture 11/4 m: The number of predictor variables in a regression Xi: One of multiple predictor variables. The subscript i represents any number from 1 through
College Success Workshops Online, On-Demand
College Success Workshops Online, On-Demand Click on the link to Online Success Workshops in your MyCSN portal. All you need is your NSHE ID number to log in. ACADEMIC & CAREER EXPLORATION Maximizing Your
RELATIONSHIPS BETWEEN SELF DIRECTION, EFFORT AND PERFORMANCE IN AN ONLINE INFORMATION TECHNOLOGY COURSE
RELATIONSHIPS BETWEEN SELF DIRECTION, EFFORT AND PERFORMANCE IN AN ONLINE INFORMATION TECHNOLOGY COURSE William D. Armitage, Naomi Boyer, Sarah Langevin, Alessio Gaspar University of South Florida Polytechnic
Examining Students Performance and Attitudes Towards the Use of Information Technology in a Virtual and Conventional Setting
The Journal of Interactive Online Learning Volume 2, Number 3, Winter 2004 www.ncolr.org ISSN: 1541-4914 Examining Students Performance and Attitudes Towards the Use of Information Technology in a Virtual
Section Format Day Begin End Building Rm# Instructor. 001 Lecture Tue 6:45 PM 8:40 PM Silver 401 Ballerini
NEW YORK UNIVERSITY ROBERT F. WAGNER GRADUATE SCHOOL OF PUBLIC SERVICE Course Syllabus Spring 2016 Statistical Methods for Public, Nonprofit, and Health Management Section Format Day Begin End Building
The Impact of Online Quizzes on Student Engagement and Learning
The Impact of Online Quizzes on Student Engagement and Learning Dr. Jennifer Hillman Fall 2011-Spring 2012 Introduction PSYCH 270: Introduction to Abnormal Psychology is typically a large lecture class
Does ratemyprofessor.com really rate my professor?
Assessment & Evaluation in Higher Education Vol. 33, No. 4, August 2008, 355 368 Does ratemyprofessor.com really rate my professor? James Otto*, Douglas A. Sanford, Jr. and Douglas N. Ross Towson University,
Community Colleges. Measuring Internationalization. AMERICAN COUNCIL ON EDUCATION The Unifying Voice for Higher Education
Measuring Internationalization at Community Colleges Funded by the Ford Foundation AMERICAN COUNCIL ON EDUCATION The Unifying Voice for Higher Education Center for Institutional and International Initiatives
A Hands-On Exercise Improves Understanding of the Standard Error. of the Mean. Robert S. Ryan. Kutztown University
A Hands-On Exercise 1 Running head: UNDERSTANDING THE STANDARD ERROR A Hands-On Exercise Improves Understanding of the Standard Error of the Mean Robert S. Ryan Kutztown University A Hands-On Exercise
PROCEDURES AND EVALUATIVE GUIDELINES FOR PROMOTION OF TERM FACULTY DEPARTMENT OF PSYCHOLOGY VIRGINIA COMMONWEALTH UNIVERSITY MARCH 31, 2014
PROCEDURES AND EVALUATIVE GUIDELINES FOR PROMOTION OF TERM FACULTY DEPARTMENT OF PSYCHOLOGY VIRGINIA COMMONWEALTH UNIVERSITY MARCH 31, 2014 This document describes procedures and evaluative guidelines
A revalidation of the SET37 questionnaire for student evaluations of teaching
Educational Studies Vol. 35, No. 5, December 2009, 547 552 A revalidation of the SET37 questionnaire for student evaluations of teaching Dimitri Mortelmans and Pieter Spooren* Faculty of Political and
SOCIETY OF ACTUARIES THE AMERICAN ACADEMY OF ACTUARIES RETIREMENT PLAN PREFERENCES SURVEY REPORT OF FINDINGS. January 2004
SOCIETY OF ACTUARIES THE AMERICAN ACADEMY OF ACTUARIES RETIREMENT PLAN PREFERENCES SURVEY REPORT OF FINDINGS January 2004 Mathew Greenwald & Associates, Inc. TABLE OF CONTENTS INTRODUCTION... 1 SETTING
Earnings Announcement and Abnormal Return of S&P 500 Companies. Luke Qiu Washington University in St. Louis Economics Department Honors Thesis
Earnings Announcement and Abnormal Return of S&P 500 Companies Luke Qiu Washington University in St. Louis Economics Department Honors Thesis March 18, 2014 Abstract In this paper, I investigate the extent
Descriptive Statistics and Measurement Scales
Descriptive Statistics and Measurement Scales. Descriptive statistics are used to describe the basic features of the data in a study. They provide simple summaries about the sample
A USABILITY STUDY OF INTERACTIVE WEB-BASED MODULES
A USABILITY STUDY OF INTERACTIVE WEB-BASED MODULES Tulay Girard Penn State Altoona, United States [email protected] Musa Pinar Valparaiso University, United States [email protected] ABSTRACT This research
Correlational Research
Correlational Research. Chapter Fifteen. Bring folder of readings. The Nature of Correlational Research: Correlational Research is also known as Associational Research.
EXCHANGE. J. Luke Wood. Administration, Rehabilitation & Postsecondary Education, San Diego State University, San Diego, California, USA
Community College Journal of Research and Practice, 37: 333-338, 2013. Copyright © Taylor & Francis Group, LLC. ISSN: 1066-8926 print/1521-0413 online. DOI: 10.1080/10668926.2012.754733 EXCHANGE The Community
Overview of Factor Analysis
Overview of Factor Analysis Jamie DeCoster Department of Psychology University of Alabama 348 Gordon Palmer Hall Box 870348 Tuscaloosa, AL 35487-0348 Phone: (205) 348-4431 Fax: (205) 348-8648 August 1,
Teaching in Accelerated Courses: The Faculty Perspective. By Carrie Johnson
Teaching in Accelerated Courses: The Faculty Perspective By Carrie Johnson This study included interviews with 18 faculty members who teach both adult accelerated and traditional courses. Participants
Blended Learning vs. Traditional Classroom Settings: Assessing Effectiveness and Student Perceptions in an MBA Accounting Course
Blended Learning vs. Traditional Classroom Settings: Assessing Effectiveness and Student Perceptions in an MBA Accounting Course Clement C. Chen, University of Michigan - Flint Keith T. Jones, Illinois
Factorial Invariance in Student Ratings of Instruction
Factorial Invariance in Student Ratings of Instruction Isaac I. Bejar Educational Testing Service Kenneth O. Doyle University of Minnesota The factorial invariance of student ratings of instruction across
An exploratory study of student motivations for taking online courses and learning outcomes
An exploratory study of student motivations for taking online courses and learning outcomes ABSTRACT Sarath A. Nonis Arkansas State University Grant H. Fenner Arkansas State University An investigation
STATISTICAL ANALYSIS OF UBC FACULTY SALARIES: INVESTIGATION OF
STATISTICAL ANALYSIS OF UBC FACULTY SALARIES: INVESTIGATION OF DIFFERENCES DUE TO SEX OR VISIBLE MINORITY STATUS. Oxana Marmer and Walter Sudmant, UBC Planning and Institutional Research SUMMARY This paper
Faculty Productivity and Costs at The University of Texas at Austin
Faculty Productivity and Costs at The University of Texas at Austin A Preliminary Analysis Richard Vedder Christopher Matgouranis Jonathan Robe Center for College Affordability and Productivity A Policy
Relating the ACT Indicator Understanding Complex Texts to College Course Grades
ACT Research & Policy Technical Brief 2016 Relating the ACT Indicator Understanding Complex Texts to College Course Grades Jeff Allen, PhD; Brad Bolender; Yu Fang, PhD; Dongmei Li, PhD; and Tony Thompson,
WWC Single Study Review A review of the design and summary of findings for an individual study
What Works Clearinghouse WWC Single Study Review A review of the design and summary of findings for an individual study U.S. DEPARTMENT OF EDUCATION July 2015 WWC Review of the Report Interactive Online
Do Supplemental Online Recorded Lectures Help Students Learn Microeconomics?*
Do Supplemental Online Recorded Lectures Help Students Learn Microeconomics?* Jennjou Chen and Tsui-Fang Lin Abstract With the increasing popularity of information technology in higher education, it has
International Conference on Communication, Media, Technology and Design. ICCMTD 09-11 May 2012 Istanbul - Turkey
OVERCOMING COMMUNICATION BARRIERS IN ONLINE TEACHING: UNDERSTANDING FACULTY PREFERENCES Ertunga C. Ozelkan and Agnes Galambosi Systems Engineering & Engineering Management University of North Carolina
Title: Transforming a traditional lecture-based course to online and hybrid models of learning
Title: Transforming a traditional lecture-based course to online and hybrid models of learning Author: Susan Marshall, Lecturer, Psychology Department, Dole Human Development Center, University of Kansas.
The relationship between goals, metacognition, and academic success
Educate~ Vol.7, No.1, 2007, pp. 39-47 Research Paper The relationship between goals, metacognition, and academic success by Savia A. Coutinho ([email protected]) Northern Illinois University, United States
Moving from Traditional to Online Instruction: Considerations for Improving Trainer and Instructor Performance
Moving from Traditional to Online Instruction: Considerations for Improving Trainer and Instructor Performance R. Lance Hogan, Ph.D., Assistant Professor, Eastern Illinois University Mark A. McKnight,
Descriptive Statistics
Descriptive Statistics Primer Descriptive statistics Central tendency Variation Relative position Relationships Calculating descriptive statistics Descriptive Statistics Purpose to describe or summarize
The Opportunity Cost of Study Abroad Programs: An Economics-Based Analysis
Frontiers: The Interdisciplinary Journal of Study Abroad The Opportunity Cost of Study Abroad Programs: An Economics-Based Analysis George Heitmann Muhlenberg College I. Introduction Most colleges and
Online, ITV, and Traditional Delivery: Student Characteristics and Success Factors in Business Statistics
Online, ITV, and Traditional Delivery: Student Characteristics and Success Factors in Business Statistics Douglas P. Dotterweich, East Tennessee State University Carolyn F. Rochelle, East Tennessee State
Communication Humor and Personality: Students' attitudes to online learning
Communication Humor and Personality: Students' attitudes to online learning Originally published in the Academic Quarterly Exchange, Summer 2001 Diane Goldsmith Ph.D. Connecticut Distance Learning Consortium
Success rates of online versus traditional college students
ABSTRACT Success rates of online versus traditional college students Dawn Wilson Fayetteville State University David Allen Fayetteville State University Are students setting themselves up for failure by
ACADEMIC DISHONESTY IN TRADITIONAL AND ONLINE CLASSROOMS: DOES THE MEDIA EQUATION HOLD TRUE?
ACADEMIC DISHONESTY IN TRADITIONAL AND ONLINE CLASSROOMS: DOES THE MEDIA EQUATION HOLD TRUE? Erik W. Black Doctoral Fellow and Candidate The School of Teaching and, The University of Florida Joe Greaser
University Undergraduate Teaching Quality. Chapter 3, Section 3.12. Background. Ministry of Training, Colleges and Universities
Chapter 3 Section 3.12 Ministry of Training, Colleges and Universities University Undergraduate Teaching Quality Background Ontario's 20 publicly assisted universities offer graduate and undergraduate
Best Practices for Increasing Online Teaching Evaluation Response Rates
Best Practices for Increasing Online Teaching Evaluation Response Rates Denise T. Ogden, Penn State University Lehigh Valley James R. Doc Ogden, Kutztown University of Pennsylvania ABSTRACT Different delivery
Assessing Blackboard: Improving Online Instructional Delivery
Assessing Blackboard: Improving Online Instructional Delivery Adnan A. Chawdhry [email protected] California University of PA Karen Paullet [email protected] American Public University System Daniel
Title: Enhancing Student Learning in Introduction to Sport Management. Author: Angela Lumpkin, Health, Sport, and Exercise Sciences
Title: Enhancing Student Learning in Introduction to Sport Management Author: Angela Lumpkin, Health, Sport, and Exercise Sciences Summary: In order to enhance and document students' learning, a sports
Single Level Drill Down Interactive Visualization Technique for Descriptive Data Mining Results
pp. 33-40, http://dx.doi.org/10.14257/ijgdc.2014.7.4.04 Single Level Drill Down Interactive Visualization Technique for Descriptive Data Mining Results Muzammil Khan, Fida Hussain and Imran Khan Department
Program Assessment: School of Criminology and Criminal Justice.
Program Assessment: School of Criminology and Criminal Justice. Florida Atlantic University David Binninger, Ph.D. Scott H. Decker, Ph.D. Mark R. Pogrebin, Ph.D. On March 9-12, the external review team
Structuring Self-Evaluations Prepared by Richard A. Holmgren Allegheny College
Structuring Self-Evaluations Prepared by Richard A. Holmgren Allegheny College Self-evaluations play a critical role in faculty review by providing a context for the faculty member's work. To be of service
Chapter 2. Education and Human Resource Development for Science and Technology
Chapter 2 Education and Human Resource Development for Science and Technology 2.1 Environment for Basic Human Resource Development... 53 2.1.1 Science education in primary and secondary schools... 53 2.1.2
AN ANALYSIS OF STUDENT EVALUATIONS OF COLLEGE TEACHERS
SBAJ: 8(2): Aiken et al.: An Analysis of Student. 85 AN ANALYSIS OF STUDENT EVALUATIONS OF COLLEGE TEACHERS Milam Aiken, University of Mississippi, University, MS Del Hawley, University of Mississippi,
Strategies for Promoting Gatekeeper Course Success Among Students Needing Remediation: Research Report for the Virginia Community College System
Strategies for Promoting Gatekeeper Course Success Among Students Needing Remediation: Research Report for the Virginia Community College System Josipa Roksa Davis Jenkins Shanna Smith Jaggars Matthew
