Journal of College Teaching & Learning, June 2005, Volume 2, Number 6




Analysis Of Online Student Ratings Of University Faculty

James Otto, (Email: jotto@towson.edu), Towson University
Douglas Sanford, (Email: dsanford@towson.edu), Towson University
William Wagner, (Email: William.wagner@villanova.edu), Villanova University

ABSTRACT

This article explores and analyzes a new, exciting, and quite different source of faculty evaluation data that has recently become available on the World Wide Web: online faculty evaluations/ratings. The authors analyze the relationships between different online student rating measures of faculty performance. They investigate how student ratings of faculty easiness, clarity, and helpfulness are interrelated. The results of the analysis provide insights into student perceptions of faculty and suggest implications for the practice of teaching.

INTRODUCTION

Student faculty evaluations are an important measure of teaching effectiveness and are a consideration for promotion and tenure at many educational institutions. Unfortunately, they are an imperfect measure: biases have been detected for such factors as student academic ability (McKeachie, 1997), grading policy leniency (Greenwald & Gillmore, 1997), workload and grades received (Wallace & Wallace, 1998), and class size (Liaw & Goh, 2003). Given the importance that student faculty evaluations can have for a faculty member's promotion and tenure, as well as for our understanding of teaching effectiveness, it is essential that we understand how these biases can affect their construct validity. In line with this need, extensive research has been conducted on student evaluations of faculty (Read et al., 2001; Martin, 1998).

Our research extends this knowledge by exploring and analyzing a new, exciting, and quite different source of faculty evaluation data that has recently become available on the World Wide Web: online faculty evaluation/rating sites. These sites include RateMyProfessor.com, PassCollege.com, ProfessorPerformance.com, RatingsOnline.com, and Reviewum.com (Stone, 2003; Foster, 2003). One may argue that these sites, since they are not part of the formal faculty evaluation process, can be ignored. However, this argument loses strength when one examines the amount of traffic at these sites. For example, RateMyProfessor.com has received over 2.7 million ratings of over 478,000 professors at more than 4,000 schools (RateMyProfessor Statistics, n.d.). The site is clearly popular with students and thus should perhaps be taken seriously by faculty.

In this paper we focus on RateMyProfessor.com because it is the largest, fastest growing, and most popular site (Stone, 2003). We discuss differences between the online faculty evaluation data and typical university evaluation data, and we analyze the online data to discover what insights it might provide. To date, a large body of research (Read et al., 2001; Martin, 1998) has been conducted on data derived from faculty evaluation forms that are filled out anonymously by students in the classroom using a formal, well-defined, and controlled process. Little research has been done on online faculty rating data, except for an analysis of professors' overall quality ratings as a function of easiness and sexiness ratings (Felton et al., 2004). Our research extends Felton et al.'s (2004) analysis by examining the relationships among the easiness rating and the constituent ratings of overall quality (the helpfulness and clarity ratings).

Note that overall quality is a derived rating (the average of the helpfulness and clarity ratings). By examining the component variables, we can understand some important relationships among easiness, helpfulness, and clarity and their implications for practice. We also extend Felton et al.'s work by broadening the sample universe used in the analysis: they analyzed professors with at least 10 rating posts from the 25 universities with the most student posts overall, whereas we used a pseudorandom method to pick our ratings for analysis from RateMyProfessor.com.

STUDENT EVALUATION DATA SOURCES

While online ratings may provide valuable insights for faculty, it must be recognized that evaluations from online faculty rating sources may differ from traditional faculty evaluations collected at the end of the semester. Additionally, there are issues related to the validity of online ratings. These differences and issues (as they relate specifically to RateMyProfessor.com) are discussed below.

Sample Representation

There are few controls over who can rate a professor online. According to the RateMyProfessor.com help page, "You are not required to log in or enter your name or email address." Thus, theoretically anyone can rate a professor: a non-student, such as an angry spouse, or even the professor himself, might enter a rating, or multiple ratings, in order to skew the results. RateMyProfessor.com acknowledges this issue, stating, "Remember, we have no way of knowing who is doing the rating - students, the teacher, other teachers, parents, dogs, cats, etc." This is quite different from traditional faculty evaluations, which are completed in the classroom and controlled as to who can fill them out. Additionally, since students are asked to fill out traditional faculty evaluations in the classroom, and since many students may not be familiar with online rating services, formal evaluations are more likely to have larger and more complete samples.

Sample Bias

By definition, online raters are a self-selected sample. The raters must know about the web site, have access to a Web browser, and take the time and effort to access and fill out an online rating form. This contrasts with traditional classroom rating forms, which are handed out in class, with class time usually given to fill them out.

Results Availability

A significant advantage (or perhaps disadvantage) of online rating sites is that the results are instantly and publicly available at any time to anyone with access to a browser. This is not typically the case with traditional faculty evaluation results, which are usually available only to the rated faculty member and his/her supervisors.

Rating Criteria and Student Comments

According to RateMyProfessor.com, "The purpose of the site is to be a resource for students. Where else can you find out what others think of an instructor?" Thus, comments and rating criteria are focused on helping students make informed decisions concerning faculty. Traditional evaluation ratings, on the other hand, are designed to help faculty improve their performance.

OBJECTIVE, APPROACH, AND RESULTS

For our research, we analyzed rating information from RateMyProfessor.com because it is the largest and most well known of the online faculty rating sites. Descriptions of the rating categories are provided in Table 1, where a rating of 1 is worst and a rating of 5 is best.

Table 1: RateMyProfessor.com Rating Category Descriptions

Easiness: This is definitely the most controversial of the three rating categories, which is why it is NOT included in the "Overall Quality" rating. Although we do not necessarily condone it, it is certainly true that many students decide what class to take based on the difficulty of the teacher. When rating a teacher's easiness, ask yourself "How easy are the classes that this professor teaches? Is it possible to get an A without too much work?"

Helpfulness: This category rates the professor's helpfulness and approachability. Is the professor approachable and nice? Is the professor rude, arrogant, or just plain mean? Is the professor willing to help you after class?

Clarity: This is the most important of the three categories, at least to many people. How well does the professor convey the class topics? Is the professor clear in his presentation? Is the professor organized, and does the professor use class time effectively?

Overall Quality: The Overall Quality rating is the average of a teacher's Helpfulness and Clarity ratings, and is what determines the type of "smiley face" that the professor receives. Due to popular demand, a teacher's Easiness rating is NOT used when computing the Overall Quality rating, since an Easiness of 5 may actually mean the teacher is TOO easy.

Source: http://www.ratemyprofessor.com

Our analysis objective is to test the following hypotheses:

Hypothesis 1: Clarity and helpfulness will be positively correlated.
Rationale: If a professor is clear but unhelpful, his/her helpfulness rating will not suffer as much as it might otherwise, because clear professors do not need to be as helpful. If a professor is unclear but helpful, then the helpfulness of the professor will improve his/her clarity rating, because the professor works with the student to make the material clear. Thus, a high clarity rating will pull up a low helpfulness rating, and a high helpfulness rating will pull up a low clarity rating.

Hypothesis 2: Easiness will be correlated with clarity.
Rationale: If a professor is clear, then this makes a course easier on the student.

Hypothesis 3: Easiness will be correlated with helpfulness.
Rationale: If a professor is willing to help a student understand the material, the course will be perceived as easier than when a professor is unwilling to be helpful.

Hypothesis 4: Clarity will be more highly correlated with easiness than helpfulness is.
Rationale: If a student requires a professor to be helpful, then this is more work than if the professor is clear in the first place, as summarized in Table 2 below.

Table 2: Rationale for Hypothesis 4

Clear       Helpful       Easiest for student
Clear       Not helpful   Easier for student
Not clear   Helpful       Harder for student
Not clear   Not helpful   Hardest for student

We sampled 85 ratings from the web site using a two-step pseudorandom selection technique. That is, we first randomly selected a school from those available on RateMyProfessor.com. We then randomly selected a professor from the selected school and used his/her average rating values. Descriptive statistics of the samples for the three rating components (Easiness, Helpfulness, and Clarity) are provided in Table 3.

Table 3: Descriptive Sample Statistics

                           Average Easiness   Average Helpfulness   Average Clarity
Mean                       3.2                3.7                   3.76
Standard Error             0.13               0.16                  0.15
Median                     3                  4                     4
Mode                       3                  5                     5
Standard Deviation         1.16               1.46                  1.39
Sample Variance            1.34               2.1                   1.92
Kurtosis                   -0.63              -0.70                 -0.63
Skewness                   -0.09              -0.84                 -0.82
Range                      4                  4                     4
Minimum                    1                  1                     1
Maximum                    5                  5                     5
Sum                        274.7              316.9                 319.5
Count                      85                 85                    85
Largest(1)                 5                  5                     5
Smallest(1)                1                  1                     1
Confidence Level (95.0%)   0.25               0.32                  0.30

We then subjected the sample data (N = 85) to correlation analysis, as summarized in Table 4. The r values associating Easiness and Helpfulness (.45), Easiness and Clarity (.53), and Helpfulness and Clarity (.88) all exceed the critical value of 0.217 (alpha = .05). Thus, hypotheses 1 through 3 are accepted.

Table 4: Component Rating Correlations

                      Average Easiness   Average Helpfulness   Average Clarity
Average Easiness      1
Average Helpfulness   0.45               1
Average Clarity       0.53               0.88                  1

To test hypothesis 4, that the correlation between clarity and easiness is higher than that between helpfulness and easiness, we first transformed the correlations using the Fisher r-to-z transformation and then tested for a significant difference between the z values (Hays, 1994, p. 849). The t-test shows that hypothesis 4 is not supported (t = 0.95, p > 0.10).
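The raw 85-professor sample is not published with the article, so the Python sketch below only illustrates the two statistical steps just described: the critical value of r for N = 85 and a Fisher r-to-z comparison of two correlations. The variable names are hypothetical, and the independent-samples form of the z comparison is an assumption; the authors report a t statistic, so their exact procedure may have differed.

import numpy as np
from scipy import stats

def critical_r(n, alpha=0.05):
    """Two-tailed critical value of Pearson's r for a sample of size n."""
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return t_crit / np.sqrt(t_crit ** 2 + n - 2)

def fisher_z_difference(r1, r2, n1, n2):
    """Compare two correlations via the Fisher r-to-z transformation.
    Assumes two independent samples; the article compares correlations that
    share the Easiness variable, so this is only an approximation of its test."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))   # two-tailed p-value
    return z, p

# easiness, helpfulness, clarity would each be a length-85 array of average
# ratings, e.g. r_ec = stats.pearsonr(easiness, clarity)[0]
print(round(critical_r(85), 3))                  # about 0.21, near the 0.217 cutoff cited above
print(fisher_z_difference(0.53, 0.45, 85, 85))   # r(clarity, easiness) vs. r(helpfulness, easiness)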

IMPLICATIONS FOR PRACTICE

From our analysis, we can see that there is a relationship between each pair of the easiness, helpfulness, and clarity factors. The results raise a number of possible implications for practice.

Clarity and helpfulness are strongly correlated. This finding is consistent with a large number of studies showing that clarity and helpfulness are associated with student learning, and it may mean that online ratings do reflect student learning. Students may be rating professors because they learned a lot from them, or did not, and they want to spread the word about the professor. To investigate this possibility, we are developing a series of focus groups to determine why students post ratings on RateMyProfessor.com.

Our results show that easiness is related to both clarity and helpfulness. This agrees with Felton et al.'s (2004) finding that easiness on RateMyProfessor.com is related to overall quality (which is the average of the two rating components). It also agrees with Wilson (1998), who found that faculty who were easier graders received better evaluations. The positive relationships between easiness and the other factors suggest that students may find that clear professors make a course easier and/or that a professor willing to help them makes the course easier. Alternatively, the results could suggest that easier courses are, by their nature, simpler to understand and thus clearer, and that any help given by a professor produces results faster (because the material is more straightforward).

The results do not suggest that easiness is more related to clarity than to helpfulness: both clarity and helpfulness are related to easiness, with no significant difference. It appears to be easier for students if the professor is clear up front as well as helpful later to make things clearer. This finding implies that professors have a variety of methods for helping their students: they could work more on being initially clear, and/or they could be more helpful.

REFERENCES

1. Centra, J. A. (2003). Will teachers receive higher student evaluations by giving higher grades and less course work? Research in Higher Education, 44(5), 495-518.
2. Felton, J., Mitchell, J., & Stinson, M. (2004). Web-based student evaluations of professors: The relations between perceived quality, easiness and sexiness. Assessment & Evaluation in Higher Education, 29(1), 91-108.
3. Foster, A. (2003). Does the popular online service help students find good professors, or just easy A's? Chronicle of Higher Education, 49.
4. Greenwald, A. G., & Gillmore, G. M. (1997). Grading leniency is a removable contaminant of student ratings. American Psychologist, 52, 1209-1217.
5. Hays, W. L. (1994). Statistics (5th ed.). Florida: Harcourt Brace & Company.
6. Liaw, S., & Goh, K. (2003). Evidence and control of biases in student evaluations of teaching. The International Journal of Educational Management, 17(1), 37-43.
7. Martin, J. R. (1998). Evaluating faculty based on student opinions: Problems, implications, and recommendations from Deming's theory of management perspective. Issues in Accounting Education, November, 1079-1094.
8. McKeachie, W. J. (1997). Student ratings: The validity of use. American Psychologist, 52, 1218-1225.
9. Read, W., Rama, D., & Raghunandan, K. (2001). The relationship between student evaluations of teaching and faculty evaluations. Journal of Education for Business, 76(4), 189-193.
10. RateMyProfessor Statistics. (n.d.). Retrieved December 13, 2004, from http://www.ratemyprofessor.com/index.jsp.
11. Stone, M. (2003). EC professors fair (somewhat) well on ratemyprofessors.com. The Leader, March 18, 2003. Available from http://www.elmhurst.edu/~leader/archive/2003/cultue_03_18/ratemyproffessor.htm.
12. Wallace, J. J., & Wallace, W. A. (1998). Why the costs of student evaluations have long since exceeded their value. Issues in Accounting Education, May, 443-448.
13. Wilson, R. (1998). New research casts doubt on value of student evaluations of professors. Chronicle of Higher Education, 44(19), A12-A14.
