Teacher Course Evaluations and Student Grades: An Academic Tango
Does the interplay of grading policies and teacher-course evaluations undermine our education system?

Valen E. Johnson

During the past decade or so, many colleges have placed an increasing emphasis on teaching effectiveness in faculty promotion, tenure, and salary reviews. In most cases, the mechanism used to measure teaching effectiveness is a locally developed evaluation form that is completed by an instructor's students toward the end of the course, usually before students have received their final course grades. The practice of using student evaluations of teaching (SETs) to evaluate faculty teaching effectiveness raises a number of concerns, including the basic validity of these forms and their sensitivity to external biases. The question of validity involves the extent to which SETs (or items on these forms) accurately predict student learning. Questions of bias involve the possibility that student responses are influenced by factors unrelated to the faculty member's instructional effectiveness. The topic of this article is the biasing effect that faculty grading practices have on SETs. A broader discussion of this and related issues may be found in my book The GPA Myth, from which most of the following analyses are drawn.

Both the validity of SETs and potential biases to SETs have been discussed extensively in the educational literature. A simple search of the ERIC database produces thousands of articles concerning various aspects of SETs, and Greenwald summarizes more than 170 studies that examined the specific issue of whether SETs represent valid measures of student learning. Clearly, a comprehensive review of this literature is not possible here, and so I will simply summarize the current state of knowledge concerning the relation between grading practices and student evaluations of teaching by asserting that it is rather confused.
Much of this confusion arises from the fact that it is difficult to conduct experiments to investigate this relationship because of constraints involving human subjects. Those experiments that have been conducted tend to confirm the notion that higher student grades cause higher teacher-course evaluations, but over the past two or three decades several other explanations for the correlation between grades and SETs have been proposed. Among these are the teacher-effectiveness theory, under which observed correlations between student grades and favorable SETs are attributed to quality teaching (i.e., better teaching leads to both higher grades and higher SETs), and theories based on the intervention of unobserved student or classroom characteristics. For example, one might expect that highly motivated students enrolled in small, upper-level classes learn more, leading to both higher grades and more appreciation of a teacher's efforts.

Survey Instrument

Listed below is a subset of the items that appeared on the DUET Web site.

- The instructor's concern for the progress of individual students was: 1) Very Poor 2) Poor 3) Fair 4) Good 5) Very Good 6) Excellent 7) Not Applicable
- How effective was the instructor in encouraging students to ask questions and express their viewpoints? 1) Very Poor 2) Poor 3) Fair 4) Good 5) Very Good 6) Excellent 7) Not Applicable
- How would you rate this instructor's enthusiasm in teaching this course? 1) Very Bad 2) Bad 3) Fair 4) Good 5) Very Good 6) Excellent 7) Not Applicable
- How easy was it to meet with the instructor outside of class? 1) Very Difficult 2) Difficult 3) Not Hard 4) Easy 5) Very Easy 6) Not Applicable
- How does this instructor(s) compare to all instructors that you have had at Duke? 1) Very Bad 2) Bad 3) Fair 4) Good 5) Very Good 6) Excellent 7) Not Applicable
- How good was the instructor(s) at communicating course material? 1) Very Bad 2) Bad 3) Fair 4) Good 5) Very Good 6) Excellent 7) Not Applicable
- To what extent did this instructor demand critical or original thinking? 1) Never 2) Seldom 3) Sometimes 4) Often 5) Always 6) Not Applicable
- How valuable was feedback on examinations and graded materials? 1) Very Poor 2) Poor 3) Fair 4) Good 5) Very Good 6) Excellent 7) Not Applicable
- How good was the instructor at relating course material to current research in the field? 1) Very Bad 2) Bad 3) Fair 4) Good 5) Very Good 6) Excellent 7) Not Applicable
Understanding the relationship between grades and SETs is made still more difficult by the fact that many educational researchers have been hesitant to acknowledge the biasing effects of grades on SETs because much of their research relies heavily on the use of student evaluations of teaching. Clearly, biases to teacher-course evaluation forms caused by student grades undermine the validity of research based on these forms. Readers interested in a more thorough discussion of these issues and an entry into this literature may refer to the references provided at the end of this article.

In this article, the causal effect of grades on student evaluations of teaching is investigated by examining new evidence collected in an online course evaluation experiment conducted at Duke University. In this experiment, changes to SETs were recorded as Duke University freshmen completed a teacher-course evaluation form both before and after they had received their final course grades. By accounting for the grades this cohort of students expected to receive in their initial survey responses, it is possible to separate the effects of grades on student evaluations of teaching from other intervening variables.

Experimental Design

The data used in this investigation were collected during an experiment conducted at Duke University during the fall semester of 1998 and the spring semester of 1999. The experiment, called DUET (Duke Undergraduates Evaluate Teaching), utilized a commercially operated Web site to collect student opinions on the courses they either had taken or were taking. The DUET Web site was activated for two 3-week periods, the first beginning the week prior to fall registration in 1998 and the second a week prior to spring registration in 1999. Both periods fell approximately 10 weeks into their respective semesters.
The survey was conducted immediately before and during registration as an incentive for students to participate; by completing the DUET survey prior to registration, students could view course evaluation data and grade data for courses they planned to take the following semester. An important aspect of the experimental design involved the manner in which data were collected for first-year students. Participating first-year students completed the survey for their fall courses twice: once before completing their fall courses and receiving their final grades, and once after. Because one DUET survey item asked students what grade they had received or expected to receive in each of their courses, the responses collected from freshmen at the two time points provide an ideal mechanism for investigating the influence that expected and received grades have on student evaluations of teaching.

Upon entering the DUET Web site, students were initially confronted with text informing them that course evaluation data collected on the site would be used as part of a study investigating the feasibility of collecting course evaluations on the Web. They were also told that their responses would not be accessible to their instructors, faculty not associated with the experiment, other students, or university administrators. After students indicated their consent to participate in the study, items on the survey were presented to them in groups of five to seven questions. With only one or two exceptions, all items included a Not Applicable response. Thirty-eight items were included on the survey, but, because of space constraints, attention here is restricted to nine questions that relate to student-instructor interaction. These items and the possible student responses appear in the accompanying sidebar.

During the two semesters that the DUET Web site was active, 11,521 complete course evaluations were recorded. Each evaluation consisted of 38 item responses for a single course. Of the 6,471 eligible full-time, degree-seeking students who matriculated at Duke University after 1995, 1,894 students (29 percent) participated in the experiment. Among first-year students, approximately one-half participated in both the fall and the spring surveys. A detailed investigation of the relationship between various demographic variables and the response rates of the corresponding groups is provided in The GPA Myth. That analysis suggests that response patterns were not substantially affected by the variables most associated with differential response rates. In particular, although response rates did vary according to students' gender, ethnicity, academic year, academic major, and GPA, student response patterns did not vary substantially across these categories.
Statistical Analysis

Because student responses to the DUET survey take the form of ordinal data (that is, responses fall into ordered categories), it is natural to estimate the effects of received and expected grades on SETs using ordinal regression models. To this end, I assume the existence of an unobserved, or latent, variable that underlies each student's perception of an item. As the value of this latent variable increases, so too does the probability that the student rates the item in one of the higher categories. This concept is illustrated in Figure 1. The curve at the top of the figure represents the distribution of the latent variable for an individual who is likely to strongly disagree or disagree with a statement. Individuals whose latent variables are drawn from the distribution in the middle of the figure are more likely to respond in a neutral way, while individuals whose latent distributions are depicted at the bottom of the figure are likely to agree or strongly agree.

[Figure 1. Latent variable distributions. The three curves represent hypothetical density functions (shifted vertically to facilitate display on the same horizontal axis) from which the latent variables underlying the generation of ordinal data are assumed to be drawn. The horizontal axis spans the response categories strongly disagree, disagree, neutral, agree, and strongly agree, separated by vertical dashed category cutoffs.]

Statistical issues that arise in analyzing ordinal data using latent variable models involve establishing a scale of measurement (i.e., establishing the location of 0 on the horizontal axis in Figure 1), estimating the locations of the category cutoffs (the vertical dashed lines in Figure 1 that separate the response categories), and, most important, modeling
the effects that explanatory variables have in shifting the distributions of the latent variables.

[Table 1. Estimated Effects of Expected Grades on DUET Items. For each of the nine items, posterior means of the expected-grade coefficients E_D through E_A+ are listed, with posterior standard deviations in parentheses.]

The first problem, establishing a scale of measurement, can be solved by setting the shape of the curves used to describe the distributions of the latent variables and by pinning down the value of one of the category cutoffs. For this purpose, in the analyses that follow, the shape of these distributions is assumed to be a standard logistic density function, and the value of the lowest category cutoff is fixed at 0. To model the location of the logistic curve that determines the probability that a particular student responds in a certain way on a given DUET item (i.e., to model the location of the center of the curves depicted in Figure 1), assume that for each DUET item and student there is a value, say c_i, that represents the ith student's perception of the given DUET item for the course in question, excluding influences of expected or received grade.
That is, c_i is defined to represent the central value of student i's latent perception distribution for that DUET item when no consideration is given to expected or received grade. For brevity, call this value the course effect. For DUET data collected in the fall of 1998, the effects of expected grade on students' perceptions of items are modeled by assuming that the influence of expecting a grade of, say, g on the ith student's perception of a given DUET item is to shift that student's latent perception distribution by an amount E_g. Students' grade expectations were defined using their responses to one of the items on the DUET survey. Available responses for this item were A+, A, ..., F, but, because of the paucity of responses in categories lower than C-, values of D+, D, D-, and F were pooled together. The central value of the curve describing the ith student's perception of a DUET item, including possible influences of expected grade, was thus assumed to be c_i + E_g.

Assumptions made for data collected in the spring of 1999 are similar. As in the fall, c_i denotes the central value of the ith student's latent perception distribution for a given DUET item, where, as before, influences of expected or received grades are excluded. The relative values of c_i are assumed to be constant across semesters for the same student/DUET-item combinations. An important difference between the data collected in the fall and spring is that students who completed the survey in the spring had, by that time, received their actual course grades for their fall courses. For this reason, modifications to students' perceptions of their fall courses during the spring survey period are attributed to received grade rather than expected grade. Let R_g denote the influence that receiving a grade of g has on a student's perception of a course attribute in the spring, where g is coded as before but now represents the grade actually received in the course.
With this notation in hand, the center of each student's latent distribution in the spring was assumed to be c_i + R_g. Finally, distinct category cutoffs were defined separately for the fall and spring data. By so doing, systematic changes in student perceptions of classes over time were naturally integrated into the probability model for the student responses.
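The model just described can be sketched numerically. The sketch below is illustrative only: the function names and every parameter value (c_i, E_B, R_A, and the cutoffs) are my own hypothetical choices, not the DUET estimates. It computes category probabilities for one student/item pair in the fall (center c_i + E_g) and in the spring (center c_i + R_g), with separate cutoffs per semester and the lowest cutoff fixed at 0, as in the text.

```python
import math

def prob_above(gamma_m, center):
    """P(response > category m) for a standard-logistic latent variable,
    using the article's parameterization:
    Pr(>m) / Pr(<=m) = exp(gamma_m + center)."""
    odds = math.exp(gamma_m + center)
    return odds / (1.0 + odds)

def category_probs(gammas, center):
    """Probabilities of each ordered response category.
    `gammas` must be decreasing so cumulative probabilities increase."""
    above = [prob_above(g, center) for g in gammas]
    probs = [1.0 - above[0]]                                 # lowest category
    probs += [above[i] - above[i + 1] for i in range(len(above) - 1)]
    probs.append(above[-1])                                  # highest category
    return probs

c_i = 0.4                                # hypothetical course effect (shared)
E_B, R_A = 0.8, 1.9                      # hypothetical grade effects
gammas_fall = [0.0, -1.2, -2.4, -3.5]    # fall cutoffs, lowest fixed at 0
gammas_spring = [0.0, -1.3, -2.5, -3.7]  # spring cutoffs estimated separately

fall = category_probs(gammas_fall, c_i + E_B)      # expecting a B in the fall
spring = category_probs(gammas_spring, c_i + R_A)  # having received an A
```

Because the same c_i enters both semesters, contrasting a student's fall and spring responses isolates the grade effects; that is the identification idea behind the paired design.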
[Table 2. Estimated Effects of Received Grades on DUET Items. For each of the nine items, posterior means of the received-grade coefficients R_D through R_A+ are listed, with posterior standard deviations in parentheses.]

To summarize this model for the paired student responses, we have assumed that student responses to each DUET survey item depend, first, on an underlying perceptual variable that determines each student's general impression of the course attribute in question and, second, on the student's expected or received grade in that course. A possible drawback of this formulation is that, at first glance, there appear to be almost as many parameters in the model as there are paired student responses to the survey. Indeed, if only the fall or the spring data were examined separately, there would be more parameters in the model than observations in the data, since each student response to each DUET item for each course invokes its own value of c_i. However, with two semesters of data, there were two observations of each student's response to each DUET item for each course taken, and only one value of c_i to estimate for each such pair.
For purposes of estimating the remaining model parameters, this leaves a surplus of observations equal in number to the response pairs. Because more than 2,500 paired responses were obtained for each item, ample data were available for estimating the fixed effects of grades. An advantage of estimating c_i for each paired student response is that it permits the effects of grades on student evaluations of teaching to be separated from course attributes and other student background variables. Presumably, prior student interest, student motivation, academic prowess, and the intellectual level at which an instructor pitched his or her lectures are all absorbed in this variable. The consequence of fitting a random course effect for each paired response is that estimates of grade effects are based only on the changes to survey responses that accompany expected and received grades.

Posterior means of the expected-grade and received-grade parameters estimated from this model, obtained using estimation procedures described in Ordinal Data Modeling, are provided in Tables 1 and 2. Posterior standard deviations are indicated in parentheses below each estimate.

The coefficients in Tables 1 and 2 can most easily be interpreted in terms of odds and odds ratios. Because the latent student perception variables were assumed to have standard logistic distributions, the ratio of the probability that a student rated an item above a given category, say m, to the probability that he or she rated it less than or equal to category m in the fall 1998 survey can be expressed as

Pr(response > m) / Pr(response ≤ m) = exp(γ_m + c_i + E_g).

In this expression, γ_m denotes the category cutoff for category m (illustrated in stylized form by a vertical dashed line in Figure 1), and E_g denotes the coefficient listed for an expected grade of g in Table 1. This ratio of probabilities is called the odds.
The corresponding odds that a student responded in category m or higher during the spring 1999 survey are given by

Pr(response > m) / Pr(response ≤ m) = exp(γ_m + c_i + R_g),
[Figure 2. Comparative effects of expected grade on student evaluations of nine teaching items: instructor concern, encouraged questions, instructor enthusiasm, instructor availability, instructor rating, instructor communication, critical thinking, usefulness of exams, and relating the course to research. Note that only differences in estimated coefficients are relevant, since the origin of measurement was determined arbitrarily by assigning the first category cutoff a value of 0.]

where R_g is the effect of a received grade of g, as listed in Table 2. Such odds depend on the specific course effect parameter c_i and the category cutoff parameter γ_m. However, the impact of grades on student responses to the DUET items can be examined more directly by calculating the odds ratios for different received or expected grades. The odds ratios do not depend on either the course effects c_i or the category thresholds γ_m. For example, suppose interest focuses on the differential effect that a student expectation of a grade of A versus a grade of B has on a student's response to the question regarding instructor concern for students. Then the ratio of the odds that a student would respond in category m or higher when expecting an A to the odds that a student responds in category m or higher when expecting a grade of B is

[Pr(response > m | expecting A) / Pr(response ≤ m | expecting A)] ÷ [Pr(response > m | expecting B) / Pr(response ≤ m | expecting B)]
= exp(γ_m + c_i + E_A) / exp(γ_m + c_i + E_B)
= exp(E_A − E_B) = 1.75.

Thus, the ratio of the odds that a student expecting an A rates this item higher than (any) category m to the same odds for a student expecting a B is 1.75. This illustration can be extended directly to probabilities of response. For example, suppose that there is a 30 percent probability that a student who expects a B in a course rates instructor concern as Good or higher (corresponding to odds of 3:7).
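The odds-to-probability conversion in this example can be checked with a short computation. The function name is mine; the 1.75 odds ratio is the expected-grade effect for the instructor-concern item implied by the example in the text.

```python
import math

def shift_probability(p_base, odds_ratio):
    """Convert a baseline probability into the probability obtained after
    multiplying the odds by `odds_ratio`. In the cumulative-logit model,
    the course effect c_i and the cutoff gamma_m cancel in the ratio, so
    the odds ratio alone determines the change."""
    odds = p_base / (1.0 - p_base)      # e.g., 0.30 -> odds of 3:7
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)

# Expecting an A instead of a B multiplies the odds by exp(E_A - E_B) = 1.75:
p_A = shift_probability(0.30, 1.75)     # approximately 0.43
```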
Then the same student, if he or she had expected an A rather than a B, would, on average, rate the same item as Good or higher with probability 43 percent. Clearly, this is a very substantial effect. For the spring data, received rather than expected grades are relevant. Again considering the item involving instructor concern, the corresponding odds ratio is exp(R_A − R_B), which exceeds 3. In other words, the odds that a student gives an instructor a high rating on concern are increased by a factor of more than 3 if the student receives an A instead of a B!

Figure 2 provides an illustration of the coefficients listed in Table 1. In the plots depicted in this figure, the posterior means of the expected-grade coefficients for the nine DUET items listed in the sidebar are displayed as a function of increasing expected grade. One striking aspect of the plots in Figure 2 is the nonlinearity of the trend at the lowest grade levels. Apparently, students who expect to receive a grade
of D+ or lower are less likely to attribute their poor performance to the instructor than are students who receive grades in the C range. A plausible explanation for this phenomenon, which manifests itself for most of the items in the other groups as well, is that students who expect to receive grades of D+ or lower have not adequately invested themselves in their courses. Keeping in mind that fewer than 3 percent of all grades assigned to first-year students at Duke are lower than a C-, and that essentially any level of student commitment would earn a grade of at least a C- in a vast majority of Duke courses, it is likely that students who expect such low grades recognize that their poor performance is due to a lack of effort. In other words, students may be willing to attribute poor performance to themselves if that performance results from a lack of effort rather than a lack of ability.

[Figure 3. Comparative effects of received grade on student evaluations of the same nine teaching items.]

Even more striking than the nonlinearity of the trend in the grade effects at the lowest grade levels is the monotonicity of this trend at more typical grade levels. For eight of the nine questions in this group, the tendency of students to rate more highly those courses in which they expected higher grades is nearly uniform. The exception occurs for the item involving the extent to which the instructor required critical thinking, which levels out above grades of B.
Because this is the only question in this group that tangentially involves course difficulty, the explanation for this deviation might involve a balancing of the presumably negative effect of using stringent grading as a tool to require critical thinking against the positive, attributional effect of receiving higher marks.

Plots comparable to those in Figure 2, but based on received grade, are displayed in Figure 3. Qualitatively, the two sets of plots are similar. One feature is, however, more pronounced in the plots based on received grade: the difference in student evaluations between students receiving a C+ and a B-. For many items, this difference produces a notable spike in the otherwise dominant trend. This spike is likely caused by the relatively large perceptual difference among undergraduates between the value of a C grade and a B grade.

The tendency of students to rate items more highly upon receipt of a higher grade cannot easily be explained by the prevalent theories that have been proposed to explain observed correlations between grades and student evaluations of teaching. The experimental design used to measure these trends essentially eliminates the possibility that these effects were caused by teacher effectiveness or most intervening variables. The simplest and most reasonable explanation for the trends reported in Figures 2 and 3 is that grade expectations or received grades caused a change in the way students perceived their teacher and the quality of instruction. The extent to which these data support alternative explanations for the
correlations of grades and SETs is discussed in The GPA Myth.

Summary

Data collected during the DUET experiment provide a unique glimpse into the causal effect of student grades on teacher-course evaluations. Although any number of extraneous factors may act to influence both grades and SETs, the analysis of the DUET data suggests that there is a direct effect of grading policy on SETs beyond what can be explained by such extraneous factors. A unique feature of this analysis was that two responses were obtained for each student-course combination. The first was obtained while students were still taking their courses, before they had received their final course grades. The second was collected after students had completed their courses and received their final course grades. By contrasting the two responses obtained from each student, highly significant, substantively important effects of student grades were found on all of the DUET items. These findings corroborate the findings of earlier grade-manipulation studies and a preponderance of correlational studies. Because the design of the DUET experiment effectively eliminated the possibility that unobserved environmental factors were responsible for these effects, the results of this analysis provide conclusive evidence of a biasing effect of student grades on student evaluations of teaching. From a policy viewpoint, the findings of this study are important.
As an increasing number of universities use student evaluations of teaching in administrative decisions that affect the careers of their faculty, the incentives for faculty to manipulate their grading policies in order to enhance their evaluations increase. Because grading policies affect student enrollment decisions and the amount students learn in their courses, the ultimate consequence of such manipulation is a degradation of the quality of education in the United States.

References and Further Reading

Costin, F., Greenough, W., and Menges, R. (1971), "Student Ratings of College Teaching: Reliability, Validity, and Usefulness," Review of Educational Research, 41(5).

Feldman, K. (1976), "Grades and College Students' Evaluations of Their Courses and Teachers," Research in Higher Education, 4.

Feldman, K. (1977), "Consistency and Variability Among College Students in Rating Their Teachers and Courses: A Review and Analysis," Research in Higher Education, 6.

Frey, P., Leonard, D., and Beatty, W. (1975), "Student Ratings of Instruction: Validation Research," American Educational Research Journal, 12.

Greenwald, A., and Gillmore, G. (1997), "Grading Leniency Is a Removable Contaminant of Student Ratings," American Psychologist, 52.

Johnson, V. (1999), Ordinal Data Modeling, New York: Springer-Verlag.

Johnson, V. (forthcoming), College Grading: A National Crisis in Undergraduate Education, New York: Springer-Verlag.

Marsh, H. (1987), "Students' Evaluations of University Teaching: Research Findings, Methodological Issues, and Directions for Future Research," International Journal of Educational Research, 11.
Student Performance in Traditional vs. Online Format: Evidence from an MBA Level Introductory Economics Class
University of Connecticut DigitalCommons@UConn Economics Working Papers Department of Economics 3-1-2007 Student Performance in Traditional vs. Online Format: Evidence from an MBA Level Introductory Economics
ACT CAAP Critical Thinking Test Results 2007 2008
ACT CAAP Critical Thinking Test Results 2007 2008 Executive Summary The ACT Collegiate Assessment of Academic Proficiency (CAAP) Critical Thinking Test was administered in fall 2007 to 200 seniors and
Creating Quality Developmental Education
***Draft*** Creating Quality Developmental Education A Guide to the Top Ten Actions Community College Administrators Can Take to Improve Developmental Education Submitted to House Appropriations Subcommittee
Virtual Teaching in Higher Education: The New Intellectual Superhighway or Just Another Traffic Jam?
Virtual Teaching in Higher Education: The New Intellectual Superhighway or Just Another Traffic Jam? Jerald G. Schutte California State University, Northridge email - [email protected] Abstract An experimental
Mathematics Placement And Student Success: The Transition From High School To College Mathematics
Mathematics Placement And Student Success: The Transition From High School To College Mathematics David Boyles, Chris Frayer, Leonida Ljumanovic, and James Swenson University of Wisconsin-Platteville Abstract
State Grants and the Impact on Student Outcomes
Page 1 of 9 TOPIC: PREPARED BY: FINANCIAL AID ALLOCATION DISCUSSION CELINA DURAN, FINANCIAL AID DIRECTOR I. SUMMARY At its September meeting, the Commission agreed to revisit the financial allocation method
The Success and Satisfaction of Community College Transfer Students at UC Berkeley
The Success and Satisfaction of Community College Transfer Students at UC Berkeley G R E G G T H O M S O N, E X E C U T I V E D I R E C T O R S E R E E T A A L E X A N D E R, P H. D., R E S E A R C H A
Assessing the Impact of a Tablet-PC-based Classroom Interaction System
STo appear in Proceedings of Workshop on the Impact of Pen-Based Technology on Education (WIPTE) 2008. Assessing the Impact of a Tablet-PC-based Classroom Interaction System Kimberle Koile David Singer
Statistics 151 Practice Midterm 1 Mike Kowalski
Statistics 151 Practice Midterm 1 Mike Kowalski Statistics 151 Practice Midterm 1 Multiple Choice (50 minutes) Instructions: 1. This is a closed book exam. 2. You may use the STAT 151 formula sheets and
The North Carolina Recent Graduate Survey Report 2012-2013
The North Carolina Recent Graduate Survey Report 2012-2013 University of North Carolina Asheville Education Policy Initiative at Carolina April 2014 A.1 One Two Three Four Five One Two Three Four Five
The Influence of a Summer Bridge Program on College Adjustment and Success: The Importance of Early Intervention and Creating a Sense of Community
The Influence of a Summer Bridge Program on College Adjustment and Success: The Importance of Early Intervention and Creating a Sense of Community Michele J. Hansen, Ph.D., Director of Assessment, University
Evaluation in Online STEM Courses
Evaluation in Online STEM Courses Lawrence O. Flowers, PhD Assistant Professor of Microbiology James E. Raynor, Jr., PhD Associate Professor of Cellular and Molecular Biology Erin N. White, PhD Assistant
Response Rates in Online Teaching Evaluation Systems
Response Rates in Online Teaching Evaluation Systems James A. Kulik Office of Evaluations and Examinations The University of Michigan July 30, 2009 (Revised October 6, 2009) 10/6/2009 1 How do you get
Where has the Time Gone? Faculty Activities and Time Commitments in the Online Classroom
Where has the Time Gone? Faculty Activities and Time Commitments in the Online Classroom B. Jean Mandernach, Swinton Hudson, & Shanna Wise, Grand Canyon University USA Abstract While research has examined
NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( )
Chapter 340 Principal Components Regression Introduction is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates
Procrastination in Online Courses: Performance and Attitudinal Differences
Procrastination in Online Courses: Performance and Attitudinal Differences Greg C Elvers Donald J. Polzella Ken Graetz University of Dayton This study investigated the relation between dilatory behaviors
Nominal and ordinal logistic regression
Nominal and ordinal logistic regression April 26 Nominal and ordinal logistic regression Our goal for today is to briefly go over ways to extend the logistic regression model to the case where the outcome
Attrition in Online and Campus Degree Programs
Attrition in Online and Campus Degree Programs Belinda Patterson East Carolina University [email protected] Cheryl McFadden East Carolina University [email protected] Abstract The purpose of this study
Comparison of Student Performance in an Online with traditional Based Entry Level Engineering Course
Comparison of Student Performance in an Online with traditional Based Entry Level Engineering Course Ismail I. Orabi, Ph.D. Professor of Mechanical Engineering School of Engineering and Applied Sciences
ONE-YEAR MASTER OF PUBLIC HEALTH DEGREE PROGRAM IN EPIDEMIOLOGY
ONE-YEAR MASTER OF PUBLIC HEALTH DEGREE PROGRAM IN EPIDEMIOLOGY The one-year MPH program in Epidemiology requires at least 42 units of course work, including selected required courses and seminars in epidemiology
UMEÅ INTERNATIONAL SCHOOL
UMEÅ INTERNATIONAL SCHOOL OF PUBLIC HEALTH Master Programme in Public Health - Programme and Courses Academic year 2015-2016 Public Health and Clinical Medicine Umeå International School of Public Health
Example G Cost of construction of nuclear power plants
1 Example G Cost of construction of nuclear power plants Description of data Table G.1 gives data, reproduced by permission of the Rand Corporation, from a report (Mooz, 1978) on 32 light water reactor
Interpretation Guide. For. Student Opinion of Teaching Effectiveness (SOTE) Results
Interpretation Guide For Student Opinion of Teaching Effectiveness (SOTE) Results Prepared by: Office of Institutional Research and Student Evaluation Review Board May 2011 Table of Contents Introduction...
Faculty Evaluation and Performance Compensation System Version 3. Revised December 2004
Faculty Evaluation and Performance Compensation System Version 3 Revised December 2004 2 SUMMARY OF MAJOR CHANGES FROM EVALUATION SYSTEM, VERSION 1, 2003-2004, TO EVALUATION SYSTEM, VERSION 2, 2004-2005
Realizeit at the University of Central Florida
Realizeit at the University of Central Florida Results from initial trials of Realizeit at the University of Central Florida, Fall 2014 1 Based on the research of: Dr. Charles D. Dziuban, Director [email protected]
Unit Plan for Assessing and Improving Student Learning in Degree Programs. Unit: Social Work Date: May 15, 2008 Unit Head Approval:
Unit Plan for Assessing and Improving Student Learning in Degree Programs Unit: Social Work Date: May 15, 2008 Unit Head Approval: Section 1: Past Assessment Results MSW Program The School of Social Work
The Opportunity Cost of Study Abroad Programs: An Economics-Based Analysis
Frontiers: The Interdisciplinary Journal of Study Abroad The Opportunity Cost of Study Abroad Programs: An Economics-Based Analysis George Heitmann Muhlenberg College I. Introduction Most colleges and
Math Placement Acceleration Initiative at the City College of San Francisco Developed with San Francisco Unified School District
Youth Data Archive Issue Brief October 2012 Math Placement Acceleration Initiative at the City College of San Francisco Developed with San Francisco Unified School District Betsy Williams Background This
UNDERSTANDING THE TWO-WAY ANOVA
UNDERSTANDING THE e have seen how the one-way ANOVA can be used to compare two or more sample means in studies involving a single independent variable. This can be extended to two independent variables
Assessing Quantitative Reasoning in GE (Owens, Ladwig, and Mills)
Assessing Quantitative Reasoning in GE (Owens, Ladwig, and Mills) Introduction Many students at CSU, Chico, receive much of their college-level mathematics education from the one MATH course they complete
Strategies for Promoting Gatekeeper Course Success Among Students Needing Remediation: Research Report for the Virginia Community College System
Strategies for Promoting Gatekeeper Course Success Among Students Needing Remediation: Research Report for the Virginia Community College System Josipa Roksa Davis Jenkins Shanna Smith Jaggars Matthew
February 2003 Report No. 03-17
February 2003 Report No. 03-17 Bright Futures Contributes to Improved College Preparation, Affordability, and Enrollment at a glance Since the Bright Futures program was created in 1997, Florida s high
MASTER OF SCIENCE IN SCHOOL BUILDING LEADERSHIP
School of Education / 99 DEPARTMENT OF EDUCATIONAL LEADERSHIP MASTER OF SCIENCE IN SCHOOL BUILDING LEADERSHIP Purpose The Master of Science in School Building Leadership Program prepares practicing teachers
AN ANALYSIS OF STUDENT EVALUATIONS OF COLLEGE TEACHERS
SBAJ: 8(2): Aiken et al.: An Analysis of Student. 85 AN ANALYSIS OF STUDENT EVALUATIONS OF COLLEGE TEACHERS Milam Aiken, University of Mississippi, University, MS Del Hawley, University of Mississippi,
The Impact of Living Learning Community Participation on 1 st -Year Students GPA, Retention, and Engagement
The Impact of Living Learning Community Participation on 1 st -Year Students GPA, Retention, and Engagement Suohong Wang, Sunday Griffith, Bin Ning AIR Forum May 31, 2010 Chicago Presentation Overview
Texas Agricultural Science Teachers Attitudes toward Information Technology. Dr. Ryan Anderson Iowa State University
Texas Agricultural Science Teachers Attitudes toward Information Technology ABSTRACT Dr. Ryan Anderson Iowa State University Dr. Robert Williams Texas A&M- Commerce The researchers sought to find the Agricultural
101. General Psychology I. Credit 3 hours. A survey of the science of behavior of man and other animals, and psychology as a biosocial science.
Head of the Department: Professor Burstein Professors: Capron, McAllister, Rossano Associate Professors: Worthen Assistant Professors: Coats, Holt-Ochsner, Plunkett, Varnado-Sullivan PSYCHOLOGY (PSYC)
TEACHING PRINCIPLES OF ECONOMICS: INTERNET VS. TRADITIONAL CLASSROOM INSTRUCTION
21 TEACHING PRINCIPLES OF ECONOMICS: INTERNET VS. TRADITIONAL CLASSROOM INSTRUCTION Doris S. Bennett, Jacksonville State University Gene L. Padgham, Jacksonville State University Cynthia S. McCarty, Jacksonville
Correlation key concepts:
CORRELATION Correlation key concepts: Types of correlation Methods of studying correlation a) Scatter diagram b) Karl pearson s coefficient of correlation c) Spearman s Rank correlation coefficient d)
Introduction to industrial/organizational psychology
Introduction to industrial/organizational psychology Chapter 1 Industrial/Organizational Psychology: A branch of psychology that applies the principles of psychology to the workplace I/O psychologists
Drawing Inferences about Instructors: The Inter-Class Reliability of Student Ratings of Instruction
OEA Report 00-02 Drawing Inferences about Instructors: The Inter-Class Reliability of Student Ratings of Instruction Gerald M. Gillmore February, 2000 OVERVIEW The question addressed in this report is
Criminal Justice and Sociology
Criminal Justice and Sociology Professor Stone (chair); Lecturers Fremgen, Kaiser, Redmann, and Rummel Mission Statement The mission of the Department of Criminal Justice and Sociology at Jamestown College
Chapter 2 Conceptualizing Scientific Inquiry
Chapter 2 Conceptualizing Scientific Inquiry 2.1 Introduction In order to develop a strategy for the assessment of scientific inquiry in a laboratory setting, a theoretical construct of the components
COLLEGE OF VISUAL ARTS AND DESIGN Department of Art Education and Art History DOCTORAL PROGRAM IN ART EDUCATION PROCEDURES MANUAL
COLLEGE OF VISUAL ARTS AND DESIGN Department of Art Education and Art History DOCTORAL PROGRAM IN ART EDUCATION PROCEDURES MANUAL Revised 3/2008 HEJC MANUAL FOR DOCTORAL STUDENTS IN ART EDUCATION The information
Test-Anxiety Program and Test Gains with Nursing Classes
with Nursing Classes by Ginger Evans, MSN, APN, BC. & Gary Ramsey, DNP, RN College of Nursing, University of Tennessee Richard Driscoll, Ph.D. Westside Psychology, Knoxville Tennessee November 2010 Descriptors:
Comparison of Teaching Systems Analysis and Design Course to Graduate Online Students verses Undergraduate On-campus Students
Comparison of Teaching Systems Analysis and Design Course to Graduate Online Students verses Undergraduate On-campus Students Adeel Khalid a1* a Assistant Professor, Systems and Mechanical Engineering
Can Web Courses Replace the Classroom in Principles of Microeconomics? By Byron W. Brown and Carl E. Liedholm*
Can Web Courses Replace the Classroom in Principles of Microeconomics? By Byron W. Brown and Carl E. Liedholm* The proliferation of economics courses offered partly or completely on-line (Katz and Becker,
Attitudes Toward Science of Students Enrolled in Introductory Level Science Courses at UW-La Crosse
Attitudes Toward Science of Students Enrolled in Introductory Level Science Courses at UW-La Crosse Dana E. Craker Faculty Sponsor: Abdulaziz Elfessi, Department of Mathematics ABSTRACT Nearly fifty percent
CHARTER SCHOOL PERFORMANCE IN PENNSYLVANIA. credo.stanford.edu
CHARTER SCHOOL PERFORMANCE IN PENNSYLVANIA credo.stanford.edu April 2011 TABLE OF CONTENTS INTRODUCTION... 3 DISTRIBUTION OF CHARTER SCHOOL PERFORMANCE IN PENNSYLVANIA... 7 CHARTER SCHOOL IMPACT BY DELIVERY
Graduate Student Perceptions of the Use of Online Course Tools to Support Engagement
International Journal for the Scholarship of Teaching and Learning Volume 8 Number 1 Article 5 January 2014 Graduate Student Perceptions of the Use of Online Course Tools to Support Engagement Stephanie
Relating the ACT Indicator Understanding Complex Texts to College Course Grades
ACT Research & Policy Technical Brief 2016 Relating the ACT Indicator Understanding Complex Texts to College Course Grades Jeff Allen, PhD; Brad Bolender; Yu Fang, PhD; Dongmei Li, PhD; and Tony Thompson,
ANOTHER GENERATION OF GENERAL EDUCATION
ANOTHER GENERATION OF GENERAL EDUCATION Peter K. Bol Charles H. Carswell Professor of East Asian Languages and Civilizations I was asked to set forth some personal reflections rather than to summarize
10. Analysis of Longitudinal Studies Repeat-measures analysis
Research Methods II 99 10. Analysis of Longitudinal Studies Repeat-measures analysis This chapter builds on the concepts and methods described in Chapters 7 and 8 of Mother and Child Health: Research methods.
Descriptive Statistics
Descriptive Statistics Primer Descriptive statistics Central tendency Variation Relative position Relationships Calculating descriptive statistics Descriptive Statistics Purpose to describe or summarize
Council for Accreditation of Counseling and Related Educational Programs (CACREP) and The IDEA Student Ratings of Instruction System
Council for Accreditation of Counseling and Related Educational Programs (CACREP) and The IDEA Student Ratings of Instruction System The Council for Accreditation of Counseling and Related Educational
WVU STUDENT EVALUATION OF INSTRUCTION REPORT OF RESULTS INTERPRETIVE GUIDE
WVU STUDENT EVALUATION OF INSTRUCTION REPORT OF RESULTS INTERPRETIVE GUIDE The Interpretive Guide is intended to assist instructors in reading and understanding the Student Evaluation of Instruction Report
Systematic Reviews and Meta-analyses
Systematic Reviews and Meta-analyses Introduction A systematic review (also called an overview) attempts to summarize the scientific evidence related to treatment, causation, diagnosis, or prognosis of
Office of the Provost
Office of the Provost Substantive Academic Change to an Existing Degree Program Form Proposed Substantive Change to an Existing Degree: Academic Components 1. Please describe and provide a rationale for
Do Supplemental Online Recorded Lectures Help Students Learn Microeconomics?*
Do Supplemental Online Recorded Lectures Help Students Learn Microeconomics?* Jennjou Chen and Tsui-Fang Lin Abstract With the increasing popularity of information technology in higher education, it has
APEX program evaluation study
APEX program evaluation study Are online courses less rigorous than in the regular classroom? Chung Pham Senior Research Fellow Aaron Diel Research Analyst Department of Accountability Research and Evaluation,
Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets
Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets Nathaniel Hendren January, 2014 Abstract Both Akerlof (1970) and Rothschild and Stiglitz (1976) show that
A C T R esearcli R e p o rt S eries 2 0 0 5. Using ACT Assessment Scores to Set Benchmarks for College Readiness. IJeff Allen.
A C T R esearcli R e p o rt S eries 2 0 0 5 Using ACT Assessment Scores to Set Benchmarks for College Readiness IJeff Allen Jim Sconing ACT August 2005 For additional copies write: ACT Research Report
University of Minnesota 2011 13 Catalog. Degree Completion
University of Minnesota 2011 13 Catalog Degree Completion Bachelor of Arts Degree... 60 Degree Requirements... 60 Specific Provisions... 61 General Education Requirements... 61 Major or Area of Concentration...
How To Study The Academic Performance Of An Mba
Proceedings of the Annual Meeting of the American Statistical Association, August 5-9, 2001 WORK EXPERIENCE: DETERMINANT OF MBA ACADEMIC SUCCESS? Andrew Braunstein, Iona College Hagan School of Business,
Strategies for Identifying Students at Risk for USMLE Step 1 Failure
Vol. 42, No. 2 105 Medical Student Education Strategies for Identifying Students at Risk for USMLE Step 1 Failure Jira Coumarbatch, MD; Leah Robinson, EdS; Ronald Thomas, PhD; Patrick D. Bridge, PhD Background
