Paper ID #6825

Learning Performance Analysis of Engineering Graduate Students from Two Differently Ranked Universities Using Course Outcomes

Lin Li, University of Illinois at Chicago

Lin Li received the B.E. degree in mechanical engineering from Shanghai Jiao Tong University in 2001, and the M.S.E. degree in mechanical engineering, the M.S.E. degree in industrial and operations engineering, and the Ph.D. degree in mechanical engineering, all from the University of Michigan-Ann Arbor, in 2003, 2005, and 2007, respectively. Currently, he is an Assistant Professor in the Department of Mechanical and Industrial Engineering, University of Illinois at Chicago. His research interests are in sustainable manufacturing systems, reliability engineering, intelligent maintenance systems, and healthcare systems.

Yong Wang, University of Illinois at Chicago

Yong Wang received his B.S. degree (2003) and Ph.D. (2010) in Energy and Power Engineering from Huazhong University of Science and Technology (HUST), Wuhan, China. He was a visiting scholar in the Department of Mechanical Engineering, University of Michigan, Ann Arbor from 2007 to 2009. He is currently a Ph.D. candidate in Mechanical and Industrial Engineering at the University of Illinois at Chicago. His research areas include operations research, reliability estimation and optimization, fault diagnosis and prognosis, and their applications in sustainable manufacturing and renewable energy systems.

© American Society for Engineering Education, 2013
Learning Performance Analysis of Engineering Graduate Students from Two Differently Ranked Universities Using Course Outcomes

Abstract

One of the authors has experience teaching a graduate-level course at two universities, A and B. The College of Engineering of University A is ranked as a top-ten school in the U.S., while the College of Engineering of University B is ranked around 60. This experience has provided the author a unique opportunity to compare the learning performance of engineering graduate students at these two universities based on an analysis of course outcomes. The objective of this work is to identify whether the difference in learning performance between the graduate students of these two universities is as significant as the difference in the universities' ranks. Hypothesis testing has been used to compare the course outcomes. The results show no strong evidence supporting the hypothesis that the learning performances are distinguishable. The implications of the findings are discussed as well.

1. Introduction

This paper presents a recent effort by the authors to compare students' learning performance at two universities using course outcomes. One of the authors taught a graduate-level course at both University A (UA) (in 2010) and University B (UB) (in 2011). The course, Time Series Analysis, is a typical course in many engineering programs across American universities 1-6. It is intended to give students an opportunity to apply time series techniques to the modeling, analysis, and forecasting of real-world physical systems. Only graduate students were allowed to enroll in the course at the two universities. The Colleges of Engineering of these two universities have different ranks according to U.S. News 7: the College of Engineering of UA is ranked as a top-ten school in the U.S., while the College of Engineering of UB is ranked around 60.
Students' learning performance may be influenced by many factors, and these influences are reflected in the course outcomes. The following factors were controlled during both course offerings:

1) The courses were taught by the same instructor.
2) The same course syllabus and progress calendar (by weeks) were communicated to the classes at the beginning of the course and followed throughout the course periods.
3) The same textbooks 8,9, lecture notes, homework problems, and exam problems were used.
4) The requirements for the course project (freedom of topic selection, requirement for a written report, requirement for an oral presentation, etc.) were the same.
5) The same evaluation standards for the homework, exams, and project were adopted; therefore, grades were obtained by comparing a student's performance with specified absolute standards rather than with relative standards such as the work of other students in the class.

Since the two universities use different letter grade systems (UA: A+, A, A-, B+, B, B-, C+, C, C-, D+, D, D-, or F; UB: A, B, C, D, or F), students' GPAs may not be comparable. Therefore, only raw grades on a scale of 100 points are used in this research. Neither course had a minimum GPA requirement prior to enrollment.
The course final grade consists of five components: homework, the course project presentation, the course project report, the midterm exam, and the final exam. The instructor made every effort to ensure that the course grades accurately reflected each student's level of achievement. The end-of-course grades assigned to these five components, together with the total grades, are used as the course outcomes, and hypothesis tests have been conducted on the grades of the two groups of students.

Every world or national university ranking system bears some limitations 10-12. Existing literature 13 has studied the impact of professors' behaviors on the ranking of universities, but little research has been done to investigate the relationship between students' learning performance and university ranking. This paper attempts to provide some initial insights into this relationship. More specifically, the sole objective of this paper is to identify whether the learning performances of the graduate students from these two universities differ as significantly as the university rankings do.

The rest of this paper is organized as follows. Section 2 presents a brief description of the end-of-course grading. Hypothesis tests on the grades of all the course components, together with the total grades, are conducted in Section 3. A summary of the findings is provided together with a discussion of future work in Section 4.

2. Course Grading

The five course components involved both independent work (homework, the midterm exam, and the final exam) and team work (the project presentation and the project report). The grade breakdown for the five components was: homework 15%, project presentation 5%, project report 25%, midterm exam 25%, and final exam 30%. The numbers of students enrolled in this course at UA and UB were 54 and 23, respectively.
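The weighting just described can be illustrated with a short Python sketch. The component names, the helper function, and the sample scores below are hypothetical (each component is assumed to be scored on a raw 0-100 scale before weighting); this is not the authors' grading code.

```python
# Weighted course total, following the grade breakdown described above:
# homework 15%, project presentation 5%, project report 25%,
# midterm exam 25%, final exam 30%. Sample scores are hypothetical.

WEIGHTS = {
    "homework": 0.15,
    "presentation": 0.05,
    "report": 0.25,
    "midterm": 0.25,
    "final": 0.30,
}

def weighted_total(raw_scores):
    """Combine raw 0-100 component scores into a 0-100 course total."""
    return sum(WEIGHTS[name] * raw_scores[name] for name in WEIGHTS)

# Hypothetical student: raw 0-100 scores for each component.
student = {"homework": 98.6, "presentation": 92.0, "report": 92.0,
           "midterm": 99.0, "final": 99.0}
print(round(weighted_total(student), 2))  # 96.84
```

Because the weights sum to 1, the total stays on the same 0-100 scale as the raw component scores, which is what makes the totals in Tables 1 and 2 directly comparable across the two universities.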
For the team work, each team consisted of no more than three students. Each team was given full freedom to select its own project topic as long as the report and the presentation covered all the major knowledge points taught in class. As a result, the topics selected by the students reflected a wide range of applications, such as daily traffic data, electricity usage trends, CO2 emission trends, stock prices, and machining signals. The report and the presentation were graded on components such as the background introduction, a brief summary of the data collection process, the data analysis procedures, the discussion of results, and the conclusions. The weighted grades of the five components, together with the team assignments and total grades, are listed in Tables 1 and 2.

Table 1. Weighted Course Grades of UA Students

HW (15%)  Presentation (5%)  Report (25%)  Midterm (25%)  Final (30%)  Total (100%)  Team
14.79 4.60 23 24.75 29.7 96.84 8
14.81 4.60 23 24.25 29.7 96.36 8
14.79 4.37 23 24.5 27.9 94.56 10
14.79 4.74 24.5 23 27 94.03 3
14.87 4.30 22 25 27.6 93.77 15
15.00 4.19 23 23 27.3 92.49 11
14.46 4.37 23 23 27.6 92.43 10
14.78 4.37 23 23.5 26.1 91.75 10
14.68 4.18 22 22.5 27 90.36 2
14.98 4.19 23 18.25 29.7 90.12 11
14.79 4.00 15 24.75 29.1 87.64 19
14.76 4.21 21.5 22.75 24 87.22 5
12.77 4.18 22 22.25 25.5 86.70 2
14.04 4.43 20 20.75 25.5 84.72 4
14.42 4.74 24.5 14.25 26.4 84.31 3
14.74 4.54 23 20 21.6 83.88 17
14.89 4.07 15 23.25 25.5 82.71 12
14.94 4.13 21 20.5 21.9 82.47 18
14.72 4.21 23.5 17.5 22.5 82.43 5
14.76 4.60 23 20 19.5 81.86 8
12.49 4.14 22 21.25 21.6 81.48 9
14.36 4.14 22 15.75 24.3 80.55 9
14.34 4.10 14 21.25 26.4 80.09 20
14.27 4.30 22 20.5 18.9 79.97 15
14.79 4.00 12 22.5 25.5 78.79 19
14.12 3.97 18 20 23.4 79.49 21
14.89 4.21 21.5 22 16.5 79.10 5
14.64 4.03 15 22 22.2 77.87 13
14.36 4.43 20 15.5 23.1 77.39 4
14.51 3.71 13 23 23.1 77.32 1
14.27 4.19 13 23.25 22.5 77.21 7
14.51 3.71 13 21.5 24.3 77.02 1
14.94 4.07 15 23.5 19.2 76.71 12
14.91 4.13 21 17.5 18.6 76.14 18
14.49 4.42 20 14.5 22.5 75.91 14
14.48 4.42 20 20 16.2 75.10 14
12.24 3.97 16 21.25 21.6 75.06 21
14.40 4.43 20 17.5 18.3 74.63 4
14.64 4.00 13 24.25 18.6 74.49 19
14.34 4.32 16 13 26.4 74.06 6
14.61 3.97 18 19 18 73.58 21
14.64 4.32 16 15.5 22.8 73.26 6
14.98 4.07 15 20 18.6 72.65 12
14.63 3.60 14 21.75 17.1 71.08 16
14.18 4.18 22 16 13.8 70.16 2
13.93 4.30 20 12.5 18.9 69.63 15
10.31 4.10 14 20 20.4 68.81 20
12.30 4.10 16 18.75 17.1 68.25 20
13.86 4.54 23 13.5 13.2 68.10 17
12.88 4.19 13 19 18.3 67.37 7
12.51 4.03 13 21.5 15.3 66.34 13
14.89 4.13 21 15 10.2 65.22 18
14.51 4.03 15 16.75 11.1 61.39 13
14.27 3.60 14 13 14.7 59.57 16

Table 2. Weighted Course Grades of UB Students

HW (15%)  Presentation (5%)  Report (25%)  Midterm (25%)  Final (30%)  Total (100%)  Team
14.74 4.81 25 23.5 27.9 95.95 2
14.79 4.81 25 22.75 27.9 95.25 2
14.91 4.42 21 23.75 28.5 92.58 5
14.93 4.42 20 24.75 27.9 92.00 3
14.91 4.80 22 24.75 24.9 91.36 8
14.78 4.80 22 21.5 26.7 89.78 8
14.55 4.42 20 23 27 88.97 3
14.79 4.42 21 21.75 25.8 87.76 5
13.89 4.80 22 16 26.1 82.79 8
14.70 4.75 20 16.5 26.1 82.05 7
14.63 4.60 20 19.25 23.1 81.58 1
14.21 4.18 17 19 26.4 80.79 4
14.74 4.75 20 15.75 24 79.24 7
14.87 4.60 20 18 21.6 79.07 1
14.89 4.29 16 20 22.8 77.98 9
13.91 4.45 21 13.75 24 77.11 6
14.61 4.60 20 13.75 22.5 75.46 1
13.86 4.45 21 11 25.2 75.51 6
13.46 4.29 16 15.5 23.1 72.35 9
13.41 4.45 21 14 18.6 71.46 6
14.63 4.29 16 15 21 70.92 9
13.86 4.18 17 9.5 18.9 63.44 4
8.85 4.18 17 8.5 18 56.53 4

The information presented in the tables is not easy to interpret at a glance. Further analyses of the course components and total grades are conducted in the next section.

3. Grade Analysis

Hypothesis tests have been conducted on the grades listed in Tables 1 and 2 to make inferences about the differences in students' learning performance on the course components. For each of the five course components and the total grades, the following procedure was used to reach the final decision on the tests:

1) Draw boxplots to show the general shape of the grade distributions for the students from the two universities.
2) Draw normal probability plots to show whether the grades come from a normal distribution. If a sample is normal, the plot will be approximately linear; other distribution types will introduce strong nonlinearity in the plots.
3) Draw the quantile-quantile (Q-Q) plot to show whether the two grade samples come from the same unknown distribution. If the two samples do come from the same unknown distribution, the plot will be linear.
4) Use the Shapiro-Wilk test to see if the two grade samples are both from the normal distribution family. If both samples are from the normal distribution family, the F-test and the t-test are further used to see if the variances or means are the same. For each test, the p value is generated.
For the significance level α = 5%, p > α leads to a "Yes" as the test result, and p ≤ α leads to a "No" as the test result.
5) Use the Kolmogorov-Smirnov test to see if the two grade samples are both from the same unknown distribution. The p value is also generated. For the significance level α = 5%, p > α leads to a "Yes" as the test result, and p ≤ α leads to a "No" as the test result.

All the plots and testing results are arranged in Figures 1 through 6 and Tables 3 through 8. Note that the symbol "--" in the tables means the hypothesis test is not needed for that case.
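The decision rule in steps 4) and 5), together with the two-sample Kolmogorov-Smirnov statistic it relies on, can be sketched in plain Python. This is an illustrative reimplementation, not the authors' code; in practice a statistics package (e.g. scipy.stats.ks_2samp) would be used, and the p value below uses a standard asymptotic series approximation. The sample grade lists are hypothetical, not rows from Tables 1 and 2.

```python
import math

def ks_2samp(x, y):
    """Two-sample Kolmogorov-Smirnov test (illustrative sketch).

    Returns (D, p): D is the maximum distance between the two empirical
    CDFs; p is an asymptotic p value (standard series approximation),
    reasonable for sample sizes like the 54 and 23 students here.
    """
    x, y = sorted(x), sorted(y)
    n1, n2 = len(x), len(y)
    i = j = 0
    d = 0.0
    while i < n1 and j < n2:
        v = min(x[i], y[j])
        # Advance past ties so both empirical CDFs jump together.
        while i < n1 and x[i] == v:
            i += 1
        while j < n2 and y[j] == v:
            j += 1
        d = max(d, abs(i / n1 - j / n2))
    en = math.sqrt(n1 * n2 / (n1 + n2))
    lam = (en + 0.12 + 0.11 / en) * d
    if lam < 1e-3:  # tiny distance: samples effectively indistinguishable
        return d, 1.0
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, min(max(p, 0.0), 1.0)

def decide(p, alpha=0.05):
    """Decision rule used above: "Yes" if p > alpha, "No" if p <= alpha."""
    return "Yes" if p > alpha else "No"

# Hypothetical grade samples (not the data in Tables 1 and 2):
ua = [96.8, 94.6, 92.5, 84.3, 79.1, 75.9, 68.8, 61.4]
ub = [95.9, 92.0, 88.9, 81.6, 77.1, 72.4, 63.4, 56.5]
d, p = ks_2samp(ua, ub)
print(decide(p))
```

A "Yes" from this rule means the test found no significant evidence against the null hypothesis at the 5% level, mirroring how the entries in Tables 3 through 8 are read.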
Figure 1. The Boxplot (a), Normal Probability Plots (b) & (c), and Q-Q Plot (d) of the Homework Grades

Figure 2. The Boxplot (a), Normal Probability Plots (b) & (c), and Q-Q Plot (d) of the Project Presentation Grades

Table 3. Hypothesis Testing on the Homework Grades

                 Shapiro-Wilk (p)  F-test (p)   t-test (p)   K-S test (p)
UA HW            No (0.000)        --           --           Yes (0.791)
UB HW            No (0.000)
Normal-distribution conclusion: inconclusive
K-S conclusion: UA HW and UB HW are from the same unknown distribution

Table 4. Hypothesis Testing on the Project Presentation Grades

                 Shapiro-Wilk (p)  F-test (p)   t-test (p)   K-S test (p)
UA Presentation  Yes (0.598)       Yes (0.626)  No (0.004)   No (0.031)
UB Presentation  Yes (0.827)
Normal-distribution conclusion: UA Presentation and UB Presentation are not from the same normal distribution
K-S conclusion: UA Presentation and UB Presentation are not from the same unknown distribution
Figure 3. The Boxplot (a), Normal Probability Plots (b) & (c), and Q-Q Plot (d) of the Project Report Grades

Figure 4. The Boxplot (a), Normal Probability Plots (b) & (c), and Q-Q Plot (d) of the Midterm Exam Grades

Table 5. Hypothesis Testing on the Project Report Grades

                 Shapiro-Wilk (p)  F-test (p)   t-test (p)   K-S test (p)
UA Report        Yes (0.051)       Yes (0.247)  Yes (0.351)  Yes (0.604)
UB Report        Yes (0.911)
Normal-distribution conclusion: UA Report and UB Report are from the same normal distribution
K-S conclusion: UA Report and UB Report are from the same unknown distribution

Table 6. Hypothesis Testing on the Midterm Exam Grades

                 Shapiro-Wilk (p)  F-test (p)   t-test (p)   K-S test (p)
UA Midterm       No (0.009)        --           --           Yes (0.146)
UB Midterm       Yes (0.588)
Normal-distribution conclusion: UA Midterm and UB Midterm are not from the same normal distribution
K-S conclusion: UA Midterm and UB Midterm are from the same unknown distribution
Figure 5. The Boxplot (a), Normal Probability Plots (b) & (c), and Q-Q Plot (d) of the Final Exam Grades

Figure 6. The Boxplot (a), Normal Probability Plots (b) & (c), and Q-Q Plot (d) of the Total Grades

Table 7. Hypothesis Testing on the Final Exam Grades

                 Shapiro-Wilk (p)  F-test (p)   t-test (p)   K-S test (p)
UA Final         Yes (0.337)       No (0.021)   --           Yes (0.142)
UB Final         Yes (0.285)
Normal-distribution conclusion: UA Final and UB Final are not from the same normal distribution
K-S conclusion: UA Final and UB Final are from the same unknown distribution

Table 8. Hypothesis Testing on the Total Grades

                 Shapiro-Wilk (p)  F-test (p)   t-test (p)   K-S test (p)
UA Total         Yes (0.980)       Yes (0.527)  Yes (0.497)  Yes (0.749)
UB Total         Yes (0.969)
Normal-distribution conclusion: UA Total and UB Total are from the same normal distribution
K-S conclusion: UA Total and UB Total are from the same unknown distribution

In summary, only 4 of the 11 valid conclusions support the hypothesis that the learning performances of the students from the two universities, as represented by the corresponding course component grades, are different; a total of 7 conclusions support the hypothesis that the learning performances are comparable. More importantly, both the normal-distribution tests and the unknown-distribution test provide evidence that the overall learning performances of the students from the two universities, represented by the UA and UB total grades, are indistinguishable.
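The tally in the summary can be checked mechanically. The sketch below transcribes the two conclusions per component from Tables 3 through 8 (None marks the inconclusive homework normality case) and counts them; the dictionary layout is our own, not the authors'.

```python
# Conclusions transcribed from Tables 3-8: for each course component,
# (normal-distribution conclusion, K-S conclusion).
# "different" = performances distinguishable, "same" = comparable,
# None = inconclusive (the homework normality case).
conclusions = {
    "homework":     (None,        "same"),
    "presentation": ("different", "different"),
    "report":       ("same",      "same"),
    "midterm":      ("different", "same"),
    "final":        ("different", "same"),
    "total":        ("same",      "same"),
}

valid = [c for pair in conclusions.values() for c in pair if c is not None]
print(len(valid), valid.count("different"), valid.count("same"))
# 11 valid conclusions: 4 support "different", 7 support "comparable"
```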
4. Summary and Future Work

This paper compared the learning performances of engineering graduate students from two differently ranked universities based on an analysis of course outcomes. The method involves an in-depth investigation of students' grades on the different course components. The authors conclude that there is no strong evidence rejecting the hypothesis that the learning performances are comparable.

Many reasons may explain the discrepancy between this result and the expectation that the performances should differ as much as the university ranks do. One reason is that this work measures students' learning performance, while the U.S. News ranking criteria measure factors such as university reputation, admission acceptance rates, and starting salaries after graduation 10,14-15.

This paper provides a new insight into the relationship between students' learning performance and university rankings. As a result, one may gain a better understanding of the capabilities and limitations of the ranking system. The authors plan to continue this research and track the results over the next few years, to determine whether future results can overturn the conclusion drawn in this paper. If more evidence emerges supporting the hypothesis of a significant difference in learning performances, the authors will try to identify the strengths and weaknesses of the students at the two universities, and strategies could be developed for the course components to address those weaknesses.

References

1. MIT OpenCourseWare. 2008. Time Series Analysis. Retrieved February 1, 2013, from http://ocw.mit.edu/courses/economics/14-384-time-series-analysis-fall-2008/index.htm
2. Michigan Engineering. 2013. Industrial and Operations Engineering Courses. Retrieved February 1, 2013, from http://www.engin.umich.edu/bulletin/ioe/courses.html
3. Pennsylvania State University. 2012. Applied Time Series Analysis. Retrieved February 1, 2013, from https://onlinecourses.science.psu.edu/stat510/
4. The University of Arizona. 2013. Applied Time Series Analysis. Retrieved February 1, 2013, from http://www.ltrr.arizona.edu/~dmeko/geos585a.html
5. S. S. Rao. 2008. A Course in Time Series Analysis. Retrieved February 1, 2013, from http://www.stat.tamu.edu/~suhasini/teaching673/time_series.pdf
6. P. Bartlett. 2010. Introduction to Time Series Analysis. Retrieved February 1, 2013, from http://www.stat.berkeley.edu/~bartlett/courses/153-fall2010/lectures/1.pdf
7. U.S. News. 2013. 2013 Best Engineering Schools. Retrieved February 1, 2013, from http://gradschools.usnews.rankingsandreviews.com/best-graduate-schools/top-engineering-schools/
8. S. M. Pandit, S. M. Wu. 1990. Time Series and System Analysis with Application. Hoboken, NJ: John Wiley & Sons.
9. D. C. Montgomery, C. L. Jennings, M. Kulahci. 2008. Introduction to Time Series Analysis and Forecasting. Hoboken, NJ: John Wiley & Sons.
10. K. Carey. 2006. College Rankings Reformed: The Case for a New Order in Higher Education. Retrieved February 1, 2013, from http://www.educationsector.org/usr_doc/collegerankingsreformed.pdf
11. F. L. Bookstein, H. Seidler, M. Fieder, G. Winckler. 2010. Too much noise in the Times Higher Education rankings. Scientometrics, 85(1):295-299.
12. C. Tofallis. 2012. A different approach to university rankings. Higher Education, 63(1):1-18.
13. B. Sohrabi, A. Gholipour, S. F. Kuzekanan. 2011. Impact of professors' behavioral variables on performance and ranking of universities. New Educational Review, 25(3):42-52.
14. U.S. News. 2013. How U.S. News Calculated the 2013 Graduate School Rankings. Retrieved February 1, 2013, from http://www.usnews.com/education/best-graduate-schools/articles/2012/03/12/how-us-news-calculated-the-2013-graduate-school-rankings
15. U.S. News. 2013. How to Use Our Rankings of Graduate Schools Wisely. Retrieved February 1, 2013, from http://www.usnews.com/education/best-graduate-schools/articles/2012/03/12/how-to-use-our-rankings-of-graduate-schools-wisely