Quality Indicators for Online Programs at Community Colleges

Leo Hirner
Director, Distance Education Services
Metropolitan Community College Kansas City

Introduction

The phenomenal growth in online education over the past 10 years has not been without growing pains. Online education essentially began when the Internet and World Wide Web became generally accessible to the public in 1993, and in roughly a decade online enrollment in higher education had grown to 2.35 million students as of 2004 (Allen & Seaman, 2005). Online enrollment represents the greatest segment of growth for community colleges, which continue to be pressed with demands for more online courses and programs (Jaschik, 2006). This continued, rapid growth in online programs brings with it a number of concerns regarding support services, learning resources, and the effectiveness of instruction; these issues raise necessary questions about the quality of online programs and how institutions monitor it.

Concerns about the effectiveness of instruction, student learning, and how participants perceive online learning led Phipps and Merisotis (1999) to raise two issues with the research on online programs. First, they identified problems with the methodology of various studies investigating online programs; second, they identified the need for some comparative process by which programs and institutions could be evaluated, whether by academics or prospective students. Unfortunately, the concerns first identified by Phipps and Merisotis persist (Hannafin, Oliver, Hill, Glazer, & Sharma, 2003; Sherlock & Pike, 2004).

An attempt to address these concerns, both the need for an instrument that could be used to compare institutions and the need to collect perceptions from participants, led to the present study, which had two research goals. The first was to identify a set of quality indicators for online programs at community colleges; given the particular importance of online programs to the growth of community colleges, this seemed a logical place to start. The second was to collect input from stakeholders (students, faculty, support staff, and program administrators) on how important they perceived the indicators identified in response to the first goal.

Research Method

As stated, the two goals of this study were to identify a set of quality indicators specific to community college online programs and to collect stakeholders' perceived importance of each quality indicator. To achieve these goals, a two-stage research project was planned and conducted. The process began with a comprehensive literature review that identified the common standards and best practices for online courses and programs. Although each accrediting organization or policy group organizes these standards according to its own needs, a consistent set of five categories (institutional support, curriculum and instruction, faculty support, student support, and evaluation and assessment) had been identified in the literature (Meyer, 2002).
A sixth category focusing on technical services was added due to persistent concerns with system reliability expressed by all users (Dirr, 2003; Hengehold & Schultz, 2000; Klingenstein, 1998) and the decline in technology funding that followed the dot-com downturn of 2001 (Phipps & Wellman, 2001; Wilson, Varnhagen, Krupa, Kasprzak, Hunting, & Taylor, 2003). The various standards and the corresponding six categories were then used to guide the development of a Delphi study to collect a set of potential indicators.

Delphi Study

Phase I utilized a Delphi study to compile a set of indicators. A group of 20 distance education program administrators from both community colleges and 4-year institutions (ranging from baccalaureate to Research I institutions) agreed to participate in the study. The panelists were sent an open-ended survey to start the Delphi process, and 15 submitted responses; these 15 sets of submissions yielded more than 300 potential indicators, which were synthesized into a set of 129 items. A total of three consensus surveys were then required to refine the 129 items into a final set of 77 potential indicators of quality, with each round retaining only the items on which the panel agreed (a sketch of one way such a filtering round might be computed follows).
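The paper does not report the specific consensus criterion used in its three rounds, but a typical Delphi filtering round retains an item only when panelists rate it highly and agree tightly. The following minimal sketch assumes a hypothetical 1-5 importance scale, a median threshold of 4, and an interquartile-range cutoff of 1; all thresholds and rating data are invented for illustration.

```python
from statistics import median, quantiles

def delphi_round(ratings_by_item, median_cutoff=4.0, iqr_cutoff=1.0):
    """Return the items a hypothetical Delphi consensus round would retain.

    ratings_by_item maps each candidate indicator to the list of
    importance ratings (1-5) submitted by the panelists for that round.
    An item survives when the panel rates it highly (median >= cutoff)
    and agrees tightly (interquartile range <= cutoff).
    """
    retained = {}
    for item, ratings in ratings_by_item.items():
        q1, _, q3 = quantiles(ratings, n=4)  # quartiles of the panel's ratings
        if median(ratings) >= median_cutoff and (q3 - q1) <= iqr_cutoff:
            retained[item] = ratings
    return retained

# Hypothetical ratings from five panelists for three candidate items.
ratings = {
    "24/7 help desk for online students": [5, 4, 4, 5, 4],          # high, tight: retained
    "Campus parking for online students": [2, 5, 1, 4, 3],          # scattered: dropped
    "Faculty mentors for new online instructors": [3, 3, 2, 3, 3],  # low: dropped
}
print(sorted(delphi_round(ratings)))
```

The third example echoes an actual outcome of the study: the faculty-mentor item appeared in the initial pool but was eliminated through the consensus process.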
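Magnitude estimation ratings are ratio judgments, so they are conventionally aggregated with geometric rather than arithmetic means. The paper does not describe its analysis procedure; the sketch below simply shows the textbook approach, with the anchor value of 40 taken from the survey design and all rating data invented for illustration.

```python
from math import exp, log

ANCHOR_VALUE = 40  # fixed value assigned to each category's anchor statement

def geometric_mean(values):
    """Geometric mean, the conventional average for magnitude estimates."""
    return exp(sum(log(v) for v in values) / len(values))

def relative_importance(ratings_by_item):
    """Express each item's pooled rating as a multiple of the anchor.

    A result of 1.0 means stakeholders judged the item exactly as
    important as the anchor statement; 1.5 means half again as important.
    """
    return {item: geometric_mean(ratings) / ANCHOR_VALUE
            for item, ratings in ratings_by_item.items()}

# Hypothetical ratings from four stakeholders for two indicators.
ratings = {
    "Timely instructor feedback": [60, 45, 80, 50],
    "Online enrollment and fee payment": [40, 35, 50, 42],
}
for item, score in relative_importance(ratings).items():
    print(f"{item}: {score:.2f}x the anchor")
```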
Factors versus Indicators

As the results from the stakeholder survey were analyzed, it became apparent that the potential indicators were actually a mix of factors, indicators, and other parameters, so a third research step was added in an effort to refine the results. A group of distance learning experts, identified through their scholarly research and professional activity, were asked to review the set of 77 items and classify each as a factor, an indicator, or other, according to the following definitions: indicators are signs of success showing that the community college has a quality program and can be identified as outputs directly related to the quality of the program; factors are inputs consciously made by the institution that contribute to the quality of the online program, whether through resources, policies, or requirements. To limit the demands on their time, each expert received either 38 or 39 of the 77 items.

Delphi Study Results

As noted, the Delphi study identified a total of 129 possible items, and this set was refined to 77 potential quality indicators through the Delphi consensus process. Eight of the potential indicators, listed in Table 1, had not been identified in the literature review.

Table 1
New Potential Indicators Identified through the Delphi Study

1. The college's marketing plan includes promotion of online courses and programs.
2. Marketing of online programs emphasizes the skills needed for student success and clearly articulates that the academic expectations and time commitment in online courses are consistent with traditional classroom instruction.
3. Articulation agreements are pursued with area four-year colleges to create seamless transfer opportunities for students in online programs.
4. The tuition and fees of online courses and programs are comparable to those on campus.
5. Faculty advancement criteria recognize online instruction and reward faculty for innovation and risk-taking.
6. The college recognizes work that instructors have done advancing their own degrees (or other professional development activities) through online programs.
7. Student academic honor and service programs accept online and traditional coursework.
8. Periodic program evaluations are used for program improvement, to aid in institutional decision-making, to provide program outcomes for funding agencies, to ensure stakeholders' access to technology, and to assess the range of services provided, course offerings, and barriers and challenges to online instruction.

The new potential indicators may have resulted from the specific goals and missions of the community college, or from growing acceptance of online courses and programs, as many of the standards and best practices were first identified between 1999 and 2001. One standard from the literature did not survive the Delphi process: the use of faculty mentors appeared among the 129 items submitted to the Delphi panel but was eliminated through the consensus process.

Stakeholder Survey Results

The results of the stakeholder survey did not add anything of significance to the study; however, they did raise some interesting new questions for future investigation. The perception results did not eliminate any of the potential indicators, and the open-ended question did not identify any new ones. The relatively equal importance placed on the 77 potential indicators did serve to validate the results of the Delphi study.

In general, perspectives had not changed much from those identified in the earlier review of the literature. Students in online courses still expressed concern about the timeliness and amount of communication, the need to provide feedback to faculty, and the need for additional testing services. Students also indicated a need for more online services, ranging from enrollment and fee payment to library services and online tutoring. As with the newly identified indicators, some perceptions identified in the stakeholder survey may result from the community college mission and the population served. There was a particular alignment between student and program administrator perceptions of importance, which differed from earlier studies that identified alignment between faculty and administration. This may be explained by the shallower administrative structure of the community college, which places administrators in closer proximity to students, or by the overall mission of the community college.
Other perceptions appeared to be linked to the expanded use of online programs over time, such as a need for greater institutional acceptance of online coursework and professional development, representing new perspectives that have developed with the growth of online education. Analysis of the demographic data suggested that some perceptions could be linked to gender, age, or ethnicity; however, given the small sample and the statistics involved, these results are likely to include Type I errors.

Factors versus Indicators Results

The 10 respondents to the factors-versus-indicators study identified only eight of the 77 items as indicators, listed in Table 2. Of the remaining items, 62 were classified as factors, 2 as other, and the remaining 5 did not achieve consensus (a sketch of one way such a consensus tally might be computed appears below). It is interesting to note that the final indicators come almost exclusively from the category of evaluation and assessment.

Table 2
The Eight Quality Indicators

1. The department or program reviews new online courses to ensure quality of subject matter and verify that program outcomes are met.
2. Regular evaluations of distance learning courseware, instructional philosophy, pedagogical methods, and faculty use of the technology take place.
3. Students and faculty regularly complete satisfaction surveys about the online courses and programs.
4. The college utilizes assessment methods recommended by accrediting bodies for distance courses (e.g., North Central Association, Higher Education Commission, Distance Education Standards).
5. Student learning outcomes in online courses are assessed and compared with student outcomes achieved through other delivery methods.
6. Periodic program evaluations are used for program improvement, to aid in institutional decision-making, to provide program outcomes for funding agencies, to ensure stakeholders' access to technology, and to assess the range of services provided, course offerings, and barriers and challenges to online instruction.
7. The college requires periodic review of courses delivered online.
8. Student persistence and attrition in online classes are monitored in comparison to institutional trends.

The factors identified through this study point to inputs that an institution might make in support of a quality online program. The eight indicators identify data that an institution would collect when measuring the effectiveness of its online programs and services. Together, the factors and indicators represent a pair of tools that one might use when examining how an institution supports its programs or when comparing programs across institutions. What they do not address is how an institution uses the data it collects on its programs.

The full set of 77 potential indicators, the instruments used in this study, and the related literature may be found on the MCC Distance Education website at http://distance.mcckc.edu/hirner/quality.htm.
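The paper does not state the consensus rule used to classify the 77 items, and each item was reviewed by only a subset of the 10 experts. The sketch below is therefore a hypothetical reconstruction: it assumes a simple majority rule over whichever experts reviewed an item, and all votes shown are invented for illustration.

```python
from collections import Counter

def classify_items(votes_by_item, majority=0.5):
    """Assign each item the label chosen by a strict majority of its reviewers.

    votes_by_item maps an item to the labels ('factor', 'indicator',
    'other') assigned by the experts who reviewed it. Items where no
    label wins more than the majority share are reported as no consensus.
    """
    result = {}
    for item, votes in votes_by_item.items():
        label, count = Counter(votes).most_common(1)[0]
        result[item] = label if count / len(votes) > majority else "no consensus"
    return result

# Hypothetical votes from five reviewers for three of the 77 items.
votes = {
    "Periodic program evaluations are used for improvement":
        ["indicator", "indicator", "indicator", "factor", "indicator"],
    "The college provides round-the-clock technical support":
        ["factor", "factor", "factor", "factor", "other"],
    "Online tuition is comparable to on-campus tuition":
        ["factor", "indicator", "other", "factor", "indicator"],
}
for item, label in classify_items(votes).items():
    print(f"{label:>12}: {item}")
```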
References

Allen, I. E., & Seaman, J. (2005). Growing by degrees: Online education in the United States, 2005. The Sloan Consortium. Retrieved from http://www.sloanc.org/publications/survey/pdf/growing_by_degrees.pdf

Dirr, P. J. (2003). Distance education policy issues: Towards 2010. In M. G. Moore & W. G. Anderson (Eds.), Handbook of distance education. Mahwah, NJ: Lawrence Erlbaum.

Hengehold, L., & Schultz, C. (2000). Crossing the James River: Infrastructure for distance education. Proceedings of EDUCAUSE 2000, Nashville, TN, October 2000. Retrieved from http://www.vccs.va.us/its/resources/

Jaschik, S. (2006). Surge in distance ed at community colleges. Inside Higher Ed. Retrieved from http://insidehighered.com/news/2007/04/16/aacc

Klingenstein, K. J. (1998). The technical realities of virtual learning: An overview for the nontechnologists. Cause/Effect, 21(1), 24-27, 32-34, 47.

Meyer, K. A. (2002). Quality in distance education: Focus on on-line learning. ASHE-ERIC Higher Education Report, 29(4). San Francisco: Jossey-Bass.

Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: Institute for Higher Education Policy. Retrieved from http://www.ihep.com

Phipps, R. A., & Wellman, J. V. (2001). Funding the "infostructure": A guide to financing technology infrastructure in higher education. New Agenda Series, 3(2).

Wilson, D., Varnhagen, S., Krupa, E., Kasprzak, S., Hunting, V., & Taylor, A. (2003, Fall). Instructors' adaptation to online graduate education in health promotion: A qualitative study. Journal of Distance Education, 18(2), 1-15.

Biographical Sketch

Leo Hirner currently serves as the Director of Distance Education Services at the Metropolitan Community College Kansas City (MCC-KC). Prior to taking this role in 1999, he served for nine years as a physics faculty member at MCC-KC's Longview campus. He is currently completing his Ph.D. research on quality in online programs at community colleges at the University of Missouri-Columbia.

Address: Metropolitan Community College Kansas City
3200 Broadway
Kansas City, MO 64111
E-mail: leo.hirner@mcckc.edu
URL: http://distance.mcckc.edu/hirner/index.htm
Phone: 816-759-4490
Fax: 816-759-4673