Educational Measurement: Issues and Practice
Fall 2011, Vol. 30, No. 3, pp. 2–9

Outcomes Assessment in Higher Education: Challenges and Future Research in the Context of the Voluntary System of Accountability

Ou Lydia Liu, Educational Testing Service

ABSTRACT: Higher education institutions are becoming increasingly accountable to stakeholders at many levels as the federal government, accrediting organizations, and the public place a stronger emphasis on student learning outcomes. Evidence of student learning that is comparable across institutions is in urgent demand. Accountability initiatives such as the Voluntary System of Accountability (VSA) mark a milestone in the evaluation of the instructional effectiveness of higher education using standardized outcomes assessment. Although the current accountability initiatives hold great potential for evaluating institutional effectiveness, several design and methodological issues must be addressed before outcomes assessment can be put to its best use. In this paper, I (a) review the need for outcomes assessment in higher education, (b) discuss the VSA initiative, (c) summarize the most prominent challenges facing outcomes assessment for accountability purposes, and (d) discuss future directions of research in advancing accountability assessment in higher education.

Key words: accountability, higher education, outcomes assessment, voluntary system of accountability

A higher-quality education has become a springboard to opportunity and success in the 21st century. Between 1995 and 2005, student enrollment at degree-granting postsecondary institutions in the United States increased by 23%, from 14.3 million to 17.5 million (U.S. Department of Education, National Center for Education Statistics [NCES], 2008). College expenses accelerated along with enrollment: the annual cost of attending a 4-year public institution more than doubled over the past three decades, from $7,181 in 1978 to $16,140 in 2010 (College Board, 2010). As vast investments and resources are poured into public higher education, accountability in higher education has received unprecedented attention from the public. Students, parents, and public policy makers seek to understand how public colleges and universities operate and whether they have done a satisfactory job of preparing students for the challenges of the 21st century.

In 2005, the Commission on the Future of Higher Education was established by the Department of Education with the aim of maintaining the leadership of higher education in the United States. The Commission identified several key areas of higher education in urgent need of reform: expanding access by bridging the gap between high school requirements and college requirements, improving the affordability of a college education, enhancing the quality of college instruction, and improving accountability by providing evidence of learning. The Commission's first report, released in September 2006 (U.S. Department of Education, 2006), comments on the nation's "remarkable absence of accountability mechanisms to ensure that colleges succeed in educating students" (p. x) and points out that accountability is vital to ensuring the success of reforms in the other three key areas.

Author note: Ou Lydia Liu, Associate Research Scientist, Foundational and Validity Research Center, Educational Testing Service, 666 Rosedale Road, MS 16-R, Princeton, NJ 08541; lliu@ets.org.
Although accountability encompasses multiple dimensions, such as student experiences and graduation rate, evidence of student learning is at its core. The report also calls for solid evidence of how much students learn in college and emphasizes that such evidence should be comparable across institutions. Concurrently, the American Association of State Colleges and Universities (AASCU) and the Association of Public and Land-Grant Universities (APLU) proposed a program called the Voluntary System of Accountability (VSA; voluntarysystem.org) to give higher education institutions the opportunity to inform the public of the learning outcomes of their students. The overriding purpose of the VSA is to evaluate core educational outcomes in higher education and to improve public understanding of the functions and operations of public colleges and universities. An important component of the VSA program is the College Portrait of Undergraduate Education, a web-based system that allows institutions to present standardized information about their students and learning outcomes. In the VSA College Portrait system, critical thinking and written communication have been selected as the core learning outcomes. Since its inception, the VSA program has received widespread attention from university administrators, researchers, and public policy makers. As of April 2011, 320 institutions had joined the VSA initiative, representing 64% of the AASCU and APLU member institutions (voluntarysystem.org).

In fact, the VSA has gained participation from institutions in all states except Hawaii and the District of Columbia (DC). Despite the great potential of the VSA program, its implementation has been hampered by the lack of sufficient research evidence on how assessments designed to measure learning outcomes should be used and how their scores should be interpreted. For example, it remains unknown whether students make a serious effort on the assessment, as the test results have no direct consequences for individual students. It is also unclear how an institution should test students to demonstrate learning gains between the freshman and senior years: should the same group of students be tracked for 4 years, or is it adequate to test different groups of students in their first and senior years? Because the VSA program is so recent, very few research studies have addressed these critical issues, and many institutional researchers have expressed concerns about the effectiveness of the VSA because of existing methodological caveats. However well intentioned, without solutions to these methodological issues the VSA effort could amount to "trying to clothe the emperor" (Banta, 2008). Solid research evidence that can provide immediate answers to these questions is urgently needed. In the following sections, I (a) review the need for outcomes assessment in higher education, (b) discuss the VSA initiative, (c) summarize the most prominent challenges facing outcomes assessment for accountability purposes, and (d) discuss future directions of research in advancing accountability assessment in higher education.

The Need for Outcomes Assessment

In higher education, outcomes assessment is a widely used term referring to assessments, usually standardized tests, designed to measure student learning outcomes in college. The term also appears in the names of some organizations (e.g., the National Institute for Learning Outcomes Assessment). There are two main categories of outcomes assessment: one focused on discipline-specific outcomes (e.g., physics, English) and the other on general college-level skills (e.g., critical thinking). The ETS Major Field Tests are an example of the former; the ETS Proficiency Profile is an example of the latter. For the purpose of this study, only general outcomes assessment is discussed.

Many American colleges and universities use standardized outcomes assessment to evaluate student learning. Over the past 5 years, at least 1,387 institutions (41% 4-year public universities and 25% 2-year public colleges) have used at least one form of standardized outcomes assessment (Educational Testing Service, 2010). A recent survey of 2,809 regionally accredited, undergraduate-degree-granting, 2- and 4-year public and private institutions revealed that 39% of institutions use one of the standardized measures of learning outcomes available on the market (National Institute for Learning Outcomes Assessment, 2009). Institutions use outcomes assessment data for a variety of purposes: fulfilling accreditation requirements, responding to accountability calls, determining student readiness for college, changing admission policies, evaluating programs, informing strategic planning, modifying general education curricula, and public reporting.
The need for outcomes assessment in higher education comes from a number of sources. First, most accreditation organizations require institutions to provide some form of statistical report of student learning. Currently, there are about 80 national and regional accrediting organizations in higher education in the United States, which are routinely scrutinized by the Council for Higher Education Accreditation (CHEA; 2006), by the Department of Education, or by both. The accrediting organizations are responsible for accrediting either higher education institutions or specific academic programs and schools. The fact that an accrediting organization is recognized suggests that the standards and processes it uses to accredit institutions or programs are consistent with the academic quality and accountability expectations established by CHEA. In the United States, there are six regional accrediting organizations (CHEA, 2010): the Middle States Association of Colleges and Schools, the North Central Association of Colleges and Schools, the New England Association of Schools and Colleges, the Northwest Commission on Colleges and Universities, the Southern Association of Colleges and Schools, and the Western Association of Schools and Colleges. These six organizations are responsible for accrediting colleges and universities located in their respective regions. Other main categories of accrediting organizations include national faith-related accrediting organizations (e.g., the Association for Biblical Higher Education), national career-related accrediting organizations (e.g., the Distance Education and Training Council), and programmatic accrediting organizations (e.g., the Accreditation Council for Business Schools and Programs). The large number of accrediting organizations reflects the diversity of programs (e.g., business, law, medicine, distance education) that need accreditation in the United States. In the accreditation process, both federal and state-level agencies have placed increasingly strong emphasis on accountability, requiring public institutions to demonstrate improved student achievement. This emphasis has created a demand for quality outcomes assessment that can produce objective results about student achievement in college.

Second, various accountability calls have demanded that institutions provide evidence of student learning. These accountability initiatives are generally developed by higher education organizations and call for the participation of institutions on a voluntary basis. The VSA (voluntarysystem.org) initiative, sponsored by the APLU and the AASCU, targets the learning outcomes of students at public higher education institutions. The Transparency by Design initiative, launched by a division of the Western Interstate Commission for Higher Education (WICHE) called the WICHE Cooperative for Educational Technologies (WCET), aims to provide quality assurance on the learning outcomes of distance education institutions serving adult learners. Similarly, the Voluntary Framework of Accountability (VFA), sponsored by the American Association of Community Colleges, the Association of Community College Trustees, and the College Board, aims to measure outcomes and processes at the community college level. The rapid development of accountability initiatives that emphasize student learning outcomes propels higher education institutions to demonstrate evidence of student academic achievement.

Third, parents and students seek information on learning outcomes to inform their school selection decisions. They want solid evidence of how much students learn in college and whether students learn more at some colleges than at others. Results on standardized outcomes assessments provide such information and have the potential to be comparable across institutions.

Last, there is continuing public interest in understanding how well institutions prepare students. The evaluation of institutional effectiveness has traditionally focused on what institutions have in terms of financial inputs and resources, but there has recently been a shift from evaluating inputs to evaluating outcomes. With the ever-increasing cost of attending college, the taxpaying public demands stronger accountability from public higher education institutions.

The Voluntary System of Accountability

Because the VSA is the most developed of the three aforementioned accountability initiatives (i.e., VSA, Transparency by Design, and VFA), the discussion of outcomes assessment in this study is situated in the context of the VSA. Since its inception, the VSA has marked an important milestone in the national effort to standardize the measurement of student learning outcomes. As described earlier, the VSA was developed by two leading organizations in higher education, the AASCU and the APLU. Through its College Portrait online program, the VSA provides an opportunity for institutions to report their students' learning outcomes as standardized test scores in writing and critical thinking (VSA, 2008). These two skills were selected because they are deemed necessary for citizens to survive and thrive in a global economy: in interviews with 413 human resource managers from some of the largest international companies, critical thinking was rated one of the most important qualities valued by employers in a modern economy (The Conference Board, Inc. et al., 2006). Besides learning outcomes, the College Portrait provides other important institutional information, such as admission standards, size and setting, student experiences, and graduation rate.

To obtain student scores in writing and critical thinking, the VSA selected three standardized tests from 16 candidate tests as its outcomes assessments: the ETS Proficiency Profile (formerly known as the Measure of Academic Proficiency and Progress [MAPP]; ETS, 2007; Marr, 1995), the Collegiate Assessment of Academic Proficiency (CAAP; ACT, 2009), and the Collegiate Learning Assessment (CLA; Klein, Benjamin, Shavelson, & Bolus, 2007). These instruments were selected because they are the most widely used tests in the field and were believed to adequately measure improvement in the core skill areas emphasized by the VSA. The psychometric properties of the tests have been evaluated in many previous studies (e.g., Hendel, 1991; Klein, Liu, Sconing et al., 2009; Klein, Benjamin, Shavelson, & Bolus, 2007; Klein, Freedman, Shavelson, & Bolus, 2008; Liu, 2008, 2009b, 2011; Marr, 1995). The Proficiency Profile is a multiple-choice test with 108 items measuring reading, critical thinking, writing, and math; it takes about 2 hours to complete. The CAAP is also primarily a multiple-choice test, measuring reading, critical thinking, writing, science, and math, with the exception of an essay writing component. The full set of CAAP measures takes about 4 hours to complete, and the tests are modularized.
The CLA is a constructed-response test that contains three task types: Performance Task, Make-an-Argument, and Critique-an-Argument, with the first two focusing on critical thinking skills and the last on writing skills. The CLA tasks take about 2 hours and 45 minutes to complete. See Klein, Liu, Sconing et al. (2009) for a more detailed description of the three tests. Participating VSA institutions have the flexibility to choose among the three tests to report their institutional learning outcomes. Although in theory institutions can choose more than one test, most choose only one.

Value-Added Methodology Used in the VSA

An important goal of the VSA is to measure growth in student academic achievement in college, and the VSA adopted a value-added model to indicate adjusted student learning gain. In the VSA, the value-added of a particular institution is defined as the performance difference on a standardized outcomes test (e.g., the ETS Proficiency Profile) between freshmen and seniors at that institution, after controlling for the admission test scores (i.e., SAT or ACT scores) of both groups. In current VSA practice, the freshmen and seniors are not the same group of students; the design is therefore cross-sectional. In contrast, in a longitudinal design the same group of students is tested twice, once in the freshman year and again in the senior year. The longitudinal design is preferred, but the VSA currently uses a cross-sectional design because of its practical advantages: it is easier and less costly to test two groups of students at the same time than to track the same group over 4 years. Given that the VSA is a relatively recent development, no studies have investigated whether institutional value-added rankings from a cross-sectional design adequately approximate those from a longitudinal design.

The method for calculating value-added scores within the VSA is based on the method used with the CLA test (CAE, 2007). This method employs ordinary least squares (OLS) regression models with institution-level mean Proficiency Profile, CAAP, or CLA scores as the dependent variable and mean admission scores (i.e., SAT or ACT scores) as the independent variable (VSA, 2008). The regression model is fit separately for freshmen and for seniors, yielding a residual for each group, and the difference between the senior and freshman residuals determines an institution's final value-added. Essentially, the value-added score is a difference in differences. Because this method uses the institution as the unit of analysis, it is suitable only for testing programs with data available from a considerable number of institutions. Currently, the three testing programs selected by the VSA (Proficiency Profile, CLA, and CAAP) all use this method to report institutional value-added in the score reports provided to their users (i.e., institutions). A minimal numerical sketch of this calculation appears below.
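The following sketch works through the residual-difference logic on simulated institution-level data. It is an illustration of the method as described above, not VSA code; the sample sizes, score scales, and variable names are assumptions.

```python
# A minimal sketch of the VSA-style value-added calculation on simulated data;
# all numbers, scales, and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_inst = 50

# Institution-level means: admission scores (e.g., SAT) and outcomes-test scores.
sat_fr = rng.normal(1050, 80, n_inst)                       # freshman cohort mean SAT
sat_sr = sat_fr + rng.normal(20, 10, n_inst)                # senior cohort slightly more selective
test_fr = 440 + 0.05 * sat_fr + rng.normal(0, 5, n_inst)    # freshman mean test score
test_sr = 455 + 0.05 * sat_sr + rng.normal(0, 5, n_inst)    # senior mean test score

def residuals(x, y):
    """Residuals from an OLS regression of mean test score on mean SAT."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (intercept + slope * x)

# Value-added = senior residual minus freshman residual: a difference in
# differences, computed with the institution as the unit of analysis.
value_added = residuals(sat_sr, test_sr) - residuals(sat_fr, test_fr)
print(np.round(value_added, 2))
```

Because the regression is fit separately for each cohort, an institution's value-added reflects how far its seniors score above expectation relative to how far its freshmen do.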

Current Challenges in Outcomes Assessment in Higher Education

Although the VSA holds great potential for providing direct assessment of student learning and making results comparable across institutions, several challenges remain in its design and value-added methodology, and these affect the validity and interpretation of the value-added scores. In this section, I summarize the most prominent challenges in the implementation and interpretation of outcomes assessment.

Challenge One: Insufficient Evidence of What Outcomes Assessment Predicts

Besides showing student progress on core educational outcomes, it is critical for outcomes measures to demonstrate predictive validity with regard to other important success indicators, such as cumulative GPA, degree completion, graduate school application, and job placement. Collectively, more than a thousand institutions have used the Proficiency Profile, CAAP, or CLA to test their students, but few studies have examined the relationship between scores on these standardized tests and other success indicators of a college education. Among the few such studies, Hendel (1991) examined the relationship between student performance on the Academic Profile (see Note 1; the Proficiency Profile's predecessor) and college GPA and reported a statistically significant correlation of .38 between the two. Marr (1995) also reported that students with higher GPAs tended to obtain higher Proficiency Profile scores. Before the inception of the CLA test, Klein et al. (2005) investigated the relationship between prototype CLA test batteries and student GPA. The test batteries included combinations of GRE-type writing prompts, a critical thinking test (Ewell, 1994), and performance tasks (Klein, 1996), and the reported correlations between the test batteries and cumulative GPA ranged from .51 to .64 (Klein et al., 2005). Beyond these few studies, very little is known about how student scores on outcomes assessments relate to other academic achievements. In addition, various stakeholders (e.g., institutional administrators, business leaders) need to understand whether scores on outcomes tests can predict student success beyond undergraduate education. For example, they are interested in knowing whether students who perform well on outcomes assessments are more likely to apply to graduate programs or to succeed in the labor market. Such information is needed for stakeholders to make good use of the results of outcomes assessment.

Challenge Two: Methodological Issues with the Current Value-Added Method

As described earlier, the current method used to determine value-added scores in the VSA employs OLS regression models, with institution-level mean standardized test score as the dependent variable and mean admission test score as the independent variable. There are a few potential problems with this method. First, all student-level information is ignored, because the calculation uses institution-level means (Liu, 2009a, b, 2011). Klein, Freedman, Shavelson, and Bolus (2008) also raised concerns about this method and proposed using the student as the unit of analysis in further research. Results could be more reliable using student-level information, given that the number of students far exceeds the number of institutions in the equation. Second, the current method includes only admission scores as a control for previous academic ability. This decision was made for a number of reasons: (a) mean admission score is considered an important institutional characteristic; (b) with the institution as the unit of analysis, student-level characteristics such as gender cannot be included in the model in a straightforward way; and (c) value-added research is relatively new in higher education, and the model has room for improvement.
As many researchers argue, besides admission scores many other factors may influence student learning in college, such as gender, ethnicity, language status, and socioeconomic status (Liu, 2008, 2009a, b; Borden & Young, 2008). These important variables should also be considered in determining student progress. Finally, the current method uses OLS regression models to analyze student performance on outcomes tests. Given the hierarchical structure of the data, with students nested within institutions, multi-level modeling (Rabe-Hesketh & Skrondal, 2008; Raudenbush & Bryk, 2002) may be more appropriate than OLS models. One assumption of OLS models is that all observations are independent (Stone, 1995). In the value-added calculation between freshmen and seniors, student experience and learning are likely affected by the unique characteristics of the institution students attend, so test scores of students attending the same institution cannot be considered independent of one another. Multi-level modeling relaxes this constraint by partitioning the variance in student performance into within-institution and between-institution components (Rabe-Hesketh & Skrondal, 2008; Raudenbush & Bryk, 2002; Singer, 1998). Although relatively new to higher education, multi-level models have been widely applied to value-added research in K-12 education (Gorard, 2008; Lee, 2000; Lockwood, McCaffrey, & Hamilton, 2007; Palardy, 2008; Yeh & Ritter, 2009). In K-12, multi-level models have been used to estimate the effect of a school or a teacher on student academic achievement: the outcome variable is generally student grades in a subject (e.g., math), and the models factor in both student-level characteristics (e.g., gender, entering grades) and school- or teacher-level variables (e.g., percentage of students receiving free lunch). Despite the different contexts, value-added research in K-12 resembles such research in higher education in that both attempt to estimate how much of the change in student test scores a school or an institution is accountable for, whether on a subject-matter test or on a general outcomes assessment. A multi-level alternative to the current OLS approach is sketched below.
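As a hedged illustration of that alternative, the sketch below fits a student-level random-intercept model using statsmodels on simulated data. The variable names (score, sat, inst), the simulated effect sizes, and the use of estimated random intercepts as adjusted institution effects are assumptions for illustration; the VSA does not prescribe this model.

```python
# A sketch of a student-level random-intercept model as an alternative to the
# institution-level OLS approach (simulated data; names and effect sizes are
# illustrative assumptions, not the VSA's specification).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_inst, n_per = 40, 100
inst = np.repeat(np.arange(n_inst), n_per)          # institution id per student
inst_effect = rng.normal(0, 3, n_inst)[inst]        # shared institution effect
sat = rng.normal(1050, 100, n_inst * n_per)         # student admission scores
score = 450 + 0.05 * sat + inst_effect + rng.normal(0, 10, sat.size)
df = pd.DataFrame({"score": score, "sat": sat, "inst": inst})

# Students are nested within institutions, so their scores are not independent;
# the random intercept absorbs the shared within-institution component.
result = smf.mixedlm("score ~ sat", df, groups=df["inst"]).fit()
print(result.summary())

# The estimated random intercepts can serve as SAT-adjusted institution
# effects, analogous to the OLS residuals but estimated from student-level data.
inst_effects = {g: re.iloc[0] for g, re in result.random_effects.items()}
```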
Challenge Three: Comparability of the Three Tests

Because institutions have the flexibility to choose any one of the three tests (i.e., ETS Proficiency Profile, CAAP, CLA) to fulfill the VSA reporting requirement, more research evidence is needed to support the comparability of scores on the three tests. As noted by the VSA (2008), because the range and nature of the scores vary across the three tests, the results do not allow direct comparisons between measures. Other major differences among the tests may also preclude direct comparisons. The three tests differ in item format: the Proficiency Profile and the CAAP are multiple-choice tests, while the CLA is a constructed-response test. The difference in item format leads to differences in scoring: whereas scoring the Proficiency Profile and CAAP items is straightforward, using dichotomous scores, scoring the CLA items involves subjective human judgment and machine evaluation of essays. The tests also differ in delivery format: the CAAP is offered via paper and pencil, the CLA is available online, and the Proficiency Profile offers both venues. Given these major differences among the three selected tests, there was an urgent need to collect research evidence on the comparability and appropriate use of the three measures.

In Fall 2007, the U.S. Department of Education awarded a grant to the AASCU, AAC&U, and APLU to investigate the relationships among the three tests. The resulting study is referred to as the Test Validity Study (TVS). With more than 1,100 students from 13 institutions taking all three tests in a counterbalanced design, the TVS examined the construct validity of the three tests, the impact of item format (multiple-choice vs. constructed-response) on test performance, and school-level reliability. To investigate construct validity, the TVS compared the correlations between measures designed to assess the same construct with the correlations between measures designed to assess different constructs. According to the jointly released report (Klein, Liu, Sconing et al., 2009), the correlations generally support the construct validity of the tests: measures of the same construct are highly correlated (mean correlation .88, standard deviation .05), and correlations between measures of the same construct exceed correlations between measures of different constructs. The impact of item format on test performance was not significant. For example, the mean correlation among the constructed-response measures (e.g., CLA Performance Task, CAAP Writing Essay) was .84, very close to the mean correlation of .85 between multiple-choice and constructed-response measures. All but two measures (the CAAP Writing Essay for freshmen [r = .68] and the CLA Performance Task for seniors [r = .64]) showed reasonable school-level reliability, with reliabilities ranging from .81 to .95. In general, multiple-choice measures showed higher reliability than constructed-response measures, as expected. Despite the apparent differences in item format and testing time, results from the three tests proved comparable in the TVS. The sketch below illustrates the logic of the same-construct versus cross-construct comparison.
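The following sketch mimics that comparison on simulated school-level scores: correlations between measures of the same construct should exceed correlations between measures of different constructs. The measure names and data are invented for illustration and are not TVS data.

```python
# A minimal sketch of the construct-validity check used in the TVS, on
# simulated school-level scores (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(2)
n_schools = 13
ct = rng.normal(0, 1, n_schools)                     # latent critical thinking
wr = 0.6 * ct + rng.normal(0, 0.8, n_schools)        # latent writing, related to ct

measures = {
    "pp_ct":   ct + rng.normal(0, 0.3, n_schools),   # Proficiency Profile critical thinking
    "caap_ct": ct + rng.normal(0, 0.3, n_schools),   # CAAP critical thinking
    "pp_wr":   wr + rng.normal(0, 0.3, n_schools),   # Proficiency Profile writing
    "caap_wr": wr + rng.normal(0, 0.3, n_schools),   # CAAP writing
}
constructs = {"pp_ct": "ct", "caap_ct": "ct", "pp_wr": "wr", "caap_wr": "wr"}
names = list(measures)

same, diff = [], []
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = np.corrcoef(measures[a], measures[b])[0, 1]
        (same if constructs[a] == constructs[b] else diff).append(r)

# Convergent validity: same-construct correlations should exceed
# different-construct correlations, as reported in the TVS.
print(f"mean same-construct r  = {np.mean(same):.2f}")
print(f"mean cross-construct r = {np.mean(diff):.2f}")
```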
Challenge Four: No Evidence of the Comparability of Results Between the Preferred Longitudinal Design and the Current Cross-Sectional Design

The VSA currently adopts a cross-sectional design for the value-added calculation: institutions test a group of freshmen and a group of seniors for the institutional value-added report, and the freshmen and seniors are not the same students. The results of the cross-sectional analysis are therefore confounded to an unknown degree by differences between the two groups. Although adjustments are made for cohort differences in SAT score levels, the groups could still differ on other important but unmeasured variables, such as gender, ethnicity, or socioeconomic status. As the VSA overview (VSA, 2008) points out, the cross-sectional design was selected because it is "quicker, simpler, and less costly to implement" (p. 4). However, no research evidence shows that results from a cross-sectional design are as valid or reliable as results from a longitudinal design, and such evidence is needed before a cross-sectional design is used at large scale to determine institutional value-added. In addition, the cross-sectional design assumes that the freshman and senior classes have comparable student characteristics. However, the senior class is generally more selective than the first-year class for reasons such as attrition: seniors have been found to have a higher mean SAT score than freshmen, and the difference can be as large as .57 standard deviations (Liu, 2008). Although SAT scores are controlled for in the cross-sectional design, freshmen and seniors could differ significantly in other factors, such as motivation and persistence to degree. According to ACT (2008), as of 2008 only 43.8% of freshmen who enroll in a 4-year public university obtain a degree within 5 years of entry. This relatively small completion percentage suggests that the senior class is often more selective than the freshman class. Given the lack of comparability between freshmen and seniors at an institution, a longitudinal study is needed to track the same group of students from enrollment to graduation. That would allow a comparison of value-added results obtained from the longitudinal and cross-sectional designs, and if the longitudinal design proves superior, the VSA method for selecting students for the freshman-senior comparison should be revisited.

Challenge Five: Unclear Evidence of Student Motivation in Taking Low-Stakes Tests

Researchers are rightfully concerned about levels of student motivation when test results carry no personal consequences for the individual student (Banta, 2008; Borden & Young, 2008; Liu, 2008, 2009, 2011). The validity of the test scores is questionable if the results fail to accurately reflect student abilities (Wainer, 1993). Lack of student motivation in test taking also leads to inaccurate estimates of student achievement and thus affects decisions about institutional effectiveness. Motivated students can outperform unmotivated students by at least half a standard deviation (Wise & DeMars, 2005), so institutions with more motivated students may appear to have produced more value-added learning in the VSA evaluation. Currently, institutions use a wide range of incentives to recruit students to take the outcomes tests, including course credit, bookstore coupons, copy cards, cash, and in some cases a free smoothie. There is no guarantee that students who take the tests are equally motivated to make their best effort. Researchers have proposed various solutions for examining student motivation on low-stakes tests (Wise & DeMars, 2005; Wise & Kong, 2005; Wise, Wise, & Bhola, 2006). For example, for students taking a computer-based test, we can examine response time effort and judge whether a student made a serious attempt from the amount of time spent answering each question; a minimal sketch of this idea appears below. An alternative way of monitoring student motivation is to administer a motivation scale after students have taken the low-stakes test (Sundre & Wise, 2003). Researchers at James Madison University developed such a scale, the Student Opinion Survey (SOS; Sundre, 1997, 1999; Sundre & Wise, 2003), which has proven effective in identifying students with low motivation (Wise & DeMars, 2005; Wise, Wise, & Bhola, 2006). Students reporting low effort can be flagged and their performance examined.
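Here is a minimal sketch of response-time-based flagging in the spirit of Wise and Kong's (2005) response time effort index. The 5-second threshold, the 0.9 flagging cutoff, and the simulated response times are illustrative assumptions, not the published procedure's exact rules.

```python
# A sketch of motivation filtering from item response times (simulated data;
# threshold and cutoff values are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(3)
n_students, n_items = 200, 40

# Simulated per-item response times in seconds; a minority of students
# rapid-guess (very short times) on every item.
times = rng.lognormal(mean=3.5, sigma=0.4, size=(n_students, n_items))
guessers = rng.random(n_students) < 0.1
times[guessers] = rng.uniform(1, 4, size=(guessers.sum(), n_items))

THRESHOLD = 5.0  # seconds; faster responses are treated as non-effortful

# Response time effort: proportion of items with solution-like (slow) behavior.
rte = (times > THRESHOLD).mean(axis=1)

# Flag students whose effort index falls below a cutoff for follow-up, as the
# text suggests, before their scores feed institutional value-added estimates.
flagged = np.where(rte < 0.9)[0]
print(f"{flagged.size} of {n_students} examinees flagged for low effort")
```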
Challenge Six: Lack of Evidence on the Implications of Outcomes Assessment for Different Types of Institutions

It is very important to consider institution type in institutional research. According to the 2005 Carnegie Classifications, there are three main types of 4-year public higher education institutions: (1) research universities, which often offer doctoral degrees and have a strong focus on research (the University of California, Berkeley, and the University of Michigan are examples); (2) master's colleges and universities, which are usually less focused on research than research universities and offer both bachelor's and master's degrees (SUNY College at Buffalo and California State University, Long Beach are examples); and (3) baccalaureate colleges, which usually offer only bachelor's degrees, whether in arts and sciences only or in more diverse fields (St. Mary's College of Maryland and Western State College of Colorado are examples). In current VSA practice, institutions of all kinds are grouped together for value-added reporting. However, the three types of public institutions differ significantly in admission requirements, selectivity, enrollment size, and student-to-faculty ratio, and these characteristics, along with the resources accessible to institutions, could be taken into consideration in the evaluation of instructional effectiveness. To that end, it makes more sense to compare institutions with similar student profiles, sizes, and settings. Research evidence is needed on the possibly differential implications of outcomes assessment for different types of institutions. For example, if students from a selective research institution are compared with peers from a less selective baccalaureate institution, the students at the research institution could show relatively limited value-added growth from freshman to senior year because of a ceiling effect. In addition, we should pay attention to universities serving special populations, such as historically black colleges and universities. Morgan State University, for example, is one of the leading national institutions in preparing African American students for bachelor's and more advanced degrees. It would be important to understand how students at institutions such as Morgan State progress from freshman to senior year and whether their value-added performance differs from that of students at other institutions.

Directions of Future Research

The challenges summarized above are also opportunities for future research in the area of outcomes assessment in higher education. While solutions to all of the issues raised above are needed, I identify three questions that require the most urgent attention: (1) What are the differences between the results of a longitudinal design and those of a cross-sectional design? (2) What statistical models should be used to determine institutional value-added? (3) How does student motivation affect the results of outcomes assessment? These three questions are of foremost importance because they have a direct impact on the validity of the test scores and the interpretation of the results.

To address the first question, we need two cohorts of students from the same institutions, a longitudinal cohort and a cross-sectional cohort. Institutional value-added results calculated from the longitudinal data should then be compared with those obtained from the cross-sectional data; for example, we can examine the correlation between the two sets of value-added scores. Evidence is also needed on the extent to which an individual institution's value-added ranking can be affected by switching designs.
The comparison will provide information on how well a cross-sectional design can approximate a longitudinal design.

To address the second question, more research is needed both on advancing the statistical methods used to compute institutional value-added and on identifying the student-level and institution-level variables that should be included in the statistical models. Given the known advantages of multi-level models, several researchers have derived institutional value-added scores with multi-level models and compared the results across methods. For example, Liu (2011) reanalyzed Proficiency Profile data using a two-level model, with the student as the unit of analysis in the first-level equation and the institution as the unit in the second-level equation; the same data set had previously been analyzed using the OLS method currently recommended by the VSA (Liu, 2008). Besides using admission scores as a predictor, the reanalysis included important institutional variables in the second-level model, such as selectivity and type of institution (e.g., undergraduate university or university with graduate programs). The findings suggest that although the correlations between the value-added scores from the multi-level and OLS models are high (.76 for critical thinking and .84 for writing), the change in value-added ranking can affect individual institutions significantly. The difference in ranking can be as large as five of ten decile levels, which means that, depending on the statistical method used, an institution can be ranked as above or below average in the cohort comparison in terms of producing value-added for its students. This instability in value-added results may be fully understood by measurement experts, but it is likely to be overlooked by the general public, who turn out to be the main consumers of value-added information. A change in ranking may influence policy makers' funding decisions, parents' school selection decisions, and the general reputation of an institution. There is no straightforward answer to the question of which model best estimates institutional value-added. Multi-level models have theoretical advantages over OLS models, but even within the family of multi-level models there are many possible specifications with different assumptions. The solution lies both in the soundness of the statistical model and in the meaningfulness of the variables selected for the evaluation of institutional value-added. The sketch below shows how such ranking comparisons can be quantified.
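The following sketch quantifies this kind of ranking instability: it correlates two sets of value-added scores (standing in for OLS and multi-level output) and counts how many institutions shift decile ranks. The simulated scores and the assumed r = .8 correlation are illustrative, not results from actual models.

```python
# A sketch of comparing value-added rankings across two methods
# (simulated scores; the .8 correlation is an illustrative assumption).
import numpy as np

rng = np.random.default_rng(4)
n_inst = 100
va_ols = rng.normal(0, 1, n_inst)                                  # e.g., OLS value-added
va_mlm = 0.8 * va_ols + np.sqrt(1 - 0.8**2) * rng.normal(0, 1, n_inst)  # e.g., multi-level

def deciles(x):
    """Assign each institution a decile rank (0 = bottom 10%, 9 = top 10%)."""
    ranks = np.argsort(np.argsort(x))
    return np.floor(10 * ranks / len(x)).astype(int)

shift = np.abs(deciles(va_ols) - deciles(va_mlm))
print(f"correlation between methods: {np.corrcoef(va_ols, va_mlm)[0, 1]:.2f}")
print(f"institutions moving 2+ deciles: {(shift >= 2).sum()}")
print(f"largest decile shift: {shift.max()}")
```

Even with a high correlation between methods, a nontrivial number of institutions cross the above-average/below-average boundary, which is the pattern Liu (2011) reports.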
Finally, to address the third question, the first step is to investigate the degree to which students are motivated to take a low-stakes outcomes assessment. We can either administer a self-report motivation scale to obtain information directly from students or, for online tests where item response times are available, analyze the time students spend on each item to gauge their motivation. We can then correlate student motivation with test scores to determine whether lower motivation is associated with lower scores. It is also important to identify a few practical strategies that institutions can easily adopt to enhance student motivation. We can experiment with different motivation conditions to see whether variation in those conditions is associated with differential performance on outcomes assessment. Students may be motivated by monetary incentives or by loyalty to their institution. There are a number of ways to motivate students with monetary incentives. For example, students can be told that those who score above a certain percentile will receive an additional payment. Or we can randomly select an item from the outcomes assessment and offer an additional amount of money to those who answer it correctly.

To appeal to institutional loyalty, we can remind students of the importance of the results to their institution (e.g., for its reputation). Comparing student performance under these various motivational conditions will provide information about which motivation strategies are effective. One potential caveat is that institutions that are aware of effective incentives and willing (or able) to provide them may gain an advantage in the test results: students at institutions with more effective incentives may exert more effort in test taking than students elsewhere. In reality, however, institutions are already using a variety of incentives to attract students to outcomes assessment, which calls for research that at least investigates which strategies are effective and makes that information available to all institutions. Since most institutions are likely to continue using outcomes assessment in a low-stakes way, practical methods for motivating students are valuable.

Going forward, we also need to think about how faculty members at institutions can be engaged in the conversation. Outcomes assessment serves education in a twofold manner: it provides information about student learning through college education, and it points to directions for future improvement of college education. Currently, most of the attention and discussion has focused on the former; not enough thought has been given to how results from outcomes assessment can be used by faculty members to improve instruction. Part of the challenge is that the knowledge and skills targeted by outcomes assessment are general skills not taught in any specific class. Students do not acquire critical thinking skills in a single course, for example; they develop them through lectures, quizzes, presentations, and hands-on projects across many courses. It is therefore difficult for a faculty member to draw a direct connection between results on an outcomes assessment and his or her own instruction. This challenge, however, should not prevent us from exploring the utility of outcomes assessment. It is crucial to communicate findings from outcomes assessment to institutional administrators, deans and chairs, and faculty members so that they develop a coherent understanding of the roles of outcomes assessment. Being aware of the strengths and weaknesses of their students is the first step for institutional stakeholders toward improving academic curriculum and instruction.

Notes

1. Since there is little difference between the Academic Profile and the Proficiency Profile, the Academic Profile is referred to as the ETS Proficiency Profile in the remainder of the paper.

References

ACT (2008). Retention/completion summary tables. Iowa City, IA: ACT.
ACT (2009). CAAP guide to successful general education outcomes assessment. Iowa City, IA: ACT.
Banta, T. (2008). Trying to clothe the emperor. Assessment Update, 20(2), 3–4.
Borden, V. M. H., & Young, J. W. (2008). Measurement validity and accountability for student learning. In V. M. H. Borden & G. R. Pike (Eds.), Assessing and accounting for student learning: Finding a constructive path forward. San Francisco: Jossey-Bass.
College Board (2010). Trends in college pricing. New York: Author.
Council for Aid to Education (2007). CLA institutional report. New York: Author.
Council for Higher Education Accreditation (2006). Accrediting organizations in the United States: How do they operate to assure quality? Washington, DC: Author.
Council for Higher Education Accreditation (2010). Directory of CHEA-recognized organizations. Washington, DC: Author.
Educational Testing Service (2007). MAPP user's guide. Princeton, NJ: Author.
Educational Testing Service (2010). Market research of institutions that use outcomes assessment. Princeton, NJ: Author.
Ewell, P. T. (1994). A policy guide for assessment: Making good use of the tasks in critical thinking. Princeton, NJ: ETS.
Gorard, S. (2008). The value-added of primary schools: What is it really measuring? Educational Review, 60(2).
Hendel, D. D. (1991). Evidence of convergent and discriminant validity in three measures of college outcomes. Educational and Psychological Measurement, 51(2).
Klein, S. (1996). The costs and benefits of performance testing on the bar examination. The Bar Examiner, 65(3).
Klein, S., Benjamin, R., Shavelson, R., & Bolus, R. (2007). The Collegiate Learning Assessment: Facts and fantasies. Evaluation Review, 31(5).
Klein, S., Freedman, D., Shavelson, R., & Bolus, R. (2008). Assessing school effectiveness. Evaluation Review, 32(6).
Klein, S., Kuh, G., Chun, M., Hamilton, L., & Shavelson, R. (2005). An approach to measuring cognitive outcomes across higher-education institutions. Research in Higher Education, 46(3).
Klein, S., Liu, O. L., Sconing, J., Bolus, R., Bridgeman, B., Kugelmass, H., Nemeth, A., Robbins, S., & Steedle, J. (2009, September 29). Test Validity Study (TVS) report. Supported by the Fund for the Improvement of Postsecondary Education (FIPSE).
Lee, V. E. (2000). Using hierarchical linear modeling to study social contexts: The case of school effects. Educational Psychologist, 35(2).
Liu, O. L. (2008). Measuring learning outcomes in higher education using the Measure of Academic Proficiency and Progress (MAPP) (ETS Research Report No. RR ). Princeton, NJ: ETS.
Liu, O. L. (2009a). R&D connections: Measuring learning outcomes in higher education (Report No. RDC-10). Princeton, NJ: ETS.
Liu, O. L. (2009b). Measuring value-added in higher education: Conditions and caveats. Results from using the Measure of Academic Proficiency and Progress (MAPP). Assessment and Evaluation in Higher Education, 34(6).
Liu, O. L. (2011). Value-added assessment in higher education: A comparison of two methods. Higher Education, 61(4).
Lockwood, J. R., McCaffrey, D. F., & Hamilton, L. S. (2007). The sensitivity of value-added teacher effect estimates to different mathematics achievement measures. Journal of Educational Measurement, 44(1).
Marr, D. (1995). Validity of the Academic Profile. Princeton, NJ: Educational Testing Service.
National Institute for Learning Outcomes Assessment (NILOA) (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: NILOA.
Palardy, G. J. (2008). Differential school effects among low, middle, and high social class composition schools: A multilevel, multiple group latent growth curve analysis. School Effectiveness and School Improvement, 19.
Rabe-Hesketh, S., & Skrondal, A. (2008). Multilevel and longitudinal modeling using Stata (2nd ed.). College Station, TX: Stata Press.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage.
Singer, J. (1998). Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models. Journal of Educational and Behavioral Statistics, 24(4).

Stone, C. (1995). A course in probability and statistics. Belmont, CA: Duxbury Press.
Sundre, D. L. (1997, April). Differential examinee motivation and validity: A dangerous combination. Paper presented at the meeting of the American Educational Research Association, Chicago, IL.
Sundre, D. L. (1999, April). Does examinee motivation moderate the relationship between test consequences and test performance? Paper presented at the meeting of the American Educational Research Association, Montreal, Canada (ERIC Document Reproduction Service No. ED432588).
Sundre, D. L., & Wise, S. L. (2003, April). Motivation filtering: An exploration of the impact of low examinee motivation on the psychometric quality of tests. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.
The Conference Board, Inc., the Partnership for 21st Century Skills, Corporate Voices for Working Families, and the Society for Human Resource Management (2006). Are they really ready to work? New York: The Conference Board (accessed May 5, 2009).
U.S. Department of Education, National Center for Education Statistics (2006). Digest of Education Statistics, 2005 (NCES ).
U.S. Department of Education, National Center for Education Statistics (2008). Digest of Education Statistics, 2007 (NCES ).
Voluntary System of Accountability (2008). Background on learning outcomes measures (index.cfm?page=about_cp, accessed May 18, 2009).
Wainer, H. (1993). Measurement problems. Journal of Educational Measurement, 30.
Wise, S. L., & DeMars, C. D. (2005). Low examinee effort in low-stakes assessment: Problems and potential solutions. Educational Assessment, 10(1).
Wise, S. L., & Kong, X. (2005). Response time effort: A new measure of examinee motivation in computer-based tests. Applied Measurement in Education, 18(2).
Wise, V. L., Wise, S. L., & Bhola, D. S. (2006). The generalizability of motivation filtering in improving test score validity. Educational Assessment, 11(1).
Yeh, S. S., & Ritter, J. (2009). The cost-effectiveness of replacing the bottom quartile of novice teachers through value-added teacher assessment. Journal of Education Finance, 34(4).


More information

Coastal Carolina University Catalog 2004/2005 ADMISSIONS

Coastal Carolina University Catalog 2004/2005 ADMISSIONS ADMISSIONS 25 ADMISSION INFORMATION The Office of Admissions is committed to marketing the University and attracting students who seek to attend a comprehensive liberal arts institution. As a team, we

More information

Admissions. Office of Admissions. Admission. When to Apply. How to Apply. Undergraduate Admission Directly from High School

Admissions. Office of Admissions. Admission. When to Apply. How to Apply. Undergraduate Admission Directly from High School Iowa State University 2015-2016 1 Admissions Office of Admissions Director Katharine Johnson Suski Admission When to Apply Applicants for the fall semester are encouraged to apply during the fall of the

More information

Compliance with Federal Regulations

Compliance with Federal Regulations chapter 11 Compliance with Federal Regulations The University of Wyoming is dedicated to fulfilling the requirements of the Higher Education Opportunity Act of 2008. To that end, we have established systems

More information

Evaluating the Impact of Charter Schools on Student Achievement: A Longitudinal Look at the Great Lakes States. Appendix B.

Evaluating the Impact of Charter Schools on Student Achievement: A Longitudinal Look at the Great Lakes States. Appendix B. Evaluating the Impact of Charter Schools on Student Achievement: A Longitudinal Look at the Great Lakes States Appendix B June 27 EPRU EDUCATION POLICY RESEARCH UNIT Education Policy Research Unit Division

More information

Encouraging Faculty Participation in Student Recruitment

Encouraging Faculty Participation in Student Recruitment Academic Affairs Forum Encouraging Faculty Participation in Student Recruitment Custom Research Brief eab.com Academic Affairs Forum Noorjahan Rahman Research Associate 202-266-6461 NRahman@eab.com Kevin

More information

INSTRUCTION AND ACADEMIC SUPPORT EXPENDITURES: AN INVESTMENT IN RETENTION AND GRADUATION

INSTRUCTION AND ACADEMIC SUPPORT EXPENDITURES: AN INVESTMENT IN RETENTION AND GRADUATION J. COLLEGE STUDENT RETENTION, Vol. 5(2) 135-145, 2003-2004 INSTRUCTION AND ACADEMIC SUPPORT EXPENDITURES: AN INVESTMENT IN RETENTION AND GRADUATION ANN M. GANSEMER-TOPF JOHN H. SCHUH Iowa State University,

More information

A National Study of School Effectiveness for Language Minority Students Long-Term Academic Achievement

A National Study of School Effectiveness for Language Minority Students Long-Term Academic Achievement A National Study of School Effectiveness for Language Minority Students Long-Term Academic Achievement Principal Investigators: Wayne P. Thomas, George Mason University Virginia P. Collier, George Mason

More information

Institutional and Student Characteristics that Predict Graduation and Retention Rates

Institutional and Student Characteristics that Predict Graduation and Retention Rates Institutional and Student Characteristics that Predict Graduation and Retention Rates Dr. Braden J. Hosch Director of Institutional Research and Assessment Central Connecticut State University, New Britain,

More information

The Importance of Community College Honors Programs

The Importance of Community College Honors Programs 6 This chapter examines relationships between the presence of honors programs at community colleges and institutional, curricular, and student body characteristics. Furthermore, the author relates his

More information

The Earlier the Better? Taking the AP in 10th Grade. By Awilda Rodriguez, Mary E. M. McKillip, and Sunny X. Niu

The Earlier the Better? Taking the AP in 10th Grade. By Awilda Rodriguez, Mary E. M. McKillip, and Sunny X. Niu Research Report 2012-10 The Earlier the Better? Taking the AP in 10th Grade By Awilda Rodriguez, Mary E. M. McKillip, and Sunny X. Niu Awilda Rodriguez is a doctoral student in Higher Education at the

More information

Examining the Relationship between Early College Credit and Higher Education Achievement of First- Time Undergraduate Students in South Texas

Examining the Relationship between Early College Credit and Higher Education Achievement of First- Time Undergraduate Students in South Texas Electronic Journal for Inclusive Education Volume 2 Number 6 Electronic Journal for Inclusive Education Vol. 2, No. 6 (Spring/Summer 2010) Article 4 Spring 2010 Examining the Relationship between Early

More information

The College of Saint Elizabeth Report Narrative

The College of Saint Elizabeth Report Narrative Program Overview and Mission The College of Saint Elizabeth Report Narrative The College of Saint Elizabeth has been recognized as a leader in teacher education since its founding over 100 years ago. In

More information

October 2008 Research Brief: What does it take to prepare students academically for college?

October 2008 Research Brief: What does it take to prepare students academically for college? October 2008 Research Brief: What does it take to prepare students academically for college? The research is clear on the connection between high school coursework and success in college. The more academically

More information

BEST PRACTICES IN COMMUNITY COLLEGE BUDGETING

BEST PRACTICES IN COMMUNITY COLLEGE BUDGETING BEST PRACTICES IN COMMUNITY COLLEGE BUDGETING SUMMARY Key Points: By conducting an environmental analysis, a college can identify goals to pursue by evaluating where performance improvement and adaptation

More information

Bowen, Chingos & McPherson, Crossing the Finish Line

Bowen, Chingos & McPherson, Crossing the Finish Line 1 Bowen, W. G., Chingos, M. M., and McPherson, M. S. (2009). Crossing the Finish Line: Completing College at America s Public Universities. Princeton, N.J.: Princeton University Press. The authors begin

More information

NSSE S BENCHMARKS ONE SIZE FITS ALL?

NSSE S BENCHMARKS ONE SIZE FITS ALL? NSSE S BENCHMARKS ONE SIZE FITS ALL? Nava Lerer Kathryn Talley Adelphi University Prepared for the Northeast Association for Institutional Research 32th Annual Conference, November 2005 NSSE S BENCHMARKS

More information

Michigan Technological University College Portrait. The Huskies Community. Carnegie Classification of Institutional Characteristics

Michigan Technological University College Portrait. The Huskies Community. Carnegie Classification of Institutional Characteristics Page 1 of 11 College Portrait Michigan Tech was founded in 1885 and today is a leading public research university developing new technologies and preparing students to create the future for a prosperous

More information

2014 EPP Annual Report

2014 EPP Annual Report 2014 EPP Annual Report CAEP ID: 24736 AACTE SID: Institution: Tulane University EPP: Teacher Preparation and Certification Program Section 1. AIMS Profile After reviewing and/or updating the Educator Preparation

More information

Higher Performing High Schools

Higher Performing High Schools COLLEGE READINESS A First Look at Higher Performing High Schools School Qualities that Educators Believe Contribute Most to College and Career Readiness 2012 by ACT, Inc. All rights reserved. A First Look

More information

Policy Capture for Setting End-of-Course and Kentucky Performance Rating for Educational Progress (K-PREP) Cut Scores

Policy Capture for Setting End-of-Course and Kentucky Performance Rating for Educational Progress (K-PREP) Cut Scores 2013 No. 007 Policy Capture for Setting End-of-Course and Kentucky Performance Rating for Educational Progress (K-PREP) Cut Scores Prepared for: Authors: Kentucky Department of Education Capital Plaza

More information

Inspirational Innovation

Inspirational Innovation A HIGHER EDUCATION PRESIDENTIAL THOUGHT LEADERSHIP SERIES 2014-2015 Series: Inspirational Innovation CHAPTER 2 Goal Realization: Diverse Approaches for a Diverse Generation Goal Realization: Diverse Approaches

More information

California State University, Los Angeles College Portrait. The Cal State LA Community. Carnegie Classification of Institutional Characteristics

California State University, Los Angeles College Portrait. The Cal State LA Community. Carnegie Classification of Institutional Characteristics Page 1 of 8 College Cal State L.A. has been a dynamic force in the education of students, setting a record of outstanding academic achievement for more than 60 years within the California State University

More information

NATIONAL CENTER FOR EDUCATION STATISTICS

NATIONAL CENTER FOR EDUCATION STATISTICS Cover Image End of image description. NATIONAL CENTER FOR EDUCATION STATISTICS What Is IPEDS? The Integrated Postsecondary Education Data System (IPEDS) is a system of survey components that collects data

More information

Co-Curricular Activities and Academic Performance -A Study of the Student Leadership Initiative Programs. Office of Institutional Research

Co-Curricular Activities and Academic Performance -A Study of the Student Leadership Initiative Programs. Office of Institutional Research Co-Curricular Activities and Academic Performance -A Study of the Student Leadership Initiative Programs Office of Institutional Research July 2014 Introduction The Leadership Initiative (LI) is a certificate

More information

Kurt F. Geisinger. Changes in Outcomes Assessment for Accreditation: The Case of Teacher Education in the United States

Kurt F. Geisinger. Changes in Outcomes Assessment for Accreditation: The Case of Teacher Education in the United States Kurt F. Geisinger Changes in Outcomes Assessment for Accreditation: The Case of Teacher Education in the United States Overview of Accreditation 2 Accreditation 3 Three accreditation agencies for tests

More information

Abstract Title Page. Title: Conditions for the Effectiveness of a Tablet-Based Algebra Program

Abstract Title Page. Title: Conditions for the Effectiveness of a Tablet-Based Algebra Program Abstract Title Page. Title: Conditions for the Effectiveness of a Tablet-Based Algebra Program Authors and Affiliations: Andrew P. Jaciw Megan Toby Boya Ma Empirical Education Inc. SREE Fall 2012 Conference

More information

Virginia s College and Career Readiness Initiative

Virginia s College and Career Readiness Initiative Virginia s College and Career Readiness Initiative In 1995, Virginia began a broad educational reform program that resulted in revised, rigorous content standards, the Virginia Standards of Learning (SOL),

More information

SC T Ju ly 2 0 0 0. Effects of Differential Prediction in College Admissions for Traditional and Nontraditional-Aged Students

SC T Ju ly 2 0 0 0. Effects of Differential Prediction in College Admissions for Traditional and Nontraditional-Aged Students A C T Rese& rclk R e p o rt S eries 2 0 0 0-9 Effects of Differential Prediction in College Admissions for Traditional and Nontraditional-Aged Students Julie Noble SC T Ju ly 2 0 0 0 For additional copies

More information

Utah Comprehensive Counseling and Guidance Program Evaluation Report

Utah Comprehensive Counseling and Guidance Program Evaluation Report Utah Comprehensive Counseling and Guidance Program Evaluation Report John Carey and Karen Harrington Center for School Counseling Outcome Research School of Education University of Massachusetts Amherst

More information

Developmental education and college readiness at the University of Alaska Michelle Hodara Monica Cox Education Northwest

Developmental education and college readiness at the University of Alaska Michelle Hodara Monica Cox Education Northwest May 2016 What s Happening Developmental education and college readiness at the University of Alaska Michelle Hodara Monica Cox Education Northwest Key findings Developmental education placement rates among

More information

THE EFFECTIVENESS OF SHORT-TERM STUDY ABROAD PROGRAMS IN THE DEVELOPMENT OF INTERCULTURAL COMPETENCE AMONG PARTICIPATING CSU STUDENTS.

THE EFFECTIVENESS OF SHORT-TERM STUDY ABROAD PROGRAMS IN THE DEVELOPMENT OF INTERCULTURAL COMPETENCE AMONG PARTICIPATING CSU STUDENTS. THE EFFECTIVENESS OF SHORT-TERM STUDY ABROAD PROGRAMS IN THE DEVELOPMENT OF INTERCULTURAL COMPETENCE AMONG PARTICIPATING CSU STUDENTS Rafael Carrillo Abstract Study abroad programs have recently been shortening

More information

Academic Performance of IB Students Entering the University of California System from 2000 2002

Academic Performance of IB Students Entering the University of California System from 2000 2002 RESEARCH SUMMARY Academic Performance of IB Students Entering the University of California System from 2000 2002 IB Global Policy & Research Department August 2010 Abstract This report documents the college

More information

Peer Reviewed. Abstract

Peer Reviewed. Abstract Peer Reviewed The three co-authors are members of the faculty at the University of South Carolina Upstate. William R. Word and Sarah P. Rook (srook@uscupstate.edu) are Professors of Economics. Lilly M.

More information

January 6, 2012. The American Board of Radiology (ABR) serves the public.

January 6, 2012. The American Board of Radiology (ABR) serves the public. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 January 6, 2012 ABR s EXAM SECURITY The American Board of Radiology (ABR) serves the

More information

Angela L. Vaughan, Ph. D.

Angela L. Vaughan, Ph. D. Angela L. Vaughan, Ph. D. Director, First Year Curriculum and Instruction Academic Support & Advising University of Northern Colorado 501 20 th Street Greeley, CO 80639 (970)351 1175 angela.vaughan@unco.edu

More information

Florida s Plan to Ensure Equitable Access to Excellent Educators. heralded Florida for being number two in the nation for AP participation, a dramatic

Florida s Plan to Ensure Equitable Access to Excellent Educators. heralded Florida for being number two in the nation for AP participation, a dramatic Florida s Plan to Ensure Equitable Access to Excellent Educators Introduction Florida s record on educational excellence and equity over the last fifteen years speaks for itself. In the 10 th Annual AP

More information

Striving for Success: Teacher Perspectives of a Vertical Team Initiative

Striving for Success: Teacher Perspectives of a Vertical Team Initiative VOLUME 16 NUMBER 3, 2006 Striving for Success: Teacher Perspectives of a Vertical Team Initiative Dr. Lisa Bertrand Educational Administration and Counseling Southeast Missouri State University Dr. Ruth

More information

Living in the Red Hawks Community

Living in the Red Hawks Community http://www.collegeportraits.org/nj/msu 1 of 1 7/23/2014 9:58 AM Founded in 1908, is New Jersey s second largest university. It offers all the advantages of a large university a comprehensive undergraduate

More information

Develop and implement a systematic process that assesses, evaluates and supports open-access and equity with measurable outcome improvements

Develop and implement a systematic process that assesses, evaluates and supports open-access and equity with measurable outcome improvements STRATEGIC GOAL 1: Promote Access and Equity Develop and implement a systematic process that assesses, evaluates and supports open-access and equity with measurable outcome improvements Increase pre-collegiate

More information

Test Bias. As we have seen, psychological tests can be well-conceived and well-constructed, but

Test Bias. As we have seen, psychological tests can be well-conceived and well-constructed, but Test Bias As we have seen, psychological tests can be well-conceived and well-constructed, but none are perfect. The reliability of test scores can be compromised by random measurement error (unsystematic

More information

MASSACHUSETTS SMALL AND RURAL SCHOOLS TASK FORCE THE EFFECTIVENESS, VALUE AND IMPORTANCE OF SMALL DISTRICTS

MASSACHUSETTS SMALL AND RURAL SCHOOLS TASK FORCE THE EFFECTIVENESS, VALUE AND IMPORTANCE OF SMALL DISTRICTS MASSACHUSETTS SMALL AND RURAL SCHOOLS TASK FORCE THE EFFECTIVENESS, VALUE AND IMPORTANCE OF SMALL DISTRICTS DEFINITION OF SMALL SCHOOLS The vast majority of educational research supports the concept of

More information

Moving the Needle on Access and Success

Moving the Needle on Access and Success National Postsecondary Education Cooperative (NPEC) Moving the Needle on Access and Success Student Success Symposium November 2, 2006 David A. Longanecker Executive Director Western Interstate Commission

More information

The Condition of College & Career Readiness l 2011

The Condition of College & Career Readiness l 2011 The Condition of College & Career Readiness l 2011 ACT is an independent, not-for-profit organization that provides assessment, research, information, and program management services in the broad areas

More information

LOUISIANA SCHOOLS PROFILE RE-IMAGINING TEACHING AND LEARNING

LOUISIANA SCHOOLS PROFILE RE-IMAGINING TEACHING AND LEARNING LOUISIANA SCHOOLS PROFILE RE-IMAGINING TEACHING AND LEARNING NEW TECH NETWORK New Tech Network is a non-profit school development organization dedicated to ensuring all students develop the skills and

More information

Results of the Spring, 2011 Administration of the National Survey of Student Engagement at New Mexico Highlands University

Results of the Spring, 2011 Administration of the National Survey of Student Engagement at New Mexico Highlands University Results of the Spring, 2011 Administration of the National Survey of Student Engagement at New Mexico Highlands University Prepared by the Office of Institutional Effectiveness and Research The National

More information

Examining College Students Gains in General Education

Examining College Students Gains in General Education Examining College Students Gains in General Education Dena A. Pastor and Pamela K. Kaliski James Madison University Brandi A.Weiss University of Maryland Abstract Do students change as a result of completing

More information

Power Calculation Using the Online Variance Almanac (Web VA): A User s Guide

Power Calculation Using the Online Variance Almanac (Web VA): A User s Guide Power Calculation Using the Online Variance Almanac (Web VA): A User s Guide Larry V. Hedges & E.C. Hedberg This research was supported by the National Science Foundation under Award Nos. 0129365 and 0815295.

More information

TEXAS SCHOOLS PROFILE RE-IMAGINING TEACHING AND LEARNING

TEXAS SCHOOLS PROFILE RE-IMAGINING TEACHING AND LEARNING TEXAS SCHOOLS PROFILE RE-IMAGINING TEACHING AND LEARNING NEW TECH NETWORK A LEARNING COMMUNITY New Tech Network is a non-profit school development organization dedicated to ensuring all students develop

More information

International Baccalaureate

International Baccalaureate Preparation for International Baccalaureate Camdenton High School 2007-2008 Frequently Asked Questions Who should attempt to earn the IB Diploma? Students seeking the International Baccalaureate should

More information

ED301140 1988-00-00 The Master's Degree. ERIC Digest.

ED301140 1988-00-00 The Master's Degree. ERIC Digest. ED301140 1988-00-00 The Master's Degree. ERIC Digest. ERIC Development Team www.eric.ed.gov Table of Contents If you're viewing this document online, you can click any of the topics below to link directly

More information

Summary of the Case University of Southern Maine Teacher Education Program October 28-30, 2014 Authorship and approval of the Inquiry Brief:

Summary of the Case University of Southern Maine Teacher Education Program October 28-30, 2014 Authorship and approval of the Inquiry Brief: Summary of the Case University of Shern Maine Teacher Education Program October 28-30, 2014 The Summary of the Case is written by the auditors and approved by program faculty. The Summary reflects the

More information

Which Path? A Roadmap to a Student s Best College. National College Access Network National Conference Mary Nguyen Barry September 16, 2014

Which Path? A Roadmap to a Student s Best College. National College Access Network National Conference Mary Nguyen Barry September 16, 2014 Which Path? A Roadmap to a Student s Best College National College Access Network National Conference Mary Nguyen Barry September 16, 2014 THE EDUCATION TRUST WHO WE ARE The Education Trust works for the

More information

Predicting Long-Term College Success through Degree Completion Using ACT

Predicting Long-Term College Success through Degree Completion Using ACT ACT Research Report Series 2012 (5) Predicting Long-Term College Success through Degree Completion Using ACT Composite Score, ACT Benchmarks, and High School Grade Point Average Justine Radunzel Julie

More information

Use of Placement Tests in College Classes

Use of Placement Tests in College Classes By Andrew Morgan This paper was completed and submitted in partial fulfillment of the Master Teacher Program, a 2-year faculty professional development program conducted by the Center for Teaching Excellence,

More information

Economic inequality and educational attainment across a generation

Economic inequality and educational attainment across a generation Economic inequality and educational attainment across a generation Mary Campbell, Robert Haveman, Gary Sandefur, and Barbara Wolfe Mary Campbell is an assistant professor of sociology at the University

More information

Are high achieving college students slackers? Brent J. Evans

Are high achieving college students slackers? Brent J. Evans Motivation & Research Question: Are high achieving college students slackers? Brent J. Evans There is a growing body of evidence that suggests college students are not academically challenged by or engaged

More information

Policy Alert PUBLIC POLICY AND HIGHER EDUCATION THE NATIONAL CENTER FOR STATE POLICIES ON 2/4 TRANSFERS ARE KEY TO DEGREE ATTAINMENT.

Policy Alert PUBLIC POLICY AND HIGHER EDUCATION THE NATIONAL CENTER FOR STATE POLICIES ON 2/4 TRANSFERS ARE KEY TO DEGREE ATTAINMENT. THE NATIONAL CENTER FOR PUBLIC POLICY AND HIGHER EDUCATION Policy Alert May 2004 QUICK LOOK Key Issue State policies must be assessed and updated to effectively support the success of students transferring

More information

NORWIN SCHOOL DISTRICT 105. CURRICULUM PROCEDURES OPTIONS TO ACHIEVING CREDITS

NORWIN SCHOOL DISTRICT 105. CURRICULUM PROCEDURES OPTIONS TO ACHIEVING CREDITS NORWIN SCHOOL DISTRICT 105. CURRICULUM PROCEDURES OPTIONS TO ACHIEVING CREDITS The Board recognizes the need to allow students flexibility to accelerate through courses and has established the following

More information

Measuring Quality 1. Measuring Quality: A Comparison of U. S. News Rankings and NSSE Benchmarks

Measuring Quality 1. Measuring Quality: A Comparison of U. S. News Rankings and NSSE Benchmarks Measuring Quality 1 Measuring Quality: A Comparison of U. S. News Rankings and NSSE Benchmarks Gary R. Pike Director of Institutional Research Mississippi State University P. O. Box EY Mississippi State,

More information

Home Schooling Achievement

Home Schooling Achievement ing Achievement Why are so many parents choosing to home school? Because it works. A 7 study by Dr. Brian Ray of the National Home Education Research Institute (NHERI) found that home educated students

More information

Online Graduate Certificate in Supply Chain Management

Online Graduate Certificate in Supply Chain Management Online Graduate Certificate in Supply Chain Management Message from the Dean It is my honor and pleasure to welcome you to the D Amore-McKim School of Business at Northeastern University. Our unique educational

More information

College. Of Education

College. Of Education College Of Education Contact Us 00971-2-5993111 (Abu Dhabi) 00971-4-4021111 (Dubai) 00971-2- 5993783 (College of Education) @Zayed_U www.facebook.com/zayeduniversity www.zu.ac.ae Introduction and Mission

More information

Assessment Practices in Undergraduate Accounting Programs

Assessment Practices in Undergraduate Accounting Programs ABSTRACT Assessment Practices in Undergraduate Accounting Programs Anna L. Lusher Slippery Rock University This study examined accounting program assessment plans at 102 colleges and universities in the

More information

A Synopsis of Chicago Freshman Enrollment at DePaul University Fall 2004 2008

A Synopsis of Chicago Freshman Enrollment at DePaul University Fall 2004 2008 A Synopsis of Chicago Freshman Enrollment at DePaul University Fall 2004 2008 Prepared and presented in May 2009 by David H. Kalsbeek, Ph.D. Senior Vice President for Enrollment Management and Marketing

More information

CRITICAL THINKING ASSESSMENT

CRITICAL THINKING ASSESSMENT CRITICAL THINKING ASSESSMENT REPORT Prepared by Byron Javier Assistant Dean of Research and Planning 1 P a g e Critical Thinking Assessment at MXC As part of its assessment plan, the Assessment Committee

More information

ACT Research Explains New ACT Test Writing Scores and Their Relationship to Other Test Scores

ACT Research Explains New ACT Test Writing Scores and Their Relationship to Other Test Scores ACT Research Explains New ACT Test Writing Scores and Their Relationship to Other Test Scores Wayne J. Camara, Dongmei Li, Deborah J. Harris, Benjamin Andrews, Qing Yi, and Yong He ACT Research Explains

More information

RECRUITING FOR RETENTION: Hospitality Programs. Denis P. Rudd Ed.D. Robert Morris University Pittsburgh Pennsylvania

RECRUITING FOR RETENTION: Hospitality Programs. Denis P. Rudd Ed.D. Robert Morris University Pittsburgh Pennsylvania RECRUITING FOR RETENTION: Hospitality Programs Denis P. Rudd Ed.D. Robert Morris University Pittsburgh Pennsylvania Abstract Colleges and universities must take into account the impact the dwindling economy

More information

College Transition Programs: Promoting Success Beyond High School

College Transition Programs: Promoting Success Beyond High School College Transition Programs: Promoting Success Beyond High School I s s u e P a p e r s T h e H i g h S c h o o l L e a d e r s h i p S u m m i t Parents have higher educational aspirations for their children

More information

Using Value Added Models to Evaluate Teacher Preparation Programs

Using Value Added Models to Evaluate Teacher Preparation Programs Using Value Added Models to Evaluate Teacher Preparation Programs White Paper Prepared by the Value-Added Task Force at the Request of University Dean Gerardo Gonzalez November 2011 Task Force Members:

More information

Mapping State Proficiency Standards Onto the NAEP Scales:

Mapping State Proficiency Standards Onto the NAEP Scales: Mapping State Proficiency Standards Onto the NAEP Scales: Variation and Change in State Standards for Reading and Mathematics, 2005 2009 NCES 2011-458 U.S. DEPARTMENT OF EDUCATION Contents 1 Executive

More information

GMAC. Which Programs Have the Highest Validity: Identifying Characteristics that Affect Prediction of Success 1

GMAC. Which Programs Have the Highest Validity: Identifying Characteristics that Affect Prediction of Success 1 GMAC Which Programs Have the Highest Validity: Identifying Characteristics that Affect Prediction of Success 1 Eileen Talento-Miller & Lawrence M. Rudner GMAC Research Reports RR-05-03 August 23, 2005

More information