Accreditation and Assessment Annotated Bibliography. Thomas W. Zane, PhD. November 2011


Please Send Your Suggestions for Improving This Bibliography
Substantive errors.
Other articles, reports, books, etc.
Your opinions (pro or con) about any of the annotations.
Thanks, Thomas W. Zane, Assessment Department, Salt Lake Community College

Index

American Association of University Professors. (1991). Mandated Assessment of Educational Outcomes.
American Association of University Professors. (1940). Statement of Principles on Academic Freedom and Tenure.
American Association of University Professors. (1975). Statement on Teaching Evaluation.
American Association of University Professors. (2009). Statement on Professional Ethics. Author. (This is an update of the original statement on ethics adopted in 1966.)
Association of American Colleges and Universities. (2008). New leadership for student learning and accountability.
Banta, T. W., Griffin, M., Flateby, T. L., and Kahn, S. (2009). Three promising alternatives for assessing college students' knowledge and skills. (NILOA Occasional Paper No. 2). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Bardo, J. W. (2009). The impact of the changing climate for accreditation on the individual college or university: Five trends and their implications. New Directions for Higher Education, 145(Spring).
Bers, T. H. (2008). The role of institutional assessment in assessing student learning outcomes. New Directions for Higher Education, 141(Spring).
Blaich, C., and Wise, K. (2011). From gathering to using assessment results: Lessons from the Wabash National Study. (Occasional Paper No. 8). Washington, D.C.: National Institute for Learning Outcomes Assessment.
Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A compilation of institutional good practices. Sterling, VA: Stylus. {Book}

Council for Higher Education Accreditation. (2006). Accreditation and accountability: A CHEA special report. Author.
Council of Regional Accrediting Commissions. (2004). Regional accreditation and student learning: A guide for institutions and evaluators. Washington, D.C.
Duffy, D. K., Duffy, J. J., and Jones, J. W. (1997). Tuning up your class: Using assessment for optimal student performance. Journal of Excellence in College Teaching, 8(2).
Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of departure. (CHEA Occasional Paper). Washington, D.C.: Council for Higher Education Accreditation.
Ewell, P. T. (2008). U.S. accreditation and the future of quality assurance: A tenth anniversary report from the Council for Higher Education Accreditation. Washington, D.C.: The Council for Higher Education Accreditation.
Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension. (Occasional Paper #1). Washington, D.C.: National Institute for Learning Outcomes Assessment.
Ewell, P., Paulson, K., and Kinze, J. (2011). Down and in: Assessment practices at the program level. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Fulks, J., Caballero de Cordero, A., Alancraig, M., Bearden, K., Lee, S., Pacheco, R., Tomson, J., and Williams, G. (2007). Agents of change: Examining the role of student learning outcomes and assessment coordinators in California community colleges. The Accreditation of Student Learning Outcomes Committee of the Academic Senate for California Community Colleges, Sacramento, CA.
Georgia State University. (2011). Georgia State University rubrics for program assessment.
Gibbs, G., and Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching, 1(1).
Gilbert, G. (2010). Making faculty count in higher education assessment. Academe, 96(5).
Gill, W. E. (2006). Outcomes assessment in the accreditation process. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA (April 2006).
Gillmore, G. M. (2004). The evaluation of general education: Lessons from the USA State of Washington experience. (Report 04-03). Washington, D.C.: Office of Educational Assessment.
Gold, L., Rhoades, G., Smith, M., and Kuh, G. (2011). What faculty unions say about student learning outcomes assessment. (NILOA Occasional Paper No. 9). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Haertel, E. H. (1999). Performance assessment and education reform. Phi Delta Kappan, 80(9).

Hatfield, S. (2011). Assessing your program-level assessment plan. (IDEA Paper No. 45). Manhattan, KS: The IDEA Center.
Hutchings, P. (2010, April). Opening doors to faculty involvement in assessment. (NILOA Occasional Paper No. 4). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Katz, S. N. (2010). Beyond crude measurement and consumerism. Academe Online, September-October.
Keeling, R. P., Wall, A. F., Underhile, R., & Dungy, G. J. (2008). Assessment reconsidered: Institutional effectiveness for student success. Washington, D.C.: National Association of Student Personnel Administrators (NASPA).
Knight, P. T. (2002). The Achilles' heel of quality: The assessment of student learning. Quality in Higher Education, 8(1).
Kuh, G. D., and Ewell, P. T. (2010). The state of learning outcomes assessment in the United States. Higher Education Management and Policy, 22(1).
Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Leskes, A., & Wright, B. D. (2005). The art and science of assessing general education outcomes. Washington, D.C.: Association of American Colleges and Universities.
Maki, P. L. (2009). Moving beyond a national habit in the call for accountability. Peer Review, 11(1).
Maki, P. L. (2010). Coming to terms with student outcomes assessment: Faculty and administrators' journeys to integrating assessment in their work and institutional culture. Sterling, VA: Stylus.
McGuire, P. A. (2009). Accreditation's benefits for individuals and institutions. New Directions for Higher Education, 145(Spring).
Miller, R. (2007). Assessment in cycles of improvement: Faculty designs for essential learning outcomes. Washington, D.C.: Association of American Colleges and Universities.
Murray, F. B. (2009). An accreditation dilemma: The tension between program accountability and program improvement in programmatic accreditation. New Directions for Higher Education, 145(Spring).
Murray, M., Perez, J., and Guimaraes, M. (2008). A model for using a capstone experience as one method of assessment of an information systems degree program. Journal of Information Systems Education, 19(2).
Powell, J. W. (2011). Outcomes assessment: Conceptual and other problems. AAUP Journal of Academic Freedom, Volume Two.
Rhodes, T. (2009). The VALUE project overview. Peer Review, 11(1), 4-7.
Rogers, G. (2003). Do grades make the grade for program assessment? Communications Link, Accreditation Board for Engineering and Technology, Baltimore, MD.

Rogers, G. M. (2007). Self assessment: Quality assurance of program level assessment of student learning. Accreditation Board for Engineering and Technology, Baltimore, MD.
Ruben, B. D. (2007). Higher education assessment: Linking accreditation standards and the Malcolm Baldrige criteria. New Directions for Higher Education, 137(Spring).
Schuh, J. H., and Gansemer-Topf, A. M. (2010). The role of student affairs in student learning assessment. (Occasional Paper No. 7). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Smith, D. S., & Katz, M. A. (2007). Academic freedom in an age of assessment and accountability. Midwest Law Journal, 22.
Swing, R. L., and Coogan, C. S. (2010). Valuing assessment: Cost-benefit considerations. (Occasional Paper No. 5). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
U.S. Department of Education. (2006). A test of leadership: Charting the future of U.S. higher education. Washington, D.C.
Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass.
Wellman, J. V. (2010). Connecting the dots between learning and resources. (Occasional Paper No. 3). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Wergin, J. F. (2003). Regional accreditation and student learning: Preparing teams for effective deliberation. Washington, D.C.: Council of Regional Accrediting Commissions.
Wright, R. E. (2010). Standardized testing for outcome assessment: Analysis of the educational testing systems MBA tests. College Student Journal, 44(1).
Zane, T. W., Schnitz, J. W., and Abel, M. H. (2010). Western Governors University: A radical model for preservice teacher education. In L. B. Erickson & N. Wentworth (Eds.), Tensions in teacher preparation: Accountability, assessment, and accreditation. United Kingdom: Emerald.

Annotated References

American Association of University Professors. (1991). Mandated Assessment of Educational Outcomes. Author. Retrieved August 15, 2011 from {Position Paper}

Documents referenced within the above article include:
American Association of University Professors. (1940). Statement of Principles on Academic Freedom and Tenure. Author. Retrieved August 15, 2011 from
American Association of University Professors. (1975). Statement on Teaching Evaluation. Author. Retrieved August 15, 2011 from
American Association of University Professors. (2009). Statement on Professional Ethics. Author. (This is an update of the original statement on ethics adopted in 1966.) Retrieved August 15, 2011 from

I reviewed the listed documents from the American Association of University Professors (AAUP) because accreditation standards and just about any discussion of assessment in higher education touch on the principles of Academic Freedom (AF). My purpose for reading these documents was to better understand whether AF supports assessment or not, and what, if any, guidance the AAUP might offer to those of us involved with this work.

Disclaimer: I will focus on the Mandated Assessment of Educational Outcomes document because it is more germane to the scope and purpose of this annotated bibliography. I will not attempt to review all the standards that I found within these documents because they deal with so many aspects of the educational endeavor and are not directly pertinent to the current discussion. In addition, I will not attempt to enter (much less try to settle!) the ongoing and vigorous debate among faculty over the meaning of AF. One need only review Gilbert (2010), Katz (2010), and Powell (2011) {in this bibliography} to see how differently various faculty interpret the Principles and Ethics and how they feel about assessment. I will therefore limit my comments to a few threads that provide foundational positions on assessment in higher education as discussed in the AAUP Mandated Assessment of Educational Outcomes document.

First and foremost, the writers make it clear that they are concerned about mandated assessment (such as specific exams) imposed from outside the academy. They point out that this sort of assessment often uses inappropriate means for collecting data and may lead to unfair interpretations. I agree. For example, some outside agencies call for using a specific vendor's exam, regardless of institutional mission and regardless of differences among student populations. Another point I agree with is the committee's assertion that assessment is not a new or alien concept to the academy. Any of us in higher education know that we have been using various types of assessment in almost every classroom for many years. However, I did find one thing to disagree with. I felt that pairing mandated assessment with bad assessment was a bit confining. Assessment can be good or bad regardless of who is driving it.

I've seen some mandated assessments that were wholly appropriate (e.g., licensure tests for preservice teachers) and I've also seen inappropriate assessment developed inside the academy. So the real issue is the appropriateness and quality of a given assessment, regardless of where it was created. The authors address three relevant issues for us dealing with who, why, and how we should assess student learning outcomes. I will briefly touch on each of these and will provide the original text from the cited AAUP standards.

Should Faculty Assess? The AAUP Outcomes Committee states that properly conducted assessment is appropriate and consistent with the concepts of AF. The committee's statement includes the following, which I believe affirms the legitimacy of good assessment practices in higher education: Public agencies charged with the oversight of higher education, and the larger public and the diverse constituencies that colleges and universities represent, have a legitimate stake in the effectiveness of teaching and learning. Their insistence that colleges and universities provide documented evidence of effectiveness is appropriate to the extent that such agencies and their constituencies do not: (a) make demands that significantly divert the energies of the faculty, administration, or governing board from the institution's primary commitment to teaching, research, and public service; or (b) impose additional fiscal and human burdens beyond the capacity of the responding institution to bear.

There are conflicting opinions among faculty on these issues. Some faculty believe that assessment is simply not appropriate (see Powell, 2011, for an example in this bibliography). Interestingly, I did not find any other articles in my research that suggested assessment (in general) conflicted with AF. Rather, I found concerns over specific bad practices in assessment and how those conflicted with elements of AF.

Why should faculty assess? The purposes of any assessment (regardless of source and quality) are a continuing point for debate among faculty. For example, the near dichotomy between assessment for learning and assessment for accountability is an ever-present point for discussion. The AAUP was quite clear on why faculty should assess in its Ethics document: Professors, guided by a deep conviction of the worth and dignity of the advancement of knowledge, recognize the special responsibilities placed upon them. Their primary responsibility to their subject is to seek and to state the truth as they see it. To this end professors devote their energies to developing and improving their scholarly competence. They accept the obligation to exercise critical self-discipline and judgment in using, extending, and transmitting knowledge. They practice intellectual honesty.

From this passage I believe we see backing for the idea that faculty are professionals in that they continually investigate, reflect, and improve, which suggests a commitment to assessment in some form, which leads us to the next question.

How should faculty assess? What remains unresolved to date are questions over who controls assessment and what form it should take. The authors of the Mandated Assessment article successfully make the case that deciding when and how best to assess student learning and success is extremely complex. I agree. Like the authors, I believe that assessment methods that work at one institution may not fit another.
The authors go on to explain how assessment of overall institutional goals, general education goals, program effectiveness, and student learning may differ even within a single institution. I agree with these points as well.

They argue that whatever assessment methods we use should be conducted in a climate free of external constraint or threats, however vaguely disguised (p. 12). I partially agree on this point. I feel there is more to it than the authors provided. Because faculty attitudes differ, including across departments here at my institution, the line between constraints and helpful guidelines is a moving target. Take, for example, the planning and reporting templates we provide online. Some faculty members are grateful for these templates while others feel we are imposing a format on them. Thus, our institution makes such things as professional development opportunities, templates for plans and reports, and other guides available, but optional.

The authors begin their last section by clearly restating their position: assessment of student learning outcomes may provide the most valid measure, though also the most difficult to obtain reliably, for the evaluation of teaching effectiveness. The association (AAUP) has also recognized that such assessment is the responsibility of the faculty (p. 13). The balance of this last section reads like a list of best practices that could be found in just about any educational measurement textbook or in the standards for testing (see Standards for psychological testing, 1999). The committee presents several assessment suggestions that I wholeheartedly support. The suggestions include such things as faculty primacy in assessment decision making, adherence to due process, clear and delineated purposes for use of assessment data, protections against misuse of results, the need to assess what matters (as opposed to what is easy to assess), and the need for a mix of quantitative and qualitative data collection to better understand the results.

The audience for this document seems to be all higher education faculty and administrators because the standards outline useful rules for all educators regarding how assessment should be accomplished. Taken together, the documents listed above clearly state that Academic Freedom is supportive of valid assessment, and they provide guidance about why and how assessment ought to occur in higher education.

Association of American Colleges and Universities. (2008). New leadership for student learning and accountability. Author. Retrieved August 19, 2011 from {Position Paper}

This is a very short document that simply states the AAC&U principles related to assessment in higher education:
1. The primary responsibility for achieving excellence falls on colleges and universities themselves.
2. To that end, each college and university (and major divisions, schools, and programs within them) should develop ambitious, specific, and clearly stated goals for student learning appropriate to its mission, resources, tradition, student body, and community setting.
3. Each college and university should gather evidence about how well students in various programs are achieving learning goals across the curriculum and about the ability of its graduates to succeed in a challenging and rapidly changing world.
4. Each college and university should provide information about its basic characteristics, clearly communicate its educational mission, and describe its strategies for achieving its educational goals and their effectiveness.
5. Understanding that the federal government has a responsibility to see that its funds are properly used, we recognize the importance of its careful monitoring of expenditures of its funds and its reliance on independent accrediting organizations to encourage systematic improvement of educational results.
6. As educational associations, we are committed to high standards for our institutions of higher education and their students. To achieve this new configuration of American higher education with its emphasis on transparency and accountability, colleges and universities will need ongoing support from governmental agencies, philanthropic foundations, corporations, and private donors.

Then they present what they promise to do:
1) We will present these principles to our members and others and encourage a vigorous discussion with the goal of implementing them on campuses as fully as possible in the near future.
2) We will disseminate this document as widely as possible and encourage broad discussion of it.
3) We will keep pressing for the highest possible standards and expectations for student learning at every level.
4) We welcome the progress various associations of colleges and universities have made in developing widely agreed upon templates that will provide college applicants, their parents, legislators, and the general public with important data about demographics, admission and completion information, costs and financial aid, student engagement, and other relevant information.
5) We will encourage our members to augment such templates with clear descriptions of their educational goals for student accomplishment and with information about the practices they use to foster, assess and report student attainment of those goals.
6) We commend those organizations and their philanthropic supporters that have in recent years developed promising means of assessing important outcomes of higher education.
7) Recognizing that setting high standards is not the same as standardization, we also support and encourage the development of assessments of student attainment that are anchored in the curriculum that faculty teach.
8) We will regularly report to the public on the overall progress made in achieving these actions. We stand ready to be judged by the high standards this document sets for American higher education. Our students deserve nothing less.
Like the CHEA (2006) document discussed later in this bibliography, this seems to be an open letter to anyone involved in accreditation. However, it also sounds like a message meant for federal officials.

Banta, T. W., Griffin, M., Flateby, T. L., and Kahn, S. (2009). Three promising alternatives for assessing college students' knowledge and skills. (NILOA Occasional Paper No. 2). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved May 31, 2011 from {Report}

The authors present descriptions of three good alternatives for assessment that meet both internal student learning needs and external accountability requirements. The focus throughout the paper is on describing the assessment methods and on how those methods will affect students, faculty, institutions, and accreditation organizations. They discuss portfolio assessment with accompanying methods and rubric scoring. Then they discuss a shared rubric approach known as the Cognitive Level and Quality of Writing Assessment, which is designed to work across courses, departments, and even across institutions. Their description of rubrics throughout the paper is consistent with what many other authors included in this bibliography say about rubric uses and best practices. Moreover, this article is one of the top ten I reviewed because it clearly links and illustrates issues in learning theory, rubric literature, and accreditation literature. The authors place theoretical rules for good assessment I've seen listed in other assessment documents into meaningful context, thus providing faculty members with a better understanding of why certain rules for building and using rubrics exist. For example, they show that the same rubrics, if designed for this purpose, can easily support student learning via formative evaluation and feedback to students and still support summative evaluation that would be meaningful for curriculum decisions and for meeting accountability requirements.

The audience for this article is supposed to be faculty at any institution. However, the tone of the paper suggests it was written to persuade accreditation organizations and perhaps even legislators that higher education has answers to meet their needs.

Bardo, J. W. (2009). The impact of the changing climate for accreditation on the individual college or university: Five trends and their implications. New Directions for Higher Education, 145(Spring). Not available online. {Journal Article}

This article provides a good description of the changes occurring in higher education. Colleges and universities are now under constant scrutiny (p. 47). The author lists some key areas of change, including:
The changing accreditation climate: he feels that government will continue to ratchet up regulation of higher education and demand increases in accountability for student learning and cost containment (p. 49).
The need to focus on assessment across all areas of the curriculum: he cites how accreditation standards are shifting from inputs to outputs.
Organizational approaches to continuing accreditation: he discusses annual cycles of assessment and the need for meaningful measures, and brings up the considerable concerns over using standardized tests.

A big question in the author's mind (which is consistent with other authors' comments; see Ewell, 2008) is whether accreditation standards and assessment types will be driven by faculty, as they have been in the past, or by government regulation. He cites the 2008 federal legislation giving these decisions to the accreditation organizations instead of the government as a positive indicator but warns that pressure continues to build. An important message that is more germane to this bibliography is how higher education will begin to measure the sorts of things that multiple-choice knowledge exams can't. He warns that universities will need to figure out how to operationalize these paramount (p. 56) outcomes and link them to curriculum.

The article is written for college and university administrators. The author presents accreditation-based changes that he sees from a Chancellor's point of view. However, it is quite readable and presents information that any faculty member involved in assessment might find useful.

Bers, T. H. (2008). The role of institutional assessment in assessing student learning outcomes. New Directions for Higher Education, 141(Spring). Not available online. {Journal Article}

The author argues for having the Institutional Assessment (AKA institutional research) department take the lead on assessing student learning in ways that support institution-level information. This is a complete reversal compared to the beliefs expressed by a majority of authors in this bibliography (see Ewell, 2009, for an example). Bers suggests that classroom embedded assessment should not be used as substitutes for more broadly based learning outcomes assessments (p. 33). Conversely, Ewell and many others suggest that assessment must be embedded within courses to ensure that students do their best on them and to increase efficiency (over adding other assessments). Also in a reversal from nearly every other author I've reviewed, Bers argues for using course grades. However, as other authors note, course grades have already been shown to be insufficient for these purposes; for example, see Rogers (2003). In another reversal, Bers suggests that because interpreting findings of assessment is so difficult for faculty, IR can supply interpretations of results (p. 35) and IR should be brought to the table when it comes to measuring learning outcomes to overcome an overemphasis on faculty owning assessment, creating a monopolistic situation where no one but faculty are perceived as legitimate stakeholders in assessment (p. 36). The idea of taking the interpretation of learning out of the hands of faculty is alien to me, to other authors in this bibliography, to the principles of academic freedom, and to validity in measurement.

The audience for this article is unclear. It does serve to provide a very different view of assessment in higher education. I'd call this a lightning rod paper because the author's viewpoint is so alien to the academy. Her arguments and examples are thin and seem to be based more on a wish to enhance the visibility of institutional researchers than on facts. The inflammatory language was certainly another drawback.

Blaich, C., and Wise, K. (2011). From gathering to using assessment results: Lessons from the Wabash National Study. (Occasional Paper No. 8). Washington, D.C.: National Institute for Learning Outcomes Assessment. Retrieved June 27, 2011 from {Paper}

Blaich and Wise continue the national discourse in the literature lamenting how many institutions are collecting assessment data but how few are turning their data into useful information for making improvements. They mention that in some departments only one or two faculty ever see the assessment results (much less understand and act on them), while at other institutions the departments provide reports, but few (if any) ever read them. I generally agree with the authors. There is evidence that we are not always doing a good job of closing the loop with our data. However, the authors don't make the point that the sort of assessment they are discussing is at the institutional rather than the course level. We know that a good assessment plan contains data from the course level, but also looks across the program/department to see if the mission is being realized. The authors suggest that it is incredibly difficult to translate assessment evidence into improvements in student learning (p. 11). This may well be true of some institutional data. In addition, translating data into action requires collaborative discussion among the pertinent stakeholders as opposed to simply reporting the findings. Closing the loop and using evidence to promote improvements is as much a political and socio-cultural process as it is an evidence-driven process (p. 13). This is absolutely true as well. Finally, the authors argue that institutions need to put resources into fostering data use, not just planning, collecting, and reporting.

I generally agree with the authors on these points. Even here at my institution, where we are well advanced compared to many other institutions, there are still some challenges. For example, I'm aware that some programs/departments assign a single person to handle assessment. This is unfortunate in my mind because the real value of assessment is supposed to be helping our students, not simply filling out forms to get it done.

The authors offer some steps for maximizing the chances of using data:
1. Leaders should audit to determine what data are available. (This is consistent with many authors' lists for implementation; see Walvoord, 2004.)
2. Academic leaders should focus on no more than three (preferably one) initiatives at a time.
3. Be sure to give the planned change enough time to bear fruit, and then adjust as needed. I've experienced the opposite of this. I used to work for an institution where the president would wait no more than a few weeks for improvement. Little wonder that few changes bore any fruit at all.
4. Create and implement a communications plan. Think of this as a campaign, not a series of reports posted to a website (p. 14). This campaign deals with training and fostering collaborative discussions about the findings and planning for changes. Then, later on, you add the dimension of reporting on subsequent findings in a cyclical process. While the communications plan does include dissemination of the results, what is done with those results is far more important.

5. Provide resources for fostering collaborative use of the data.
6. Involve students.

The audience for this paper is any faculty or academic leader involved with assessment. However, the article is not as easy to read as some others listed within this bibliography.

Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A compilation of institutional good practices. Sterling, VA: Stylus. {Book}

The author provides a very detailed text that describes what she calls good practices from across the country. She uses a mix of lists, descriptions, and many solid examples to illustrate her points about how to know good practice when you see it. The approach she takes is more of a "what works and why" guide based on many actual examples to illustrate her points. This differs from the Maki (2010) text, which provides vignettes from various institutions, leaving the reader to glean what they can from the material. It also differs from the Walvoord (2004) text, which offers a simple step-by-step approach to doing assessment in only one way. A complete description of this very detailed book is beyond the scope of this review. However, there were some important issues and concepts that are worth noting.

In chapter two the author presents her reasoning for doing assessment. She makes a point that somehow, over the past several years, faculty have become disconnected from evaluating and improving student learning (p. 9). The goal for any new effort should focus on reengaging faculty with these important elements of higher education. The author argues that assessment should be organized around what she calls program review. The first step in any new assessment effort, she argues, is to build a solid foundation of reasoning, what she calls a conceptual framework (p. 19). In my experience, other professions may call this a mission statement, a set of principles, or even a philosophy. Then, she suggests that an institution agree on some ground rules or guidelines for its selected flavor of assessment. Like other authors, Bresciani calls for as small a set of guidelines as possible. She argues that an institution should come up with just enough guidelines to ensure success, but leave enough flexibility to account for the needs of departments and disciplines. I found that the author's approach is similar to my metarubric for good assessment practice at our institution. That is, a summary of the components that should be in place, what overall goals should be met, and what good assessment looks like. I agree with the author for two reasons. First, I know that rubrics are powerful tools for improving performance. Second, I know that good rubrics are great ways to communicate our conceptual framework (principles), our guidelines, and, for all those new to assessment, what good looks like.

The author's template for program review (see page 27) lists most of the elements of a good assessment plan and report that I've seen in other texts. What differs in this text is the depth of description and examples she offers for each item on her list. In chapter four the author turns her attention to listing what constitutes a good or effective program review system. As was the case with the template in chapter three, the headings in chapter four read much like lists found in other authors' works. What differs is the nearly exhaustive depth of description and illustration of the issues. For example, a key ingredient of good assessment is a clear understanding of the goals of the assessment. The author goes on for several pages to describe rich facets of this first criterion for an effective assessment program. BTW, I think the author was too brief in her collaboration section.
Faculty collaboration should be the foundation of assessment, and good collaboration has much to offer every phase of the assessment endeavor.

The author did a better job of explaining just how rich a support assessment information can be for educators. She listed many uses for the information, including:

At the Course and Program Levels
Course revision
Pedagogical changes (e.g., strategies, approaches, practices, etc.)
Resource changes/requests
Operational processes (e.g., identifying roadblocks, inefficiencies, barriers to success, etc. and systematically fixing them at a systemic level)
Policy development/change
Informing stakeholders

At the GE/CWSLO and Institutional Levels
Informing stated performance indicators (such as GE or other college-wide goals/outcomes)
Informing the budget and strategic planning process (for example, a summary of the program recommendations, or the nature of them, might reveal a pattern worth exploring; comments about lack of time to address all goals might trigger changes in the goals; results that reveal trends in strengths and weaknesses may trigger changes in funding, methods, or other areas). BTW, I suggest that when it comes to budget, we link each request for budget changes to measurable outcomes and then require reports on the outcomes, much like how third-party grants require evaluation data.
Extent to which the institution is meeting its overall goals.

BTW, the author also listed using this data for evaluating faculty for such things as tenure. I believe this was a mistake. Good measurement practice and precedent-setting legal cases suggest that data intended for one purpose should not be repurposed for another. Thus, I strongly recommend against using assessment data for this purpose. If an institution wants data for this purpose, it must set the rules, purposes, and safeguards up front. For example, assessment data are collected at the course level and reported up the line. If those data were then repurposed, legal requirements for due process might be broken.

Chapter five covers various steps that the author believes are crucial to the success of any program assessment system.

Build a solid and representative committee to guide the effort on an ongoing basis:
Respected faculty leaders
At least 80% faculty and only 20% administration & staff
Define the authority level (What can we decide versus recommend?)
Define the communications plan (Who will we tell what?)
Define the assessment coordinator's role in the committee
Set purpose and goals for the committee (just like any program assessment)

Define the purpose and expectations of the assessment system as a whole:
Why are we doing assessment? (Purpose, goals, vision, etc.)
What are the expectations on me?
What value will come from the work?
How will I (and my academic freedom) be protected?
How can it improve learning, success, my courses, my pedagogy, etc.?
How can it improve a program's overall success?

Provide answers to faculty questions about assessment up front:
Who, what, when, where, how, etc.

Determine what is already happening:
What is happening? By whom? How well?
(Paraphrased to save space. See text for full descriptions.)

I really liked chapter six, where the author discussed common barriers to building a successful assessment system. She argues that these barriers can hurt both genuineness and productivity (p. 119). I like those terms. They focus on what goes wrong, but turn the verbiage to the positive. I need to remember these terms. The author states that there are four major barriers that faculty mention in surveys about assessment:
Limited time to conduct assessment
Limited resources to put to assessment
Limited experience and expertise in assessment
A belief that the benefits of assessment are not worth the effort

The audience for this book is faculty members and administrators who are charged with developing assessment at their institutions. The book is strong on covering many (perhaps even most) of the potential issues one might encounter and on what good assessment looks like. However, a weakness, due mainly to the author's attempt at being complete, is the overwhelming volume of material included. Were a new assessment coordinator to try to implement even half of the elements presented in this text, the effort would likely collapse under its own weight. I much prefer the Banta & Others (2009) text for a lighter and more general approach to assessment, and I prefer the Walvoord (2004) text for an even simpler approach to getting started with assessment. But if the reader is looking for a more exhaustive exploration of what assessment is, how it is being done successfully at other institutions, and what factors made the difference, then this is a solid reference.

Council for Higher Education Accreditation. (2006). Accreditation and accountability: A CHEA special report. Author. Retrieved August 19, 2011 from {Report}

This is a very short document that provides a distillation of CHEA's work (p. i). The document summarizes the CHEA position on several issues related to how assessment ought to be done. I will simply list them without comment. I included this document simply to show that many different organizations have a voice in the assessment movement. In this case, CHEA is a voice for voluntary accreditation and quality assurance to the U.S. Congress and U.S. Department of Education (About CHEA, Webpage). The following outlines the CHEA positions.

A. Audiences: To Whom Is Additional Attention to Student Learning Outcomes Important?
Key external constituents (government, students, public) want information about student learning outcomes.
Accrediting organizations need information about student learning outcomes.
Institutions, programs, and faculty need information about student learning outcomes.

B. Key Questions: What Needs to Be Asked and Answered When Addressing Student Learning Outcomes?
What is a student learning outcome?
What counts as evidence of student learning?
At what level (or for what unit of analysis) should evidence of student learning outcomes be sought?
To what extent should particular student learning outcomes be specified by accreditors?
What models are available to accreditors when choosing an approach?
What issues should be anticipated?

C. What Can Accrediting Organizations Do to Establish Effective Policy and Practice to Address Student Learning Outcomes?
Accrediting organizations need to establish standards and review processes that visibly and clearly expect accredited institutions and programs to gather and disseminate information.
Institutions and programs need to establish clear statements of student learning outcomes and of their approach to collecting, interpreting, and using evidence of student achievement.
Accrediting organizations need to use evidence of student learning outcomes in making judgments about academic quality and accredited status.
Institutions and programs share responsibility with accrediting organizations to provide clear and credible information to constituents about what students learn.

The audience for this document is clearly policy makers and shapers in Washington. Recall that the Spellings Commission released their report in 2006, so this document was likely published as either a preemptive strike or a response. For a similar reaction and position statement, take a look at the principles that the AAC&U published about the same time (the current document was revised in 2008).

Council of Regional Accrediting Commissions. (2004). Regional accreditation and student learning: A guide for institutions and evaluators. Washington, D.C.: Author. Retrieved July 19, 2011 from {Book}

Note: The council is made up of the seven regional accreditation organizations.

This guide is billed as a companion piece to Regional Accreditation and Student Learning: Principles for Good Practices (p. 5). The guide is designed to provide institutions with a framework for following the Council's desire for student learning to become the central focus for measuring institutional quality. The council presents five principles, including:
1. The role of student learning in accreditation. The institution defines educational quality, one of its core purposes, by how well it fulfills its declared mission on student learning.
2. Documentation of student learning. The institution demonstrates that student learning is appropriate for the certificate or degree awarded and is consistent with the institution's own standards of academic performance.
3. Compilation of evidence. The institution derives evidence of student learning from multiple sources, such as courses, curricula, and co-curricular programming, and includes effects of both intentional and unintentional learning experiences. Evidence collected from these sources is complementary and demonstrates the impact of the institution as a whole on the student.
4. Stakeholder involvement. The collection, interpretation, and use of student learning evidence is a collective endeavor, and is not viewed as the sole responsibility of a single office or position. Those in the institution with a stake in decisions of educational quality participate in the process.
5. Capacity building. The institution uses broad participation in reflecting about student learning outcomes as a means of building a commitment to educational improvement. (pp. 7-11)

After the presentation of principles, the guide provides a bibliography of recommended sources for information on several topics of interest to anyone involved with assessment of learning or accreditation.

The audience for this guide is any faculty member involved with assessment or accreditation. However, the tone suggests that the audience may be academic leaders and policy makers.

Duffy, D. K., Duffy, J. J., and Jones, J. W. (1997). Tuning up your class: Using assessment for optimal student performance. Journal of Excellence in College Teaching, 8(2). Not available online. {Journal Article}

I ordered this document expecting to see ideas for assessment. However, the document is about what the teacher should do (e.g., strategies, methods, etc.). I'm convinced that such things as the adoption of constructivist learning theory and significant pressure from various stakeholders are pushing for evidence of outcomes instead of descriptions of inputs (such as faculty credentials, lists of courses, etc.). In this article the authors provide a strong example of the older inputs perspective. While somewhat useful to a novice teacher, they didn't discuss assessment per se. Instead, they present a series of teaching strategies that they feel will help students succeed.

The audience is just about any faculty member.

Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of departure. (CHEA Occasional Paper). Washington, D.C.: Council for Higher Education Accreditation. Retrieved June 4, 2011 from {Paper}

My personal comments about the author: Peter Ewell was one of the early (and still one of the best) authors to discuss the relationship between accreditation and outcomes assessment. I have known Peter since about 2000 and I'm aware of his significant work in many aspects of higher education. Peter is the VP at NCHEMS (The National Center for Higher Education Management Systems). Peter's work is oft cited in the assessment and accreditation literature, and Peter is constantly speaking to one professional group after another.

This paper is a relatively early call for three major changes in the accreditation and assessment landscape. First, he argues that assessment of outcomes is necessary and even inevitable. Therefore, he calls for a definition of how accreditation organizations will deal with outcomes and assessment, what he calls a collective approach to the matter of evidence of student learning (p. 24). Second, he calls for some sort of language embedded in an understandable conceptual framework that outlines what each organization believes and will address. Third, and I think this might be a bit of a stretch, commonly shared resources across accreditation organizations. I say this may be going too far because I was involved in the one and only IRAC accreditation effort. In that situation the four western regions combined to try to accredit Western Governors University. The regions just managed to pull it off. But they later agreed that the attempt was too difficult and that WGU would be accredited by a single region from that point on.

I included this paper in my literature review to illustrate how long it has taken to move from accreditation based on inputs (faculty credentials, books in the library, etc.) to educational outcomes (student learning). The shift toward assessment has been happening for quite some time. Further, it is coming in small steps. For example, Ewell explains that all regional accreditation organizations had (by 2001) added specific language about assessment into their standards, but there was little to no definition of what assessment really means. Ewell offers perhaps one of the first calls for action to avoid having external stakeholders dictate what assessment consists of and how it should be accomplished. He bluntly cautions that not addressing these conditions is no longer an option (p. 4). He offers a clear definition of the old accreditation approach of looking at inputs to education versus the outputs. Then he makes the case that learning is the key outcome of interest. Then he defines assessment as gathering evidence of learning using direct evidence that goes beyond surveys (p. 7). Most of the balance of the article deals with how assessment should be built from the bottom up as opposed to being prescribed by outside stakeholders.

The audience for this article seems to be anyone involved in higher education. However, the title and overall tone suggest this article is really for those deeply involved with accreditation and also for those external stakeholders he mentions. The real value of the article is that it places the ideas into the context of time and stakeholder interests.

Ewell, P. T. (2008). U.S. accreditation and the future of quality assurance: A tenth anniversary report from the Council for Higher Education Accreditation. Washington, D.C.: The Council for Higher Education Accreditation. This book can be ordered from CHEA. {Book}

See my introduction to the first Ewell document (above) for my comments about the author. This is a very good book that shows significant understanding and careful scholarship as the author presents a description of the past, present, and future of accreditation in America. Chapter one outlines the current state of assessment and important issues for discussion. In what seems almost like an aside, chapter one also includes a brief description of how assessment works. I found this very helpful. I skimmed through chapter two until I reached the subsection entitled Entering the Age of Accountability (p. 42). I found that Ewell's discussion about the Spellings Commission report (2006) helped me put things in context, and I learned why the dust has not yet settled. Although the Spellings Commission called for many things, and the long overdue reauthorization of the HEA (p. 48) put some things on hold, we continue to see movement toward assessment driven by accreditation.

Much of the rest of the book describes the nature of the changes we are currently (or will soon be) experiencing in higher education. Several points were worth noting. First, assessment is not going away. If anything, the backing for this movement is increasing. Second, assessment needs to be focused on providing meaningful evidence of learning. Simply jumping through meaningless hoops is neither productive for higher education nor acceptable to stakeholders. Third, while the methods of assessment can differ across institutions, there needs to be rigor in the measurement. Fourth, we must find ways to deal with a diverse group of students given the shifts in student populations. Finally, given that accreditation is done by faculty for faculty (and their students' success), there is a lack of understanding of what constitutes good measurement throughout the academy. More specifically, because accreditation teams are made up of the same faculty who are not experienced with assessment, accreditation will continue to ebb and flow with the quality of the accreditation committee membership. The translation for me is that in the near term, accreditation will continue to look at process and adherence to completion of assessment. But as faculty become more acquainted with good measurement, we will see requirements for rigor as well as process.

Ewell outlines the current challenges and historical context, and suggests next steps. Two things stand out to me. First, accreditation is in a state of tension with the U.S. Government. Ewell contends that accreditation and higher education must be proactive in developing assessment to avoid a top-down government intervention in the requirements (and complete model) for assessment in America. He makes the case that assessment has been, and continues to be, a bottom-up look at ourselves endeavor. But he also makes a good case for a need to make significant changes in the model to include assessment evidence.

The audience is policy leaders at institutions, higher education associations, accrediting organizations, and government agencies (see the NILOA website).

However, I found this book quite readable and useful to anyone interested in better understanding accreditation and assessment.

Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension. (Occasional Paper #1). Washington, D.C.: National Institute for Learning Outcomes Assessment. Retrieved June 23, 2011 from {Paper}

See my introduction to the first Ewell document (above) for my comments about the author. Dr. Ewell revisits comments he first made back in 1987 regarding the shift toward assessment and accountability. His contention in 1987 was that adopting an assessment-for-accountability versus assessment-for-improvement stance could greatly change the nature and success of an assessment effort (p. 5). Ewell provides a clear and simple description of the issues. When faculty members and institutions are focused on assessment as a tool for accountability to our stakeholders, the incentive is to look as good as possible. On the other hand, when faculty members focus their assessment efforts on improving learning, the incentive is to help students. He argues that assessment for accountability is summative, externally motivated, aimed at compliance, and often requires standardization to allow for comparisons among institutions. I agree with most of his points. I also realize, however, that Ewell's comments are dated because of multiple factors. Such things as state initiatives for standardized testing have already failed. Thus, additional calls by stakeholders for standardized testing can be met with strong evidence against those inappropriate practices. Also, calls by the Spellings Commission (see previous bibliographic entry) went nowhere. So at this point, the driver for assessment is accreditation rather than coming directly from Washington. At the same time, accreditation organizations that have always valued institutional improvement are now firming up their standards a bit, and are clearly pressing for assessment designed for improvement.

Dr. Ewell calls for additional action to avoid having the pendulum swing back toward standardized testing and assessment aimed only at accountability. He argues for four interrelated practices that should help.
1. Institutions should make a commitment to collective responsibility. Instead of seeing assessment as an aspect of higher education's responsibility to its funders, legitimate though this may be, both faculty and academic leaders need to see it as a part of our accountability to ourselves (p. 15).
2. Respond visibly to domains of legitimate external concern (p. 15). In this section the author calls for training and other support to help faculty use data for decision making. He calls for institutions to measure (and report) such things as communications and critical thinking. He argues successfully that faculty members are not experienced with this. Further, he points out that the problem begins back with the design of the assessment, not after the data are in. Thus, training must include how to develop assessments that can yield useful data. He strongly urges us to find ways to create concrete opportunities for faculty and staff to look at data (p. 16). Finally, he calls for a commitment to publishing the common outcomes for all stakeholders.

3. Emphasize assessment at the major transition points in a college career (p. 17). In this section the author discusses the need for pre-post testing of some sort, good benchmarking, and careful attention to when and how to assess students. He also calls for solid testing methods that generate information on multiple dimensions of student performance (p. 18).
4. Embed assessment in the regular curriculum (p. 19). I have always considered that good assessment would have to flow from coursework. Dr. Ewell argues, as I do, that certain efficiencies can be had and that students will attend to the assessment more fully if it is a part of the course and grade. In addition, and close to my heart, the author calls for assessments that are more authentic, designed to elicit the correct sorts of submissions, and scored using reliable means. I know from other work with Dr. Ewell that we both agree a key tool for this is rubrics.

In summary, the author calls for faculty and institutions to utilize assessment for learning and improvement as the foundation for their assessment efforts while, at the same time, making the work public to help address external accountability requirements.

The audience for this paper is any academic leader or faculty member interested in assessment within the higher education arena. Dr. Ewell is perhaps the most widely known and knowledgeable author in this area and does not disappoint with this paper. I found it extremely useful.

Ewell, P., Paulson, K., and Kinzie, J. (2011). Down and in: Assessment practices at the program level. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). Retrieved June 27, 2011. {Report}

The authors begin by restating recent findings that a majority of institutions have an ongoing assessment data-gathering process, but few are using the evidence to improve student learning. They also cite the Banta and Associates (1993) text that suggests what faculty already know: the most actionable data they have for helping improve learning come from the classroom. The article is another call for creating course-embedded assessment that yields actionable data. Further, it calls on academic leaders to foster collaborative activity among faculty within each department to help ensure the data are used. The audience for this paper is any academic leader or faculty member interested in assessment within the higher education arena.

Fulks, J., Caballero de Cordero, A., Alancraig, M., Bearden, K., Lee, S., Pacheco, R., Tomson, J., and Williams, G. (2007). Agents of change: Examining the role of student learning outcomes and assessment coordinators in California community colleges. The Accreditation of Student Learning Outcomes Committee of the Academic Senate for California Community Colleges, Sacramento, CA. Retrieved May 26, 2011. {Report}

The authors outline a new type of administrator in higher education for the state of California, known as the college-wide coordinator for student learning outcomes. This was not news to me; I've been doing this sort of work for the past 11 years. While most of the document is not terribly germane to this bibliography, the authors bring out some points worth noting. The authors clearly describe the overall shift in state and national standards from showing evidence of inputs (e.g., faculty credentials, number of books in the library, etc.) to outputs, in this case student learning outcomes. They argue that assessment must be a faculty-driven task that focuses on what a student can do upon graduation. I found it interesting that the language has shifted. In the past, we heard about what students should know to be considered learned. As faculty, we often discussed content areas, topics or subjects, and bodies of knowledge. Now (and none too soon in my mind) the focus is on what the student can do, which is related to performance assessment. The authors suggest that externally produced knowledge-based exams such as the SAT or GRE are "amorphous measures of some aspect of learning, but by no means provide data to improve learning or teaching" (p. 7). I generally agree with them on this point. They argue instead for embedded assessment of what students can do with what they have learned, thus avoiding the tendency to "reduce, mechanize, and limit education" (p. 8). The authors argue that good assessment that meets accreditation requirements must look at larger, more integrated learning outcomes, and they restate that this means looking at what a student can do. They also touch on the need for multiple assessment modalities. In addition, the authors bring up the need for formative assessment that helps students learn. The authors explicitly call for a change in how assessment is conceptualized and performed in California so that their colleges will meet the demands of the accreditation organizations. The audience for this article is unclear. It seems to be aimed at legislators and academic leaders because it discusses the necessary funding for the SLO coordinators. But I still included it in this bibliography because it presented a statewide position on several germane topics.

Georgia State University. (2011). Georgia State University rubrics for program assessment. Author. Retrieved August 19, 2011. {Online Document}

This website contains rubrics for judging the quality of outcomes, assessment plans, and for improving programs through use of assessment data. The rubric scoring ranges across four levels of implementation:
Evidence of planning
Evidence of initial implementation/revisions
Evidence of completed implementation/revisions
Evidence of exemplary full implementation cycle
The audience for these online rubrics is assessment coordinators and perhaps faculty members who want to self-assess their efforts.

Gibbs, G., and Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching, 1(1). Retrieved May 27, 2011. {Journal Article}

I included this article because the authors do an excellent job of outlining the characteristics of assessments that encourage learning. This topic has been raised in other articles (see the previous entry as an example). They highlight the differences between standardized end-of-course exams and what they call assignments. They provide many citations in support of their arguments. The entire document reads like an argument for using problem-based or case-based learning and authentic assessment practices. This article is one of the better descriptions of what constitutes a good performance assessment that I've reviewed. Although rubrics are not mentioned specifically for these roles, the authors seem to argue for components of a good educational system that are often driven by well-crafted rubrics. For example, they mention the need to handle economies of scale but still provide feedback to students that speaks to the central criteria for the assignment. That sounds much like the rubric development research literature I've reviewed, which shows how rubrics can fulfill the formative assessment role and can reduce the workload for our faculty. The audience was unclear; however, the article should be useful to any faculty member. I had to do a little digging to understand the authors' motivation for the article. It was written in the early years of the Quality Assurance Agency, a non-profit group that performs external audits of universities. While not an accreditation organization under UK law, they perform tasks that sound much like those performed by regional accreditation organizations in the US. If faculty members want to understand why they should want to use their own measures instead of standardized exams, this article will certainly help.

Gilbert, G. (2010). Making faculty count in higher education assessment. Academe, 96(5). Retrieved August 12, 2011. {Journal Article}

The author offers a series of recommendations to other faculty members in an effort to overcome what he sees as two possible outcomes of the assessment movement: a "marketplace ideology" and the challenge of "encroaching federalism" (p. 25). His suggestions include ideas stated in other articles. The author's main point is that faculty involvement in the accreditation process is needed. Moreover, he argues that faculty should step up and take leadership roles in the process. These points are generally consistent with recommendations from other authors. Further, I tend to agree with them. However, what doesn't work for me in this article is the overwhelming tone that almost drowns the message. The author presses for faculty control for the sake of political control rather than as a precursor to finding useful information to help students. He argues in similar fashion for academic freedom, union involvement, taking control, and fighting "the federal government's determination to control regional accreditors" (p. 27). Here are some examples of his principles and his tone/agenda.
Develop clear outcomes, which will meet the external requirements but can unite the faculty and establish "evidence-based arguments for fiscal priorities" (p. 26).
Conduct meaningful program reviews, which can lead to better courses and programs.
Participate, because working on accreditation teams can benefit "one's own professional standing" (p. 26).
Clearly establish both the players and the process. Here the author calls for faculty control and a faculty governing body to avoid giving power to the administration. In a nod to faculty unions, he also calls for faculty to establish local bargaining criteria in the process. He closes this section with a desire for the faculty voice to be "the bigger dog" (p. 26).
Protect anonymity. The author calls for faculty to protect faculty and students by reporting data instead of names. Although I agree with the concept, the author was focused on academic freedom and tenure instead of protecting people from inappropriate release of names.
Defend sensible data usage. Again, the principle itself is fine. But in his argument his advice is to have faculty "discredit any effort to compare institutions based on simplistic aggregates" (p. 27).
Preserve collegial consultation. While I agree with this concept as well, the author again bangs the same drum by discussing his views on tenure and the eroding number of tenured faculty. Further, he returns to his fears of federal activity by suggesting that "the single best defense against creeping federalization is comprehensive, evidence-based faculty consultation throughout our institutions" (p. 27).
Work with faculty members. Yet again the author lists a solid principle but turns the discussion to faculty governance, union involvement in the process, and the like. In fairness, he does call for these groups to train faculty so that they can contribute to the accreditation process.

The audience for this opinion piece is clearly other tenured faculty. I included it as an example of the sorts of agendas that are present in the assessment movement. I admit I was hard on this author because of my biases. I strongly dislike turning something positive, like helping students learn, into a stump for political agendas.

Gill, W. E. (2006). Outcomes assessment in the accreditation process. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA, April 2006. Retrieved May 26, 2011. {Conference Article}

Since about 1990 (depending on the region), accreditation organizations have continued to place more and more emphasis on outcomes assessment. The author set out to research what the literature had to say about the impact of outcomes assessment, how well the regions agreed on the assessment front, the sorts of resources available, and what information might be missing. She suggests what my own literature review (described in this document) shows: that colleges and universities are all over the map, ranging from denying the importance of outcomes assessment to full-fledged implementation. The author suggests that there is either agreement about the need for assessment, or at least a grudging acceptance of outcomes assessment, at most institutions. She uses the same term I do: that assessment is evolving across the academy. What she misses, however, is that assessment tends to evolve even within an institution. Gill feels that most of the literature agrees that assessment has had a major influence on general education. She again points to the fact that the regional accreditation organizations have all made assessment of student learning central to their processes and standards. But, perhaps because of the very slow evolutionary progression, regional accreditation organizations have become far more prescriptive than before because institutions are not moving quickly enough. I suspect that the pressures from state and federal governments are driving forces here. That is, accreditation needs to step up to the plate soon to avoid having assessment dictated from Washington. The audience for this article seems to be higher education administrators. This is an adequate but not great article. It seems to summarize a few selected pieces of literature but does not try to review a broad sampling. What I found most valuable was the author's appendix that outlined available resources for those getting into assessment.

Gillmore, G. M. (2004). The evaluation of general education: Lessons from the USA State of Washington experience. (Report 04-03). Washington D.C.: Office of Educational Assessment. Retrieved June 29, 2011. {Report}

This is an excellent paper that documents how the assessment movement has progressed by highlighting real-world activities over several years. The author makes many good points and offers some important insights. The paper is not without problems, however. The author begins by creating an overstated dichotomy to make his case. He argues that assessment and accountability are near polar opposites: accountability is strictly governed by external non-faculty using standardized testing, while assessment is governed by smaller, heterogeneous, internal measures that are all faculty developed. My experience has been that neither position is common, perhaps not even tenable. Those who are involved with accountability (e.g., the accreditation organizations) value (and some demand in their standards) faculty input at a minimum and prefer faculty leadership of the process. Accountability also values faculty-developed outcomes and measures. Conversely, assessment-for-learning advocates often value external standards for guidance, and internally created measures can sometimes be aggregated to serve as accountability measures. I have not encountered an institution or situation that resembled the author's extreme straw man. For a different view of the differences between accountability and assessment, see Ewell (2009) earlier in this bibliography.

What lies underneath the author's polar representations has some merit. For example, it can be true that accountability measures can be reduced to numbers that have little meaning for learning and instruction if the participants are not aware of the challenges. Although the author calls this something like consequential validity, his arguments are more akin to Messick's unified validity approach because he also lists such things as the relationship between test scores and other measures (AKA criterion validity), measures that were unrelated to the target skills (related to construct validity), and measures that were judged to be too narrow (AKA content validity). From the assessment perspective it is certainly possible for faculty to create measures that have little value to external agencies. To the author's credit, he does admit that the two models "need not be antithetical" (p. 4). Also, I agree with him that we can see the conflict between accountability's need for simple, comparable measures and assessment's need for complex, heterogeneous measures (p. 4). But again, I feel that a compromise is possible and can be struck by thoughtful faculty. The author describes some underlying assumptions in the two models that echo Ewell's (2009) descriptions. The underlying (unstated) motivation of accountability is to look good, while the underlying (and also unstated) motivation for assessment is to "shine lights into the dark corners" (p. 4). Thus, the underlying purposes are sometimes in conflict.

The author then turns his attention to general education. He argues that there are two camps: one that subscribes to the body-of-knowledge (facts) approach and the other that subscribes to the skills approach (e.g., critical thinking, writing, etc.). He argues that the former is inappropriate because deciding which set of facts has merit and ought to be tested is a losing game. Further, he cites studies that suggest the skills learned in college have greater impact on later life than the facts learned. I agree wholeheartedly with the author's position on this issue. I have argued exactly this point with many faculty members over the past 12 years who simply would not look at anything past their favorite lists of facts and figures that were important to their discipline.

Two of the author's points were very useful to me. First, he shared a concrete example where the state of Washington built some measures without involving faculty, thus leaving learning off the table. He believed that the state of Washington created a series of measures for accountability that had nothing to do with learning and may have been "inimical to learning" (p. 9). {Inimical: tends to obstruct or harm. Yes, I had to look it up.} Second, he showed that accountability could be based on learning assessment data if the focus turned from the scores themselves to what the institution did with those scores: "The number of changes made in the curriculum as a result of assessment studies" (p. 8). That sounds great until he further explains the unintended consequences. While promising, just reporting how many changes a given institution made could yield misleading results. An institution with a high number of changes could actually be a college that was so bad that it had to make many more changes than its neighboring institutions. The idea that what you do with the data is the most important part of assessment is what I teach to all who will listen. I regularly tell faculty at my institution that they should focus more on what the scores mean and what they will do next year. Yes, it is OK to get anxious if you see scores down in the 40-50% range on exams, but scores simply can't be compared to other scores as a measure of quality. A course where 95% of students pass the exam versus one where only 75% pass could reflect a much more difficult subject area, a more difficult exam, or many other factors. Thus, the scores themselves are not good comparison points.

Some asides I found interesting:
The author mentions the Oxford model that differentiates the faculty role, where one group of faculty instructs students and another scores their exams. I have used this model here in the US.
The author blasts the use of survey data (indirect measures of learning) in lieu of direct measures of learning. This is a pet peeve of mine, so it was heartening to see the author present this issue. Conversely, however, surveys can yield very important indirect data.
The author points out that rubrics are a key tool for assessment of skills. "Any activity that gets a heterogeneous group of faculty into a room together to read and discuss student work deserves support" (p. 14). With this I agree completely.

The author's summary (found, strangely enough, on the first page) contains six recommendations:
1. Measure skills (critical thinking, writing, etc.) instead of facts.
2. Design the assessment program to cherish the differences across programs, departments, schools, and even institutions.
3. Do not try to create a single measure that will apply to every subgroup uniformly. Those won't work.
4. Pay as much attention to how the results will be used as you do to what to measure and how.
5. Faculty must be intimately involved.
6. Accountability requires that an institution develop "a culture of assessment and reflection" (p. 1).

The audience for this article seems to be anyone interested in assessment in higher education. I found it very readable, and it might be a good starting point for novices.

Rubric Examples: Indicators of Effective Writing

Gold, L., Rhoades, G., Smith, M., and Kuh, G. (2011). What faculty unions say about student learning outcomes assessment. (NILOA Occasional Paper No. 9). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved June 1, 2011. {Report}

I should admit my biases up front. I don't like most unions and I don't like politicians. There, I said it. I find their political posturing frustrating, even infuriating at times. Unions certainly had a place in the past when workers were exploited. They may have a place today in extreme situations that I am not aware of. For example, I cheer any organization that deals with legitimate safety and security issues. But most of what I hear about unions has to do with collective bargaining and other issues. The political nature of unions is evident in their posturing with the media, which I detest. With my biases up front, and with my admitting that a pro-union reader may get a very different take on this document, I offer my summary.

This document summarizes responses from three faculty unions involved with higher education: the American Federation of Teachers, the American Association of University Professors, and the National Education Association. Representatives from each union responded in writing to questions the authors posed about their formal positions on assessment, their advice for dealing with "the tensions between assessment for improvement and assessment for accountability" (p. 5), successful approaches to assessment, and other key issues. Much of the document covers issues of academic freedom, shared governance, faculty leadership, and the supremacy of faculty professional opinion in issues of learning. Of the three organizations, the AAUP provided more details about the assessment endeavor and offered more reasonable positions, such as the need for faculty to work together to form meaningful standards even across courses (see page 7 for this discussion). Note that I found significant documentation from the AAUP reviewed earlier in this bibliography. The AFT, on the other hand, said that they are generally behind student success but, instead of offering details, said that they could not because there is no general agreement on what student success actually means (p. 8). The NEA position clearly shows their history of fighting the No Child Left Behind assessment efforts in K-12 education. The AFT brought up the idea that their members are "suspicious of cross-institutional assessment systems imposed from the outside" (p. 10). On this point I do agree. Those sorts of measures rarely fit local needs and certainly don't focus on student success. The NEA stated that while they are for accountability, they want it to be "taking responsibility for one's own professional activity" (p. 10). I'm sorry, but I found this statement gutted the whole premise of assessment, which places the onus on all faculty for meeting their professional responsibilities. Perhaps the NEA is reacting to the continuous fights between its membership and authorities, and therefore is gun-shy of anything that sounds like agreeing with the responsibilities inherent in the principles of academic freedom. Not surprisingly, the NEA clearly stated that the best approach for any institution on any issue is to talk with the union and offer to work to develop the issue (p. 12).

This document reads like an open letter to the world, so I suppose the intended audience is anyone interested in assessment issues in higher education. This document did not change my opinion about unions. While two of the three offered a thin veneer of acceptance on matters related to assessment, they aimed most of their discussion at issues of collective bargaining, getting the unions involved in all affairs of the institution, and the like. I could find very little in the way of specific positions or information about what they thought of assessment. There were some points of agreement, however. As the AAUP stated, and I agree, faculty should be heavily involved in any assessment effort. But rather than focusing on control, my reasoning is that for assessment to be fair, aligned to local needs, and helpful in improving student learning, it is the faculty who must conceptualize and carry out assessment in the classroom. I believe that any classroom-based assessment ought to provide formative feedback to students and then be used for course assessment. I believe that good teachers look objectively into issues related to their courses, reflect, and then make changes to make things better. Next, I strongly agree with the concept of not using data that were intended to help students as a baseball bat for beating up programs or, worse, individual faculty. I have seen this happen with K-12 under NCLB. I have read the legal cases that confirm this sort of practice is inappropriate. An example: Even this past school year, my son was given a number of diagnostic achievement tests here in Utah. But neither he nor I saw the results. I don't think his teachers saw them either. Instead, the only use for this testing seems to be for reporting to the federal government. Then, if the news folks have it right, it seems that the results are used to beat on principals and superintendents. (OK, so I'm biased here as well.) But the point remains: do not use assessment data for judging the relative merit of courses or faculty. The data do not support this type of use.

Haertel, E. H. (1999). Performance assessment and education reform. Phi Delta Kappan, 80(9). Not available online. {Journal Article}

Haertel brings up several useful concepts and arguments about using assessment as a vehicle for educational reform. He correctly points out that using performance testing as the driving force for reform will certainly fail because it is too narrow a concept to handle all the issues of proper reform. He shows how using any form of assessment as the only tool for reform distorts the goals of education from improvement of student learning to simply passing the test. That is, any test tends to narrow the curriculum. Therefore, while the movement from multiple-choice to performance testing may be good for many reasons, it is not sufficient for driving reform. The audience for this article is clearly policy makers, but the points made certainly fit this bibliography. This is yet another argument against using assessment that was meant for one purpose for something altogether different.

Hatfield, S. (2011). Assessing your program-level assessment plan. (IDEA Paper No. 45). Manhattan, KS: The IDEA Center. Retrieved August 22, 2011. {Online Article}

This is a very good checklist for anyone involved in building an assessment system at their institution. The author presents and describes a list of fourteen key questions for guiding the process of reviewing an assessment plan (p. 1).
1. Why are we doing assessment? (Look at the focus of the plan to determine this.)
2. What kind of plan are we writing? (Program effectiveness or student learning outcomes.)
3. Who is responsible for assessment in our program? (An individual or a group of faculty members.)
4. Is administration supportive of assessment? (If not, this is going to fail.)
5. Is there a common language for talking about assessment? (A glossary is sorely needed. By the way, we do have a published glossary at my institution.)
6. Have we identified program-level student learning outcomes? (A small number of high-level outcomes.)
   a. What is a reasonable number of outcomes?
   b. Are the outcomes tied to the college mission/goals?
   c. Are the outcomes written in an appropriate format? ("Students will be able to <<action verb>> <<something>>.")
   d. Do faculty agree on the definition of the outcome? (e.g., what does critical thinking mean?)
   e. Are the outcomes supported by core courses? (Curriculum map)
7. Does the plan rely on direct measures of student learning? (It better!)
8. Are the assessment methods appropriate to the outcomes?
9. Is there a systematic approach to implementing the plan? (Annual versus periodic.)
10. What is the method for collecting and organizing data?
11. How are faculty trained to use assessment tools?
12. Do the assessment tools distinguish among levels of achievement?
13. What happens to the data after they have been collected?
14. Have we used the data to improve learning and teaching?
The audience for this article is anyone involved with developing an assessment system at the program or college level.

Hutchings, P. (2010, April). Opening doors to faculty involvement in assessment. (NILOA Occasional Paper No. 4). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved June 24, 2011. {Paper}

The author begins by laying out the problem she wants to address: the majority of faculty at her institution are not engaging in assessment activities, and many who do engage are simply going through the motions to address external requirements. Both of these trends are disappointing because faculty who are involved in meaningful inquiry and reflection on learning and teaching tend to improve their performance and their students' success. Moreover, the best teachers all have ongoing improvement strategies within their professional quivers. She argues that student learning is strengthened when faculty discuss purposes, goals, outcomes, data, interpretations, and plans for improvement. These collaborative activities also improve the coherence of department offerings (see p. 7). But the author points out that there are "significant obstacles" (p. 8) to faculty involvement:
Assessment has been seen as an externally driven management or even bureaucratic activity developed by "actors outside academe" (p. 8) rather than as an integral part of teaching and learning where the locus of control is with the faculty themselves.
The language of assessment (objectives, benchmarks, measures, etc.) is not friendly to the faculty experience.
Faculty are not trained in assessment. Most graduate programs focus on the discipline rather than on teaching, learning, assessment, and the like. Faculty tend to bow out [of technical discussions regarding assessment] because they don't want to be seen as "amateurs and dilettantes" (p. 9).
Faculty are often not rewarded for assessment activities, although the author admits this is a larger problem in that even teaching is not seen as a valuable activity when it comes to tenure and other career-enhancing rewards.
Faculty are not yet convinced that assessment is worthwhile.
Later in the document the author also mentions funding cutbacks (p. 12).

The author then presents her suggestions for improving faculty engagement in assessment:
"Build assessment around the regular ongoing work of teaching and learning" (p. 13).
"Make a place for assessment in faculty development" (p. 14). This has to do with bringing faculty together to encourage conversations about the data, interpretations, making meaning, and making plans (p. 14).
"Build assessment into the preparation of graduate students" (p. 14).
"Reframe the work of assessment as scholarship" (p. 15). In this recommendation the author calls for incentives for faculty, such as stipends or grants, and also for a more systemic placement of assessment work as equal in value with teaching and other things that count for advancement. Moreover, she calls for going one more step beyond good campus citizenship to making assessment a part of the culture, an important intellectual exercise or an act of scholarship (p. 15).
"Create campus spaces and occasions for constructive assessment conversation and action" (p. 15).
"Involve students in assessment" (p. 16). The author suggests that this may be the most powerful method for getting faculty engaged in assessment. It is tied in with the metacognitive and intentional learning goals we all have for our students. It is possible that this avenue could be quite useful.

The author closes by pointing out that faculty will never all be on the same sheet of music. Some will perform action research for their own needs. Others will stick to the requirements only. Some may foster deep engagement at department meetings while others may work with their students to research various approaches to learning. The point is: we need to make room for all of these approaches. The audience for this paper is anyone interested in assessment within higher education.

Katz, S. N. (2010). Beyond crude measurement and consumerism. Academe Online, September-October. Retrieved August 15, 2011. {Journal Article}

The author presents his supportive point of view about assessment in higher education. This article is a near polar opposite of the Powell (2011) article. While Katz has a good grasp of what assessment is really about and takes a positive tack in describing what makes it work, Powell's article was full of misconceptions and griping. The author's first remarks suggest that the academy ought to be up to the task of "figuring out what it is that our students know by the end of four years at college that they did not know at the beginning" (p. 1). In his first section he discusses the perils of externally imposed assessment, suggesting it is the worst-case scenario (p. 2). I tend to agree. While the specter of mandated assessment exists, it is rarely the reality for any institution. The author argues that instead of fighting a battle we need not fight, we ought to support the most effective means for assessing our students. I like how this author is on the positive and proactive side of our situation, as compared to the fighting for control and gaining of authority we saw in the Gilbert (2010) article.

The author then turns his comments to the faculty role in assessment. He argues that debate over the issues of assessment in this arena has been "neither edifying nor helpful" (p. 3). He suggests that the bureaucrats bluster and the faculty dodge and weave (p. 3). He suggests that faculty members are primarily to blame for the slow progress in assessment but also outlines other factors, such as the difficulty of building reliable measures, the expenses involved, and the faculty's lack of agreement about the purpose of higher education. He then turns his attention to formative evaluation as a channel for good assessment. He correctly argues that it is hard to imagine a principled objection to careful evaluation of learning outcomes or to thoughtful suggestions for improvement in pedagogical strategies (p. 4). He cautions that legitimate objections will come if administrators demand that faculty use specific strategies or when they use learning assessment data as a basis for decisions about tenure, promotion, and compensation (p. 5). I agree with this assessment. Neither of these imposed methods is appropriate for a valid assessment system. The author then suggests that the most difficult area for faculty will be the articulation of learning outcomes. I agree 100%. I have seen this in action. Especially in certain disciplines, faculty have great difficulty creating precise learning outcomes. The author suggests that there is "a distinct lack of certainty within our own discipline [history] about what constitutes particular learning outcomes" (p. 5). Near the end of his article the author restates his thesis in two sentences: "If we could put a man on the moon, we ought to be able to figure out reasonable solutions to the technology of assessing student learning. And if we can, we ought to be able to improve quite dramatically the quality of the education we offer our students" (p. 7). Hear, hear.

The author closes with a call for faculty to solve the problems and then use the data for transforming pedagogy and curriculum. The audience for this article is faculty and administrators. I found this article an open and easy read. I also found it to be a good counterpoint to the Powell (2011) article in two ways. First, the Katz article is written in a straightforward, communicative style meant to get his points across. Second, he addresses the real issues without griping. This article is also a good counterpoint to the Gilbert (2010) article, which focuses on agendas instead of practices.

Keeling, R. P., Wall, A. F., Underhile, R., & Dungy, G. J. (2008). Assessment reconsidered: Institutional effectiveness for student success. Washington D.C.: National Association of Student Personnel Administrators (NASPA). {Book}

This book offers a good look at important aspects of assessment. The book covers many of the issues I have seen in my work. Many parts of the text read like a primer for academic leaders who want to learn more about how to build assessment at their institutions. Each chapter discusses useful information about such things as rigor, the costs of assessment, policy, and many other issues. Although this book covers some of the same territory as other materials in this bibliography, it has a couple of useful differences. First, this book looks at assessment through the lens of a complete novice to the concepts; however, the authors don't speak down to the reader. Second, the authors assume the reader has much less experience with assessment than other authors found in this bibliography.

In chapter one the authors cover the basics of the landscape of assessment and how it fits (or doesn't) into higher education. They cover some fundamental concepts such as the nature of institutions, learning, and assessment. Then they start making connections among these to build up their model and their reasoning. For example, they make the case that higher education is typically organized into silos. They also make the case that certain skills, such as critical thinking, that are so valued by so many faculty are acquired across the silos. Thus, they argue, in order to understand the real impact of our institutions, institutions should do comprehensive, horizontal assessment to create (or measure) that impact (p. 7). This is correct. I've written on the concept of learning and measuring enduring traits.

Chapter two reads like a primer in assessment. The authors define such things as formal and informal assessment (p. 10) and the stages of an assessment effort (p. 11), and they highlight some fundamental concepts about how to make assessment work correctly (p. 12). They then turn their attention to learning outcomes (p. 13). The authors then take an interesting tack that I've not seen in any but advanced texts in assessment. They argue for basing all that we do on theory: "theories and models provide frameworks within which good ideas and strategies can be supported" (p. 15). While I agree, and I regularly use theory and models for all that I do, I would not expect it of novices in assessment. Thus, to have parts of this chapter cover very basic principles only to follow up with highly advanced ones is incongruous at best. The authors continue with theory and models for several pages. However, they return to basics on page 25 with their description of taxonomies of learning. Although they focus only on Bloom's taxonomy, the concepts are quite well explained. This fundamentals chapter ends with descriptions of the political context of assessment and some other roles for assessment, such as promoting the public good.

The title of chapter three is "Rigor in Assessment" (p. 34). I had expected a return to advanced concepts such as the unified theory of validity and the like. To their credit, the authors instead covered very basic questions to ask ourselves as we get involved in creating assessment systems and tools. I like this list because each of the elements listed, if addressed, will avoid or mitigate most of the problems I've encountered in assessment within higher education. I've paraphrased the authors' list and then offer parenthetical commentary based on what I teach.
1. What should students at our institution know and be able to do? (Build good learning outcomes statements.)
2. How should our students' values, ideas, knowledge, and skills change as a result of attending here? (Tie outcomes to our mission statement and our high-level goals.)
3. How should our programs and services be aligned and integrated to support these changes? (Create mapping to form a coherent program for attaining our goals.)
4. What criteria will determine accomplishment of the learning goals? (Create such things as rubrics and other tools based on concrete criteria of success.)
5. What learning strategies will best link the curriculum to outcomes? (I love this one. Many authors leave this one out. This is the critical connection between the high-level goals and teaching in our classrooms.)
6. What are the best data collection methods for assessing our programs? (Select or design the best assessment methods.)
7. What resources exist to help us? (Self-explanatory, but I would have placed this issue closer to the top of this list as it should occur early on.)
8. With whom will we share our results and findings? (Identify stakeholders. But the authors missed a point. They failed to ask what sorts of information these stakeholders would be looking for, what they might do with the information, and, in cases such as reporting to governments, what format the reports must take.)
9. What is our program/department/institution's capacity for change? (I love this one as well. Only a few authors hit on this point. Why assess things we have no control over? Be sure the data we collect are actionable.)
10. What skills and knowledge do our faculty already have, and what professional development should we provide to help them engage in assessment? (This is consistent with my experience and with nearly every survey of faculty across the nation. Faculty are not experienced with assessment. They need our help to fully reap the benefits of assessment.)
(Paraphrased from pp .)

Following these basic questions the authors summarize by providing a list of principles that promote rigor in assessment.
1. Assessment practice should be transparent: "the purpose, value, stance, and use of assessment are shared openly in the planning, implementation, analysis, reporting, and use of results" (p. 38). {This first principle is essential to fairness and validity.}
2. Assessment should engage stakeholders. {This seems like a simple concept, but in my experience one that is often overlooked. Assessment is done BY people FOR those same people. Assessment is not done by others to evaluate those people. That is, assessment data should be a partnership between students and faculty. The results should be given as formative data to students and then aggregated for course, program, and institutional use. Otherwise, the only value of the data is checking off the accountability requirement. A waste of resources, to be sure.}
3. Method should respond to question and context. {This is another seemingly obvious principle that I often see broken. Rather than determining what they want to know, novices jump right to HOW they will know it. They jump to building a survey before they even know exactly what the data are supposed to answer. Even if they know what the data are for, they fail to consider that there are other means for collecting the data that would provide stronger and more actionable evidence.}
4. Assessment should strive to adhere to the tradition of each employed method. {This is a fancy way of saying to follow the rules of good science. When using a particular assessment tool, use it right. Provide for such things as proper samples, objective measures, etc.}
5. Assessment should be ethical. {Hear, hear. This has to do with fairness and with protecting the participants. Such things as names should be left out of reports. All involved should protect against misuse of the results. For example, to repurpose student achievement data to evaluate faculty or programs would be unethical (and illegal according to US courts).}
6. Assessment should attend to questions of the public good. {I've not seen this in a measurement rigor list before. The closest I've seen are matters of consequential validity. Perhaps this is a reaction to higher education's social justice role?}
(Principles quoted from pp .)

Chapter four is a tad more advanced. Here the authors begin laying out the methods for performing assessment. They cover such things as the processes involved, types of assessment, rules for each type, etc. They organize around the concepts of quantitative, qualitative, and mixed-methods approaches to data collection.

Chapter five is an interesting switch. The authors go from the nitty-gritty, in-the-weeds discussions of the previous two chapters to a view from 10,000 feet. In this chapter they outline how an institution should "build capacity" (p. 56) for assessment across the institution. They outline how professionals from across the campus should collaborate to map learning. That is, where and when will we help our students learn the x, y, and z elements that we say we value? They briefly (and not adequately, in my mind) cover how to go about helping students integrate their learning. This is a very complex task that deserved more attention. Then the authors discuss how to go about supporting student learning and close with how to document that learning. This chapter is a call for a fundamental change in how we organize education. Rather than independent silos (or fiefdoms, as other authors call our colleges and departments), we ought to be about collaboration at a very high level to benefit our students.

Chapter six follows up on the previous chapter by outlining specific tasks that should be accomplished to build capacity. I was left with many questions and felt there was too little meat and potatoes in this chapter to really help readers. The next four chapters cover specific aspects of implementing assessment at an institution. Chapter seven discusses the real and political costs involved. Chapter eight covers the concepts of embedding assessment in the culture and mission of the institution. Chapter nine then discusses policy issues related to assessment. Chapter ten attempts to discuss assessment as scholarship. I was not impressed by this attempt. The last chapter is simply a summary of what the authors felt were the most important points from the book.

The audience for this text is anyone interested in getting into assessment at a fairly high level. While the fundamentals are very basic and simple to understand, the authors' choice of topics suggests this was written for academic leaders rather than regular novice practitioners in the classroom. I would recommend this text to program chairs, deans, provosts, and newly minted institutional assessment coordinators looking for a text that covers the territory. A word of caution: this text is not a methodological how-to text. It is a relatively brief book, yet it fully frames the issues related to building a successful assessment system at an institution. For how-to help, I recommend other works such as the Walvoord (2004) text as a VERY simple guide or the Leskes & Wright (2005) text for a bit more detail. An interesting aside: I also noticed that the authors did not mention the 2006 Spellings Commission report. Instead they provided a vague reference to it when they mentioned attempts to standardize and how this provoked the national dialog on accountability (p. 2). Perhaps they decided not to provoke the sleeping dog?

Knight, P. T. (2002). The Achilles' heel of quality: The assessment of student learning. Quality in Higher Education, 8(1). A draft of this paper was retrieved May 27, 2011. {Journal Article}

This article is a call for a return to documenting the inputs of higher education rather than using output measures of student learning aimed at improving learning. Although I disagree with Knight's call to return to documenting inputs (as opposed to fixing the problems with poorly constructed outputs), his article is quite useful to the careful reader because it goes into great detail about what can go wrong with assessment. Knight's framing argument is a dichotomy of quality systems (page 108), which he then uses as a straw man to argue his case. Knight argues (and I tend to agree, by the way) that educational systems may need a very strong push to overcome business as usual because existing systems tend to resist change. But he concludes that important reliability-versus-validity problems cannot be solved and that education is too entrenched to change. Thus, he argues, we should return to documenting inputs to education rather than measuring outputs. Oops. That said, the author spends most of his energy providing an excellent outline of what can go wrong with assessment. He even hints in several places about how to fix it. He offers good arguments for better operationalization of outcomes, better definition of higher-order outcomes, better construction of tests that go beyond simple (reliable) knowledge multiple-choice exams, and even a call for better rubrics that contain well-defined criteria. Then he provides a good argument for sharing and gaining consensus across stakeholders so that comparative decisions can be made. The article was written for educational administrators. However, it is quite readable and could be digested by just about any faculty member in higher education. I felt that Knight's initial straw man defining his two quality systems was a bit severe in that it brings out what is worst in bad assessment and what is best in what he proposes. (OK, so I'm biased in the other direction.) I also felt that his suggestion that higher education should not change, simply because large operational systems resist that sort of change, was not a sufficient argument.

Kuh, G. D., and Ewell, P. T. (2010). The state of learning outcomes assessment in the United States. Higher Education Management and Policy, 22(1). Not available online. {Journal Article}

In this relatively brief article the authors present findings of a study that are better explained in another document (Kuh & Ikenberry, 2009). I included this entry in my bibliography because of a couple of useful points the authors made in this article. They argue that worldwide factors, not just accreditation, are driving the assessment movement. I agree; accreditation is one of many drivers of the assessment movement. The authors define the two "sometimes divergent" (p. 11) goals of assessment: accountability, which they relate to "competency attainment" (p. 12), versus assessment of learning for ongoing improvement. I agree with this characterization, although I'd add that these goals have to do with summative versus formative assessment, respectively. The authors mention that reaching either of these goals may well include the use of performance assessment scored with rubrics (see page 13). The audience for this article is higher education leaders. While much of the article is just a restatement of the findings of a survey, which is presented in full in the Kuh and Ikenberry (2009) report, the issues I've listed above were useful to me. See the next bibliographic entry for the survey findings.

Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). Retrieved August 12, 2011. {Report}

Kuh and Ikenberry report on a national survey of higher education institutions that asked several questions about the institutions' progress in joining the assessment movement. They found that a majority of institutions had by that time created a set of common learning outcomes and used various approaches to measuring the outcomes. The majority of institutions used assessment data primarily for addressing institutional and program-specific accreditation requirements. Most institutions were facing two challenges. First, funding was (and still is, by the way) limited for assessment activities. Second, their assessment programs lacked assessment expertise for guiding their efforts. The audience for this document is any faculty member or institutional administrator who is involved with assessment. What is important about this document is that it provides empirical backing for many of the ideas presented elsewhere in this bibliography. For example, Ewell (2008) contends that accreditation has focused institutions on assessment and that institutions' rhetoric far exceeds reality as it pertains to measuring such things as critical thinking (p. 2).

Leskes, A., & Wright, B. D. (2005). The art and science of assessing general education outcomes. Washington, D.C.: Association of American Colleges and Universities. {Book}

In the words of the authors, "five strands of work on improving undergraduate education are converging in an effort to assure quality and accountability" (p. 2). The five strands very clearly outline how education is changing from inputs to outputs (p. 7). The authors call for abandoning the old paradigm of looking at inputs (such as what constitutes good content for each course) in favor of looking at learning via assessment, and then asking how to improve. Their first strand deals with reform of general education, but this could easily be expanded to a broader view of reform across the curriculum. The second strand highlights the growing push for assessment of learning rather than assessment simply for accountability. The authors point out that we have seen an evolution on this issue: a move from distrust and avoidance to a growing level of acceptance. Therefore, they argue, there is a strong need for training and professional development for our faculty. The third strand describes how accreditation standards are changing from demanding inputs to requiring outcomes assessment within each institution. (See Ewell, 2008 for a fuller look at this issue.) Their fourth and fifth strands deal with the AAC&U efforts to further the goals of general and liberal education. Both mention assessment as a critical component or activity.

Chapters two and three provide detail regarding how to institute an assessment program at an institution. The authors argue that assessment has evolved across the board into something that can improve learning as it occurs and also fulfill summative accountability roles. I especially liked how the authors explained that good assessment can be both formative (to help learning as it occurs) and summative for accountability. But the critical point is that assessment designed for formative data can be used for summative requirements, while assessments designed to gather summative data can rarely be used for formative information (see page 18). The authors argue that confusion continues on campuses about "the assumptions, methods, purposes, and even the basic vocabulary of assessment" (p. 7). The authors then turn their attention to the need for faculty training. They reason that assessment should be led by faculty, who are currently not equipped to handle assessment, thus leading us to training requirements. Throughout the book the authors highlight the assessment of complex enduring traits such as critical thinking and higher-order thinking. They point to the relative difficulties of measuring these sorts of skills and the need for performance-based tests that use rubrics for scoring. In chapter six the authors get very specific about scoring rubrics and provide both examples and descriptions of how they are used. An aside: I found the descriptions very useful because most authors simply provide rubric examples without giving the important context that one would need to use them properly.

This short book is an excellent resource for any faculty member or administrator involved with assessment. The title may put some off, but I found the book very readable and could see any faculty member using the concepts, ideas, and suggestions that the authors provide. This is one of the top 10 books I reviewed. The authors very clearly portray the shift in higher education that is happening right in front of us. Moreover, the book offers the reader a practical way to get involved.

Maki, P. L. (2009). Moving beyond a national habit in the call for accountability. Peer Review, 11(1). Retrieved August 11, 2011. {Journal Article}

This is a simple article aimed at convincing the reader that our habit of using multiple-choice exams is an insufficient means for measuring important educational outcomes. I included this article as evidence that many faculty agree with this fundamental point. In my field (measurement) this article restates foundational truths that psychometricians have accepted for many years. Multiple-choice exams are only good for measuring certain types of outcomes. Other forms of assessment are far more likely to collect evidence of such things as critical thinking, integration of knowledge, etc. The author closes her article with a description of a solution: the AAC&U VALUE approach to assessment of eleven liberal arts outcomes. She argues that VALUE rubrics offer educators a way to realistically represent learning in their students (p. 17). For a fuller description of the VALUE idea, see Rhodes (2009). The audience for this article is any faculty member. It is clearly aimed at explaining why standardized testing is not the best means for measuring certain types of educational outcomes. Therefore, I believe this article is also aimed at policy makers and accreditation organization leaders.

Maki, P. L. (2010). Coming to terms with student outcomes assessment: Faculty and administrators' journeys to integrating assessment in their work and institutional culture. Sterling, VA: Stylus. {Book}

This is an excellent book for anyone interested in developing an assessment system at an institution. I included this entry in my bibliography along with copious notes because these real-world situations offer the reader a better view of the assessment movement and of how unique the solutions can be at each institution. Further, these vignettes offer pearls of hard-won wisdom that may well forestall some errors and/or overcome some misconceptions readers might have. My summaries and commentary are organized around each institution's story. My apologies for the choppy writing.

California State
Adopted the AAC&U LEAP program. GE assessment was a hard sell. They originally tried to work with both a curriculum committee and an assessment committee. The two committees evolved independently, on different timelines, and with overlapping purposes. The two committees were sometimes in conflict and did not coordinate. Thus, faculty had (perceived, at least) extra work driven by the competing purposes. Both faculty and the writers agreed that they were producing reports that had little meaning. The situation was so complex and so entrenched that they went to the extreme step of placing a moratorium on all new curriculum and assessment approvals, citing faculty dissatisfaction with the meaningless processes and assessments. During the two-year shutdown they retooled curriculum and assessment processes under a single committee in an effort to change the process and the underlying culture. They set standards for assessment:
- Each course had to have direct assessment of learning, with optional indirect measure(s).
- Assessment had to be ongoing.
- Assessment had to relate to at least one GE outcome.
- Assessment had to be embedded as part of the graded assignments to be more efficient.
- Programs had to report on the changes made each year and the effects of those changes the next year (plus the new changes).
- Results had to be written to help other faculty (lessons learned) and had to be published.
They created a learning community of faculty around each LEAP goal (similar to my institution's college-wide outcomes) and assigned faculty who were involved with assessing a given LEAP goal to that community of practice. The community members had to publish their respective assessment results and discuss them with the community. Thus, the real value was not the data but rather the effort to make meaning of the data collectively, and the collaboration. This collaboration led to many improvements based on what they collectively learned. The authors say that keeping things flexible helped the communities succeed. Faculty could build just about any sort of assessment as long as it fit the LEAP model. Community members could choose to engage or not. Many shared best practices, good news as well as challenges. Thus, coordinating their assessment efforts across the institution and creating communities of practice helped them build a successful assessment system.

Clark College (a small private Catholic college)
Adopted Peggy Maki's book to guide their assessment system development effort. They assigned a Director of Assessment for the campus. They split their assessment system into two parts: GE and program assessment.

GE assessment: They mapped out all their GE courses and determined where and when their GE goals would be assessed across all of them. This took huge coordination and a top-down approach. {An aside: I've used this approach before. It was exhausting but bore wonderful fruit.} GE foundational courses, division-offered reinforcement courses, and capstone courses were all mapped in a three-tier approach. They kept each assessment very simple to minimize the impact on faculty. They held a GE assessment day each year for all participating faculty to go over the results and collaborate with one another. The primary outcomes for these assessment days were making meaning of the results, making plans for improvements, and making plans for subsequent data collection. They felt that the collaboration and engagement were critical to their success. The reporting process was streamlined to, again, minimize the impact on faculty. They used rubrics for measuring all GE outcomes.

Program assessment: They instituted one program assessment day each semester. This event was mandatory for all faculty. The morning session was devoted to training. Afternoons were dedicated to faculty working on their plans based on what they learned. Subsequent assessment days were devoted to aligning institutional and program missions and goals, curriculum mapping, refining outcomes statements, and building (or improving) rubrics. Thus they took a step-by-step approach over a two-year period to realize their full assessment system. They tried large measures of program success/impact by using signature assignments scored with rubrics by third-party graders. This approach was inefficient and was soon scrapped. They initially required a very detailed 12-part program assessment report each year from every program. But it seems they then softened their hard line and now require a more feasible report that covers the critical issues:
- What outcomes were measured this year
- What assessment tools were used
- What data were collected
- What decisions were made
- What actions were planned to make improvements
They have a graphic on page 35 that outlines how the reports are used: data and decisions are reported (see the bullets above), reviewed by the board of trustees, and published for all faculty; decisions are enacted at the program and course levels; additional data are collected (including IR data designed to directly support the expressed goals); and the cycle repeats. This institution used a decidedly top-down approach to organizing their system and providing scaffolding for individual faculty. They required faculty participation. They trained all faculty in each element of the assessment system. Then they left the actual assessments, meaning-making, and plans strictly to the faculty.

College of San Mateo (community college)
The authors started by citing problems typical for community colleges, such as underdeveloped articulation agreements, swirl, very low retention, and the fact that 46% of their students drop out after one semester. They focused their assessment system mainly on course-level assessment. They received a grant to try to measure a college-wide outcome (writing). They tried several approaches including placement testing, embedded assessments, and e-portfolios. They wanted to identify weak writers early and help them with various programs. Then their faculty recommended assessing writing across the curriculum. It was unclear to them exactly what this meant and how they would do it. They investigated and learned that e-portfolio reviews could be useful. The authors point out that "assessment itself is not the point" (p. 75). Rather, the collaboration and changes are what matter. This community college performs course-level assessment and periodically performs e-portfolio assessment each year for one major outcome.

Eastern Kentucky University
This institution took a very different approach. They found that placing assessment under the umbrella of strategic planning offered the context necessary to develop excellent processes and to promote its role in the overall campus culture (p. 79). They explained that strategic planning asks "what should we be doing and how will we do it," while assessment asks "how well are we doing what we are supposed to be doing?" Thus, the strategic plan drives the assessment at EKU. But the faculty did not get to this point easily or quickly. Faculty went through the stages of mourning (denial, anger, bargaining, depression, acceptance; p. 77). This took time. Further, they had to start top-down to provide a framework that fit strategic planning. This did not go over very well with faculty. They use program assessment that includes both educational and planning goals. (I assume they mean learning goals and program-wide success goals.) Their programs report their goals, assessments, results, and plans in a master database. In the database the programs must link their assessed educational and planning goals to the institutional goals and strategic directions (p. 78). The authors provide an example of how the faculty in one department went through the stages of mourning. I really liked the way they explained all the issues by highlighting the sorts of things that occurred at each stage. They used funny names for the faculty to highlight the sorts of personalities involved.

Stage 1: This can't be happening! Dr. Foresight (we better do this soon or else), Professor Delay (this can't be happening, and we already give grades), Professor A. Pathy (don't worry, this will go away), Professor Typical (this is an infringement of my academic freedom), Professor Drone (this is not my job; I'm not taking on anything else), Chair Martina Nett (I don't have time to think about this now).

Stage 2: Why me? It is not fair! Professor Delay (those *#&% administrators are dumping this on us and we don't have time for it out here in the real world), Professor Typical (academic freedom! He mailed copies of the guidelines from the American Association of University Professors to every faculty member on campus), Professor A. Pathy (this is useless paperwork, not the job of a PhD; let the administrators do it), Professor Drone (whined that they are gathering data to fire me), Professor Paranoia (my chair is really out to get me), Chair Martina Nett (just do it), Professor S. Choir (I'll sue!).

Stage 3: If I do this, what will you give me? Chair Nett (thought: hey, I can use this to ferret out those bad teachers; Professor Paranoia was right!), Professor Foresight (not so fast, Chair Nett, you can't use assessment data that way; finally the voice of reason and law), Professor Typical (hey, let's lower the bar, teach to the test, and look good), Professor Foresight (would you be proud of your students and yourselves afterward? Good or bad data is not the issue; what you do with the data is what matters), Chair Nett (let's hire a consultant to help us: Ms. Getta Long). They tried it, and found that their students' learning was not as good as they thought it was.

Stage 4: I don't care anymore. Professor Typical (it is not my fault; those students are just not as good as the ones we used to have here), Professor Delay (OK, we have talked, we tried it, it didn't work; let's forget it until next year), similar comments from all other faculty, Chair Nett (hold on there, let's bring in that consultant).

Stage 5: Finding a way forward. Professors Delay, Typical, etc. (hey, the food at this workshop is not so bad), Professor S. Choir (the extra pay will cover my lawyer's retainer fee!), Consultant Getta Long (discussed the potential meaning and uses for the data and got faculty talking with each other about what they might do next), Chair Nett (amazed to see her faculty actually talking to each other).

I really enjoyed this story. It highlighted many of the issues listed in several of the other publications found in my bibliography.

Gallaudet University (for the deaf)
Identified key ingredients to their success with assessment:
- Identify experts and champions
- Use those experts to influence others
- Share information, lots of it
- Get support from upper administration
- Continually reach out via influential champions
- Build communication channels (they used monthly meetings of the assessment coordinators)
- Use homegrown models for assessment
- Use rubrics
What they learned from the experience:
- Assessment can benefit learning and teaching
- Assessment is not rocket science
- Assessment is not just reports
- Assessment fosters discussion about the discipline and about the program

University of Maryland and University of Southern Maine
The authors focused on listing the barriers and challenges of implementing assessment at their institutions:
- There was a misconception that assessment was strictly a bureaucratic process of accountability rather than a tool for improving learning and teaching. {This had to be quashed, but only through time and repeated messages from above.}
- Grades were thought to be plenty good enough for assessment. {This took education, and then education about alternatives, especially rubrics.}
- Assessment takes limited resources. {True, but they had to show that the cost-benefit was there.}
- There was a misconception that higher education outcomes can't be measured. They realized that this had three parts. 1. First, there were structural issues: when you have a distribution method for your GE program, it is hard to measure a sequence and growth. {This is absolutely true.} 2. Some constructs are just hard to measure. {True, but education and support can overcome this.} 3. It turned out that the biggest problem was that as faculty began getting down to brass tacks on what each construct means, they disagreed and discussed things endlessly. Some departments, such as the humanities, were more difficult than others. {The authors pointed out that it took several years of fighting over single words in sentences before they published their outcomes.}
- Few faculty understand the specialized language and process of assessment. {True, so educate and support them.}
- They had a lack of coherence in their models. GE? Program level? Course level? What are we doing? {They had to agree on this first.}
- Some traditional data, such as NSSE, just didn't give them useful information. {Also true.}

The audience for this book is anyone who needs to organize assessment at an institution. The vignettes were pretty good. Many of them are enlightening both for what the institutions did right and for what went wrong. The compilation also helps me to see the vast range of ways institutions have built their systems. Faculty who are assigned to do assessment may benefit from this book as well. Especially enjoyable was the Eastern Kentucky vignette.

McGuire, P. A. (2009). Accreditation's benefits for individuals and institutions. New Directions for Higher Education, 145(Spring). Not available online. {Journal Article}

The author believes that faculty members ought to volunteer to be members of accreditation site visit teams. She appeals to civic duty first, but then mentions self-protection: we need faculty to own the process to guard against state and federal governments assuming accreditation roles with less effective, more burdensome regulatory behavior (p. 30). The author argues that while the Spellings Commission work and the reauthorization act (2008) didn't give explicit powers to government, the net result of the debates is "a heightened awareness on the part of team members and chairs that their work occurs in a fishbowl" (p. 32). The author offers her ideas about the characteristics of a good site visit team chair, including being mature, a good organizer, a facilitator and coach, and a leader (p. ). The article then offers a case study showing how accreditation can help a college.

The audience for this article seems to be any faculty member.

Miller, R. (2007). Assessment in cycles of improvement: Faculty designs for essential learning outcomes. Washington, D.C.: Association of American Colleges and Universities. Not available online. {Paper}

This document outlines the Association of American Colleges and Universities' LEAP program. I will use the AAC&U introduction here: "Liberal Education and America's Promise (LEAP) is a decade long national initiative launched by the AAC&U in 2005 to align the goals for college learning with the needs of the new global century. LEAP seeks to engage the public with core questions about what really matters in college, to give students a compass to guide their learning, and to make the aims and outcomes of a liberal education (broad knowledge, intellectual and practical skills, personal and social responsibility, and integrative learning) the accepted framework for excellence at all levels of education" (p. v).

The author begins to frame his comments with a description of the shift in education from a core curriculum to a focus on many disciplines. He also points out that higher education has been showing symptoms of a move toward constructivism, from teacher-centered to learning-centered cultures (p. 1). But he believes that few faculty have been taught how to teach, and fewer still have been taught about assessment, so faculty struggle with the concepts. Later he suggests that much of the resistance to assessment comes from the misperception that assessment is strictly limited to quantitative work. If this were true, then the argument about things being un-measurable would be true. But that is simply not the case. Good assessment is far more encompassing, with many modalities and forms for getting at all sorts of learning outcomes. In fact, when it comes to liberal learning outcomes, other forms of assessment such as performance assessment are required. BUT the author goes on to point out that many policy makers and politicians still believe that standardized quantitative testing is the only way to get good data. Thus, we have the potential to be caught between a rock and a hard place. He reiterates his earlier comments when he quotes Shavelson (2007): "there are no standardized measures for most outcomes of a liberal education" (p. 2).

The book lists the AAC&U's Essential Learning Outcomes:

Knowledge of Human Cultures and the Physical and Natural World
- Through study in the sciences and mathematics, social sciences, humanities, histories, languages, and the arts
Focused by engagement with big questions, both contemporary and enduring.

Intellectual and Practical Skills, Including
- Inquiry and analysis
- Critical and creative thinking
- Written and oral communication
- Quantitative literacy
- Information literacy
- Teamwork and problem solving
Practiced extensively, across the curriculum, in the context of progressively more challenging problems, projects, and standards for performance.

Personal and Social Responsibility, Including
- Civic knowledge and engagement, local and global
- Intercultural knowledge and competence
- Ethical reasoning and action
- Foundations and skills for lifelong learning
Anchored through active involvement with diverse communities and real-world challenges.

Integrative and Applied Learning, Including
- Synthesis and advanced accomplishment across general and specialized studies
Demonstrated through the application of knowledge, skills, and responsibilities to new settings and complex problems.

The messages of the book are simply that lots of smart faculty collaborated to create this list of liberal arts outcomes and that the outcomes make sense. The book offers stories from campuses across the country as evidence and illustration. Each story describes the institution's goals, how they fostered attainment of the goals, how they assessed attainment, and the improvements they made as a result of their assessment efforts. However, I note that several of the examples were not very strong. They contained weak reasoning and poor assessment practices. But given the publication date, perhaps better stories were not yet available?

The audience is anyone seeking to build a system for measuring LEAP goals. However, other texts do a better job of explaining these issues, and novice readers might gather some misconceptions based on some of the poor practices listed in the stories.

Murray, F. B. (2009). An accreditation dilemma: The tension between program accountability and program improvement in programmatic accreditation. New Directions for Higher Education, 145(Spring). Not available online. {Journal Article}

This article is more about the Teacher Education Accreditation Council (TEAC) than about accreditation in general. But the author brings out a few important issues. Most standardized assessment methods have significant flaws; there is no silver bullet. The author argues that student competence is not the issue for TEAC accreditation. Rather, the program itself is the unit of measure (p. 60). The quest for measures of competence and value added can distort the primary purpose of accreditation: ongoing improvement of teaching and learning. The author claims that the TEAC approach resolves many of these issues. The factors they address include:
- Valid evidence that graduates are qualified, competent, and caring teachers.
- Evidence that the program has a quality control system that works as designed and improves program quality.
- Evidence that the program has the capacity for quality, as measured by the fact that a regionally accredited institution is committed to the program (p. ).
"The program in other words is shown to have the capacity for quality, the knowledge of how to control quality, and the wit to achieve quality results from its graduates' performance" (p. 62).

Interesting asides:
- Teacher education programs are not necessarily the cheapest programs on campus. TEAC has found that teacher education program costs are above the norm (p. 63).
- The overall quality of preservice programs is not as low as some claim (p. 63). Preservice candidates' grades in arts and sciences courses are equal to or better than other students'.
- Performance in courses is not related to performance in student teaching.
- Preservice program faculty's research skills are rather low.

The audience for this article seems to be those interested in preservice teacher program accreditation.

Murray, M., Perez, J., and Guimaraes, M. (2008). A model for using a capstone experience as one method of assessment of an information systems degree program. Journal of Information Systems Education, 19(2). Retrieved August 8, 2011. {Journal Article}

I found this article because I was searching for information on rubric development. While that topic is mentioned, the piece is really about building assessments to meet accountability requirements. This article provides a very readable description of the elements of an assessment program, including:
- Accreditation-driven assessment
- Methods for creating good outcomes
- Methods for mapping outcomes to assignments and to rubrics
- How to analyze the results of these assessments to satisfy accreditation and other accountability requirements

The article is written for faculty members who want to build a capstone experience. But it was clearly driven by accreditation and written for accreditation team members. The authors even cited their accreditation commission in their abstract.

Powell, J. W. (2011). Outcomes assessment: Conceptual and other problems. AAUP Journal of Academic Freedom, Volume Two. Retrieved August 16, 2011. {Journal Article}

This author presents his very strong and negative opinions about performing assessment in higher education. In one way, I appreciate this essay because it provides examples of several common misconceptions that faculty have about assessment. The author's arguments are certainly familiar. He describes misconceptions such as:
1. Assessment damages academic freedom.
2. Assessment measures what I don't value.
3. It takes too much of my time.
4. It replaces grading, and grades should be enough.
5. It is guided by administrators and captains of business instead of faculty.
6. It is a passing fad.
7. It is really about evaluating and punishing faculty.
8. It is designed to allow administration to cut programs.
9. The data are never used. {I found this one interesting in light of #7 and #8.}
(Paraphrased from pp. 4-17.)

I am not surprised that the impetus for this scalding article included issues that are only tangentially related to assessment. His strongest comments were not about assessment at all. He argued strongly for what he felt were the proper components of general education, which included adding a requirement for every student to take his two courses. He also argued that the field was headed in the wrong direction and didn't know the true purposes of higher education. Like others who didn't like assessment, he based his comments on terrible assessment practice and on assessment aimed strictly at accountability. However, his arguments were so stilted that, at best, his characterizations were based on ignorance. But given his obvious mental abilities, I wonder whether his exaggerated characterizations were instead intentional tools for grinding his ax.

The author begins by outlining conceptual problems with the whole notion of assessment and then launches into an argument that assessment should not be used because it lacks experimental study results supporting its viability (e.g., multiple national studies utilizing control and study groups to prove the efficacy of using assessment). I had to ask myself, while reading this part of the article, whether any of the pedagogical strategies he currently uses enjoy such empirical backing. Of course, I know the answer is likely no, because I've studied pedagogy and teaching strategies. They are based on best practices rather than true experimental research because of the ethical constraints on such studies with students. Thus, the author's first major argument against assessment does not hold water for me.

He suggests that the real backing for assessment lies not in experimental research but instead in the demands of administration and the captains of industry who are pulling the strings. He suggests that the arguments for assessing outcomes are expressions of models of corporate business management (p. 9). Now we start to see more of that ax he wants to grind. He suggests that assessment vilifies qualitative data, which of course is incorrect. Here is one of the reasons I think his characterizations are intentional: the common misconception found among faculty members across the nation is that assessment prefers quantitative data over qualitative data. Powell instead claims that assessment makes abusive and malicious statements about qualitative data. He goes on with this same line of reasoning when he claims that assessment "takes in the dirty laundry from quantitative [assessment]" (p. 10). He finally reveals another ax he wants to grind on page 11 when he explains that educators and administrators don't have a clear picture of the contemporary bachelor's degree and what it means; if they did, general education would not be such an ineffective mess (p. 11). The author calls for education that leads to social justice, avoiding preparing students for existing power structures (p. 12). While this may be a common argument in some circles of higher education, perhaps even a worthy goal when tempered with reality, it has little to do with his arguments about assessment. He argues that assessment pulls him away from teaching, which, of course, would be true if he added a layer of assessment beyond his current evaluation strategies. He later states that assessment divides the faculty. He suggests that assessment reinforces those faculty members who enjoy rebuking each other in underhanded ways (p. 14). It seems that he mistrusts his academic leadership, claiming that things are guided by captains of industry and that this nebulous group extends its power into the academy via "the chancellors and presidents they hire (and deans, if trustees can pull it off down that far on the organizational chart)" (p. 13). He goes on to explain how data are mishandled, misused, and ignored, and how assessment simply creates paperwork. Then he turns his attention to his belief that those who are calling for assessment on behalf of stakeholders like parents and others are serving their own agendas rather than speaking for those populations.

In his final section he makes recommendations for the future. Consistent with his essay, his next steps have more to do with reevaluating the overall purposes of higher education and fixing general education. He asks for general education to stop being pushed off the bench, for stopping the expansion of the majors, for stopping the trustees from pushing for faster throughputs, and for expanding general education by about a year (p. 21). He closes his recommendations by arguing for a two-year sequence of courses [not surprisingly, in his discipline and area of expertise] on the intellectual history of the world (p. 21).

The audience for this essay is unclear. I believe it is written to other faculty, but the messages seem to be intended for administrators and academic leaders. I found three things of use to me in this article. First, it serves to explain how the complex issues in higher education can be used to serve political agendas. The article was not really about assessment at all. Assessment was simply the hot button the author used to address his agenda. Second, and more valuable, this article serves to exemplify (and in this case magnify) misconceptions about assessment. Third, it serves as a reminder that bad assessment practice can be damaging to faculty and students.

Rhodes, T. (2009). The VALUE project overview. Peer Review, 11(1), 4-7. Retrieved August 11, 2011. {Journal Article}

Dr. Rhodes is the Vice President of the Association of American Colleges and Universities (AAC&U). His article outlines the VALUE project, which is part of the larger AAC&U initiative known as Liberal Education and America's Promise (LEAP). The VALUE project is an assessment methodology that uses authentic, course-based student assignments and other work. I must admit my bias here. As a psychometrician I love using performance-based assessments for several types of learning outcomes because they fit both theory and best practices as defined by my discipline's standards. I also strongly favor good scoring rubrics because, according to the research I've reviewed over several years, rubrics tend to improve formative evaluation, improve learning, and improve grading.

The VALUE project brought educators, employers, and policy makers together to determine a set of essential learning outcomes in liberal education. The result was a list of eleven outcomes:

Intellectual and Practical Skills
- Inquiry and analysis
- Critical thinking
- Creative thinking
- Written communication
- Oral communication
- Reading
- Quantitative literacy
- Information literacy
- Teamwork
- Problem solving

Personal and Social Responsibility
- Civic knowledge and engagement, local and global
- Intercultural knowledge and competence
- Ethical reasoning
- Foundations and skills for lifelong learning

Integrative and Applied Learning
- Integrative and applied learning
(Downloaded from page one of the website.)

The participants in the project drafted scoring rubrics for each of these outcomes so that educators could assess them using more authentic student work (as compared to multiple-choice exams). The rubrics are somewhat holistic. They do have multiple criteria rows in the rubric tables, but each row is a set of interrelated elements of quality. The rubrics can be found online.

The audience for this article is all educators. However, it seems to be written as a marketing piece for others such as policy makers. The piece argues for use of the eleven outcomes, use of portfolios and other authentic assessment methods, and use of the VALUE rubrics for scoring these student artifacts.

Rogers, G. (2003). Do grades make the grade for program assessment? Communications Link. Accreditation Board for Engineering and Technology, Baltimore, MD. Retrieved May 27, 2011 from UPDATE/Assessment/Assessment%20Tips4.pdf {Online Article}

This is a very short article that really has only one central message: faculty can't expect to use course grades as measures of student learning. The reasons are many, but they boil down to the fact that many things unrelated to the actual learning go into course grades. In addition, even when a faculty member does limit a course grade to just learning (a very rare case), it is global learning for the entire course. Thus, parsing out what a lower or higher grade actually means in terms of actionable content is not possible. For example, if grades were on a downward trend, what would that mean? Were the students doing poorly with the foundational knowledge offered at the front of the semester, or did they flounder with the application of that knowledge? Did they do well with balance sheets but poorly with reasoning and decision making? Without more specific assessment results than an overall course grade, there is no way to answer these questions.

The audience for this article is supposed to be faculty and others involved with accreditation of their science programs. But the article would be useful to any faculty member wondering what constitutes evidence of student learning.

Rogers, G. M. (2007). Self-assessment: Quality assurance of program-level assessment of student learning. Accreditation Board for Engineering and Technology, Baltimore, MD. Retrieved August 19, 2011 from UPDATE/Assessment/Revised%20self-assessment.pdf {Online Document}

The author presents a simple and easy-to-follow rubric for judging whether an institution's assessment system is up to current standards. The rubric was developed by the Accreditation Board for Engineering and Technology, but the principles are useful to anyone in assessment and accreditation.

The audience for this rubric is assessment coordinators.

Ruben, B. D. (2007). Higher education assessment: Linking accreditation standards and the Malcolm Baldrige criteria. New Directions for Higher Education, 137(Spring). Not available online. {Journal Article}

I reviewed this article because some of the faculty at our institution are interested in winning the Baldrige Award. Thus, it behooves me to get a better handle on what the Baldrige organization might be looking for in an academic program. The Baldrige criteria are a framework that organizations can use for dealing with quality. The Baldrige organization has created versions of the framework for various business sectors and has now built one for higher education. The goals of the Baldrige program are to:
- Identify the essential components of organizational excellence
- Recognize organizations that demonstrate these characteristics
- Promote information sharing by exemplary organizations
- Encourage the adoption of effective organizational principles and practices (p. 65).

The Baldrige framework for education contains seven components (areas for exploration):
- Effective leadership approaches and governance systems used to guide the institution, department, and program.
- A strategic planning process that considers the mission of the school, how the mission is translated into action in goals and plans, and how faculty are engaged in addressing the mission goals.
- Beneficiaries and Constituencies, which looks at whether the organization has knowledge of the needs of stakeholders, how it got that knowledge, what it did with it, and whether the needs of the stakeholders are met.
- Programs and Services, which looks at the programs and services and how their quality is assured. (Interestingly, they don't mention whether the programs and services are effective or not.)
- Faculty/Staff and Workplace: maintaining a workplace culture that encourages and rewards excellence and fosters professional development.
- Assessment and Information Use, which looks at how the unit assesses its efforts to fulfill its mission and how the assessment information is used.
- Outcomes and Achievements, which asks for evidence of the quality and effectiveness of the unit.
(A graphic version of this framework can be found online.)

Ruben begins his article by reviewing the findings of the Spellings Commission (see U.S. Department of Education, 2006, for these findings). Then he addresses the drivers of change, including assessment and an associated national dialog (p. 62) and accreditation, the most visible influence for reflective review within higher education (p. 63). More specifically on accreditation, the author points out that accreditation has been a driving force in promoting increasing attention to assessment, planning, and continuous improvement through standards and guidelines (p. 63). In more evidence of the shift I have noted going on in higher education, the author suggests that new standards are giving attention to assessment rather than to intentions and inputs (p. 63). He speculates that external reviews of higher education will get more aggressive and comprehensive as time goes on (p. 65).

The author then turns his attention to explaining the Baldrige framework and how it was adapted to fit the needs of higher education. Because this is a marketing article, the author goes a bit far in his claims. For example, he suggests that accreditation has changed in response to the Baldrige framework. I suspect the accreditation organizations may disagree.

I was uncomfortable with this article. At first I was not sure why. On reflection, I realize the issue is the underlying approach. The framework asks only for evidence of excellence. Accreditation, on the other hand, looks for evidence that a unit is seeking and finding areas that need improvement and dealing with them. Further, the Baldrige framework barely mentions assessment in its criteria. When it does, it is looking only for evidence that the unit is using evidence. This is similar to accreditation standards from the 1990s that looked only at whether a process existed, not at the nature and quality of that process. I feel that this article is primarily a sales tool for the Baldrige organization, so the audience for this article is anyone who will listen. While disquieting to me, the article did give me a better understanding of the sorts of challenges faculty might have to deal with if they want to win the Baldrige Award.

Schuh, J. H., and Gansemer-Topf, A. M. (2010). The role of student affairs in student learning assessment. (Occasional Paper No. 7). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved June 24, 2011. {Paper}

I included this paper in the bibliography to show how other areas of the academy are entering the assessment movement. I won't review all the basic assessment points the authors provide. Suffice it to say that they offer a primer on assessment issues that is consistent with other resources found in this bibliography. I was not surprised by the messages in this paper. I've spoken at NASPA and other student affairs meetings. While there, I found that their membership is working toward the same goals I've seen on the academic side of the academy.

The authors present a call for student affairs staff to join the assessment movement. They suggest that surveys measuring such things as how often certain campus resources are used are not enough. Instead, student affairs staff need to find ways to measure what students gained from the use of those resources. Further, the authors suggest that these measures should be cyclical and collaborative with other assessment efforts on campus.

The audience for this paper is clearly student affairs professionals. This paper serves to show that student affairs professionals recognize not only that they need to join the assessment movement but also that they need to provide meaningful data. It also serves to show how student affairs professionals recognize the need to collaborate more with academic affairs to get a better picture of how our students are performing.

Smith, D. S., & Katz, M. A. (2007). Academic freedom in an age of assessment and accountability. Midwest Law Journal, 22. A version of this article was retrieved August 22, 2011. {Journal Article}

This was a very valuable article for me. While I've not heard a single claim of infringement on academic freedom here at my institution, I've heard it elsewhere. I was aware of the concepts of academic freedom from well before my time in college. My grandfather (Felix August Wilhelm Muhlner) received his PhD from a university in Germany in the late 1930s. He experienced an extreme version of infringement on academic freedom. Luckily for him, his research led him to disbelieve Hitler's regime, so he fled Germany for America before the balloon went up. Even with this background, I was not fully aware of the specifics and components of academic freedom until now.

The authors present a somewhat balanced review of academic freedom (AF) from a legal viewpoint. They begin by attempting to define AF: "The term academic freedom is an elusive term which has been used to describe the rights of a variety of stakeholders in the realm of higher education. It has historically been applied to an educational institution's right to operate autonomously, free from governmental control. Individual faculty members use it when discussing their right to express opinions and viewpoints, both in the classroom and in their research activities. Most recently, the definition of academic freedom has also been expanded to include students' right to learn." (p. 1). Note: my apologies; the page numbers listed in my review are based on the online version of this journal article rather than the typeset version.

The authors then highlight cases to define the dimensions of academic freedom. I'll mention each of them here:
1. Rights of the individual professor versus the government
2. Rights of the individual professor versus the institution
3. Rights of the institution
4. Rights of the student

I won't delve into the specifics of all of these. Rather, I'll focus on the areas dealing with accreditation and assessment. The interested reader will find plenty of information on all of these dimensions in the article. AF as it pertains to outcomes assessment and accreditation is a balancing act among the four aforementioned dimensions. The courts have upheld institutional decisions to require accreditation and assessment at their respective institutions, and the institutional right to self-governance (pp. 4-5). The American Association of University Professors (AAUP) has stated its support for accreditation and assessment, citing quality and the necessity of a review process (p. 5). Thus, the AAUP recognizes the responsibility of faculty to participate in reasonable accreditation and assessment efforts without academic freedom being an issue (p. 5).

The authors admit that published materials directly related to assessment and accreditation are scant, but they interpret the existing laws, direct announcements and commentary from the AAUP, and other materials to suggest that institutions take the responsibility for discovering and managing ways to improve and measure student learning outcomes (p. 6). More specifically, they suggest that if there is appropriate faculty input, then assessment poses little threat to AF (p. 6). They summarize by suggesting that institutional decisions to use assessment and seek accreditation should be considered necessary for the optimal organization and maintenance of the institution, and free from academic freedom infringement claims by faculty (p. 7). The authors close their article by warning us that if institutions don't take on these responsibilities, we may find AF in jeopardy because of mandates for specific assessments from outside the academy (p. 9).

The audience for this article seems to be faculty and administrators interested in this topic. I have to admit that I sensed a positive attitude toward assessment and accreditation throughout the article. I also have to admit that I'm not a legal scholar. Therefore, it may be possible that the authors exaggerate some of their positions to support their conclusions. However, in my biased opinion, the authors seemed to balance court cases and other publications with both pro and con opinions. Further, I reviewed the original AAUP source documents they cited (see the AAUP entries at the front of this bibliography) and found them consistent with Smith and Katz's conclusions. In fact, I found the AAUP statements affirmed assessment and accreditation even more strongly than characterized in this article.

Swing, R. L., and Coogan, C. S. (2010). Valuing assessment: Cost-benefit considerations. (Occasional Paper No. 5). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved July 28, 2011. {Paper}

The authors offer a good companion piece to the Wellman (2010) paper listed later in this bibliography. They suggest that colleges have no choice but to guess at how much they should spend on assessment to achieve the best return on their investment. A primary reason is that administrators can only focus on reducing costs, because they have no idea what benefits or ROI are coming from their assessments. The authors argue that a good place for administrators to start with budget optimization is to focus on maximizing the value of the assessment work. For example, they suggest that we ensure that the data collected matter and that the data lead to improvements in learning and student success. This argument is similar to other authors' calls for closing the loop. However, instead of citing the moral imperative to use data to help students, Swing and Coogan argue for using the data to improve the cost-benefit ratio.

The authors' second issue has to do with determining the true costs of assessment by coming to agreement about what should be included in the totals. They begin with some basic principles or assumptions:
- Cost effectiveness: maximum information is gathered for the time and money given to the work.
- Boundaries: too little funding means a lack of meaningful data; too much means an overwhelming array of data that are not used.
- Cautions: figuring an exact price of assessment is likely going to cost more than it is worth. Estimates are just fine.
(Paraphrased from pp. 5-6.)

Then the authors describe what ought to count in figuring the costs:
- Start-up costs (meetings, training that is not aimed at improving instruction, etc.)
- Instruments (or their development)
- Administration (unless it is a part of regular instruction)
- Access to vendor exam results
- Data analysis (again, unless it is a part of regular grading practice)
- Report generation
- Stipends or other incentives
- Coordinators and other overhead
- Faculty time (an estimate that does NOT include normal instructional or departmental activities)
(Paraphrased from pp. 6-9.)

Of course, the authors also list what should not count in the ROI calculation:
- Opportunity costs (what we would be doing instead)
- The benefits of effective practices encountered (and the reverse)
- Sunk costs (after the start-up period, that is)
- Administrator time for using data to make decisions or for planning (this happens regardless of whether you start an assessment system or not)

They close with some rules of thumb:
- To the extent that assessment data are used only for accountability, the spending is considered an expense. To the extent that assessment data are used to benefit learning, the spending is considered an investment. But the reality is somewhere in the middle.
- They argue, as I do, to only buy as much validity as you can afford. Perfection is not an option.
- As the institution fails to use the data, fails to close the loop, and spends time looking at issues that are interesting but beyond its control, the cost-benefit ratio decreases. As the institution focuses more on using data intentionally for specific decisions, the value goes up. Thus, spending more time carefully planning may return a much higher cost-benefit ratio.
- Also, as the institution keeps the size of the assessment system down, uses existing data, and utilizes proper sampling, the cost-benefit ratio improves.

The audience for this paper is administrators charged with leading assessment efforts.

U.S. Department of Education. (2006). A test of leadership: Charting the future of U.S. higher education. Washington, D.C.: Author. I ordered a paper copy of this report; an online version is also available. {Government Report}

I included this report because many authors cite the Spellings Commission as they discuss the specter of government intervention and/or prescription in higher education. Although most of the recommendations found within this report were not implemented, it still represents an important event in the assessment movement.

The preamble to this report goes to great lengths to extol higher education as one of our greatest success stories (p. ix). But it is all downhill from there: "Despite the achievements, however, this Commission believes U.S. higher education needs to improve in dramatic ways" (p. 3). "Our yearlong examination of the challenges facing higher education has brought us to the uneasy conclusion that the sector's past attainments have led our nation to unwarranted complacency about its future" (p. 3). The commission suggests problems in higher education ranging from access for diverse populations to costs and the level of student learning. They outline six specific recommendations for improving higher education:
1. Expand higher education access and success by improving preparation and persistence.
2. Restructure the financial aid system and add controls to the system.
3. Move from a system based on reputation to one based on performance.
4. Higher education must embrace a culture of continuous innovation and quality improvement.
5. Develop a national strategy for lifelong learning.
6. Ensure the capacity of our institutions in key strategic areas such as science, engineering, etc., via an infusion of capital.
(Pages 17-25.)

Germane to this review, the commission very specifically calls on institutions to assess student learning and report the findings. Their call is to concentrate on what higher education can become (p. 17) and for the nation to give urgent attention to improving higher education (p. 29).

The audience for this report is unclear. I suppose it is any American who will listen. I read the report because it was mentioned by so many of the other authors in this bibliography (see Ewell, 2008, for what I feel is the best commentary on this report). This document forms an important impetus for, and component of, the complex assessment movement.

Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass. {Book}

I can sum up this book in four words: basic but very good. The author has built a how-to guide for institutions getting started with assessment. She begins with a foundational chapter that outlines some basic principles and perspectives on assessment. She argues for assessment as data collection for improvement instead of assessment for accountability. I happen to agree with this perspective rather strongly, so I began my read with a positive first impression. Like most other authors, Walvoord admits that accreditation and other external pressures exist and are likely driving some (or many) assessment projects. But, she argues, the real value of assessment is in student learning and success.

The author attempts to explain the differences between classroom assessment and program assessment. However, I feel she was not as complete on this issue as she could have been. In fairness, perhaps the lapse is simply because of the early release of this book. Few examples of success with integrating the two levels of assessment had emerged by 2004.

The author tried to make her perspectives on assessment explicit through what she called the themes of her book (p. 24):
1. Assessment may seem alien, but it can be scholarly in the best sense.
2. Assessment can provide a wiser basis for planning, budgeting, and change.
3. Assessment should be embedded into the basic activities of the institution.
4. Assessment can be applied to complex liberal arts learning goals, distance education, accelerated learning, etc.
5. Assessment is founded on three basic steps:
   a. Build on what you are already doing.
   b. Articulate the criteria for evaluating student work.
   c. Collect data on student strengths and weaknesses and then determine the causes for both.
6. Keep assessment simple. (p. 24)

I found much to like about this list. I've seen scholarly work in assessment, and I've also seen a few minor attempts at using data for budgeting and other management decisions. I strongly agree with the author about embedding assessment in coursework, but I argue for a different reason, based on an alternative attempt. At a large land-grant institution, they tried using an off-the-shelf exam with their seniors as a measure of attainment of their institutional outcomes. They invited seniors to come take the exam. Those who finished would receive a coupon for a pizza. Most students finished the four-hour exam in less than ten minutes and collected their coupons. Thus, it is essential that students attend to the assessments. The best way to do that is to make the assessment count as part of the grade. Better still, select a regular learning assignment from a course to use as the assessment tool. Theme number four is the primary thrust of the Association of American Colleges and Universities' LEAP program described within this bibliography (Leskes & Wright, 2005). Themes five and six are the essence of the author's step-by-step approach.

The author addressed many of the concerns faculty express about assessment and offers answers to each of them:
- Academic freedom: Walvoord states that good assessment does not ask faculty to give up their decision-making authority or to repress their knowledge or judgments (p. 8).
- Student privacy: She feels this is not an issue because the same student privacy rules apply to assessment data as to any other data collected from students.
- The real goals of higher education cannot be measured: The author points out that higher education goals can be measured, albeit with difficulty and with subjectivity. She points out that assessing students' work has always been a part of teaching (p. 9). She explains (as do measurement experts) that there is a middle ground between pure subjectivity and pure objectivity. The goal of good assessment is to get the best data possible.
- Assessment is really evaluation of faculty: This is a common argument that I've heard as well. The author uses the same approach I do: good assessment is aimed squarely at student learning and at improving that learning. Thus, the decisions and changes faculty make as a result of the data collected are the main focus. The author ends by admitting that assessment does bring in a measure of accountability that may not have been in place before, but that such accountability can be used to benefit the students, the faculty, and the institution (p. 9).
- Student learning is affected by factors beyond faculty control: While absolutely true, this is not a reason to avoid assessment. A good assessment program focuses on those factors you can control (p. 9).

The author created a generic set of steps that she adapts slightly for each level/type of assessment (institution, department, and general education):
- Embed assessment into existing institutional functions.
- Appoint a coordinator and/or establish responsibility for assessment.
- Plan and organize to allow for collaboration.
- Analyze tasks, audiences, and purposes.
- Articulate learning goals.
- Conduct an assessment audit to see what is already happening.
- Make a plan to fill all information needs based on existing materials and any new materials.
- Collect data.
- Use the data.
- Make recommendations for improving learning.
- Make recommendations for improving assessment.
- Write the report.
- Start the planning process over again for the next cycle, and keep going every year.
(These same steps are repeated in some form in every chapter.)

The steps listed above are generally good. I like the simplicity of the steps in the process. The author keeps the descriptions relatively brief and easy to understand. An institution might succeed if these steps were followed. BUT this book represents only one model for assessment and one method. Her model relies on higher-level program assessment methods focused on accountability. She also limits her model to embedded institutional functions such as strategic planning. Again, this might fit a given institution. But I am aware of many other models, and authors have recently begun publishing descriptions of them (see Maki, 2010, for an example). Maki (2010) shows that assessment must be designed to fit each institution's unique situation and that one size does not fit all. I agree that institutions should be very careful to consider their own situations and determine the best models and means for their own journey down the assessment path.

The audience is obviously practitioners. I strongly recommend this text as a good framework for understanding the basics of assessment in higher education. But the reader is again cautioned to look to other sources for alternative models and methods that may better fit their needs.

Wellman, J. V. (2010). Connecting the dots between learning and resources. (Occasional Paper No. 3). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved July 29, 2011. {Paper}

The author provides a discussion about funding for higher education. She argues that there has been a lot of work done on assessment but relatively little on the effects of funding on the quality of higher education. In the foreword, Peter Ewell explains that when college leaders are asked how much money they need to operate, the typical response is "more" or "as much as our peers." Little serious consideration usually is given to how much of what kind of resources the institution requires to reach or maintain a given level of output or performance (p. 4).

Wellman provides a list of issues related to the funding of higher education. In a time of diminishing budgets, not knowing what the thresholds are for maintaining quality learning seems out of step with reality. There has been little work done on cost effectiveness in higher education. Budget has been used as a surrogate for quality, thus justifying continuing cost hikes. Further, faculty tend to be focused on getting funding for their research at the expense of student learning. Thus, many call for a change in the tenure system to shift the emphasis from research to teaching and learning.

What little work has been done shows that community colleges tend to cost less per student. However, because they don't deliver many degrees, their costs per degree are quite high. Sciences and lab-based programs tend to cost the most. Languages and humanities tend to cost the least. The public tends to have negative views about how higher education uses their money.

A very important point the author offers is that research has thus far found no correlation between levels of spending and levels of learning. I don't find this surprising. However, research has found that how the funds are allocated makes a difference. For example, when a higher percentage of the funds went to student services and academics, students tended to perform better. Business has known for years that just throwing money at a problem is not a good solution.

The author's next issue found in the literature is something we have been discussing at our institution for some time. Research has found that the decline in student success at community colleges is due more to lower levels of high school preparation than to reduced funding. The opposite is true for selective institutions. Also common to our experience here was the literature showing that a majority of institutions have seen a sharp increase in the use of adjunct rather than tenured faculty. However, these studies have not tied this trend to learning quality. Rather, the research is dedicated to opinions about the effects of this trend on the professoriate (p. 12).

The author suggests that the few published ROI studies in higher education focus on spending per student in relation to improvements in student retention. She argues that true ROI studies would compare the costs of degree production to the earnings of college graduates, or some similar relationship. Another trend the author noted in the literature is the focus on learning productivity or efficiency. Here researchers look at the effects of lower-cost labor and resources on retention, degree attainment, time to degree, and reducing remedial education. The concepts in this issue seem very close to the ROI studies the author mentioned previously.

The author then offers a series of sweeping recommendations for improving the connection between learning and resources:
- Switch the model from allowing revenue availability to determine spending priorities to the other way around.
- Focus on how the funds are used; that is, aim at student services and academics.
- Change the financial aid rules to support the goal of student degree attainment rather than access.
- Stop allowing students to accumulate excess credits.

The author's final argument sounds much like arguments I've seen in other documents listed in this bibliography. She suggests that analysis of costs should be embedded in the ongoing evaluation of students. Student learning assessments have become too focused on compliance and do not yield results that are particularly helpful in making decisions about resources. Far too much cost analysis work yields accounting information rather than information about how well the resources are used for educating students. Other authors call for assessments that are not focused so much on compliance and for reframing the assessments to provide information useful for making decisions about learning. I feel that trying to do both is possible, but it is highly complex and perhaps beyond the ability of novice assessors. There would need to be very significant coordination between faculty and administrator needs, in very specifically defined terms, to have any hope of collecting data that would be useful for both learning and budgeting.

Wergin, J. F. (2003). Regional accreditation and student learning: Preparing teams for effective deliberation. Washington, D.C.: Council of Regional Accrediting Commissions. Retrieved July 19, 2011. {Paper}

This article was written at the behest of the Council of Regional Accrediting Commissions as a guide for developing training for accreditation site team members. The author points out that accreditation has become far more complex with the move to measuring outputs as opposed to inputs. Therefore, the article focuses on the skills needed to deal with the current complexities. The author begins with a discussion of deliberation as a key tool for the accreditation site team. The skills of a good deliberator include flexibility, tolerance, being a non-confrontational discussant, staying focused on what matters, and being able to move from the concrete (the standards) to the abstract (the institution's story).

The author digs deeply into four related but different possible roles (perhaps perspectives) that a given team member might take on:
- Examiner: an objective judge of the quality of the learning (academic quality); not a role commonly used in accreditation. The examiner's focus is compliance with standards.
- Auditor: an objective judge who reviews the processes believed to lead to good learning. The auditor's focus is similar to the examiner's, but the auditor looks at "what the institution knows about its students' learning and what the institution does with that knowledge" (p. 15).
- Advisor: an insider who knows the story and details and is there to help the institution improve. The advisor's focus is the unique challenges faced by the institution and what the institution might do to move forward.
- Critic: a relative outsider who connects all of the above and makes a holistic judgment about the quality of the institution. The critic's focus is on all of the above.

This article is written for leaders of accreditation organizations. However, reading it would certainly give a prospective site team member one way of looking at their upcoming role.

Wright, R. E. (2010). Standardized testing for outcome assessment: Analysis of the Educational Testing Systems MBA tests. College Student Journal, 44(1). Not available online. {Journal Article}

In this article the author reveals his unhappiness with a third-party exam. In the process he also reveals a lack of understanding of assessment purposes and practice. In his first argument, he suggests that multiple authors have concerns about the assessment of learning in higher education. I read two of his citations and looked at the abstracts of the others. I found that these authors' concerns had more to do with discussing what needed to be done to make assessment useful than with expressing dislike (e.g., Gibbs & Simpson, 2005, listed earlier in this bibliography). Further, one of his cited articles (Knight, 2001) is more a discussion of faculty-built assessments and a call for a return to documenting inputs than support for Wright's argument.

Wright's next two arguments suggest he is a novice in measurement and testing. He argues that an Educational Testing Service exam is passing students with a cut score of only 59% to 69%. This is completely possible and completely legitimate. He claims that, in reality, 95% of the MBA examinees are actually failing the exam because they are not scoring nearly well enough to matter. Here, Wright is trapped in the belief that 70% is a C, 80% is a B, and 90% is an A. I have built literally hundreds of professional exams. It is quite easy to build an exam that is extremely difficult, thus requiring a low cut score such as 59% to pass. It is just as easy to construct a very easy exam that would require a very high cut score such as 90% to pass. ETS uses two methods for generating cut scores that account for the difficulty of the particular items on each new exam they build. I have used the same legally defensible methods, resulting in cut scores ranging from the mid fifties to the high eighties.

The author's next argument is that faculty can do a better job of assessing their students by talking to them than any outside expert can with an exam. I cannot comment much on this argument because he does not expand on the claim enough for me to understand his reasoning. In a related concern, the author says that bringing in experts and their exams suggests faculty cannot do their jobs. He then discusses faculty fitness based on their credentials and "measures used over sixty years" (p. 145). Next he suggests that the ETS exams are simply inadequate in size or coverage, asking whether any test can properly assess the knowledge and information gained in a semester's worth of classes in 36 minutes (p. ). Of course, any measurement expert can tell you this is possible, but not easy. Thus, I tend to agree with this argument, but not with his conclusion that assessment should be tossed altogether. Rather, it argues for other types of assessment that get at all the important learning throughout the semester.

Wright moves on to point out that he does not get to see the questions on the exam, which he says is the reason the exam is not valuable to learning. Here, the author fails to differentiate between formative and summative measures. He also does not realize that ETS allows faculty to see the exams under controlled conditions; ETS proctors test review sessions for faculty regularly. What ETS will not do is send faculty a copy of the exam. Strangely, the author's next argument is that using the ETS exams is bad because faculty would find out what the questions were and would then adjust their curriculum to teach to the test. Finally, Wright argues that the ETS exams are bad because they counter the mission of the business accreditation organization. On this issue, I agree that there is a potential problem: to the extent that an external exam does not align, it should not be used.

This article is written for faculty members. Wright clearly has very negative feelings about external exams. In his case, I suspect he targeted the ETS exam because it was the instrument his institution selected. He reserved his most vigorous comments for his concerns about faculty evaluation, which suggests to me that he had a bad experience. Perhaps his institution used scores from the ETS exam inappropriately? I found this article quite useful because it illustrates some misconceptions some faculty have about good testing practice. However, the author makes broad and unsupported generalizations along the way, creating baggage the reader has to wade through, which makes this one of my less recommended articles, especially for those new to assessment and testing. There was one ironic twist to my reading of this document: the author's citation of the Gibbs & Simpson (2005) article revealed one of the best articles on connecting assessment to real learning.
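The cut-score point above may be easier to see with a worked illustration. The sketch below is not drawn from Wright's article or from ETS documentation; it assumes a modified Angoff-style standard setting, one common and legally defensible method, and the panelist ratings are hypothetical, invented purely to show how item difficulty drives the percentage needed to pass.

```python
# Illustrative sketch only: a modified Angoff-style cut score calculation.
# Each panelist estimates, for each item, the probability that a minimally
# competent examinee answers it correctly. Ratings below are hypothetical.
panelist_ratings = [
    [0.55, 0.40, 0.70, 0.65, 0.50, 0.60, 0.45, 0.75, 0.55, 0.60],  # panelist 1
    [0.60, 0.45, 0.65, 0.60, 0.55, 0.65, 0.50, 0.70, 0.50, 0.65],  # panelist 2
    [0.50, 0.50, 0.75, 0.70, 0.45, 0.55, 0.55, 0.80, 0.60, 0.55],  # panelist 3
]

num_items = len(panelist_ratings[0])

# Each panelist's recommended raw cut score is the sum of their item ratings.
panelist_cuts = [sum(ratings) for ratings in panelist_ratings]

# The exam's cut score is the mean of the panelists' recommendations,
# expressed here as a percentage of the total items.
cut_score_pct = 100 * (sum(panelist_cuts) / len(panelist_cuts)) / num_items

print(f"Recommended cut score: {cut_score_pct:.1f}% of items correct")
# This hypothetical panel recommends about 58.5%, showing how a rigorous
# procedure can legitimately yield a cut score near 59% when items are hard;
# easier items would push the same calculation toward a much higher cut score.
```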

Zane, T. W., Schnitz, J. W., and Abel, M. H. (2010). Western Governors University: A radical model for preservice teacher education. In L. B. Erickson and N. Wentworth (Eds.), Tensions in teacher preparation: Accountability, assessment, and accreditation. United Kingdom: Emerald. {Chapter in a Book}

This chapter describes the Western Governors University (WGU) model for delivering higher education. It focuses on how the differences in the WGU model created some tensions with accreditation but resolved tensions common at other higher education institutions. The authors discuss how a very detailed (and preexisting) assessment system, with centralized management of all assessment data, made that part of the accreditation experience quite easy. The audience for this book is any faculty member or higher education administrator involved with accreditation. Although the chapter focuses on specialized accreditation for teacher education, the issues are nearly identical for regional accreditation. I recommend this chapter because it outlines some key issues that can be factors in successful accreditation.
