Woodring College of Education
Preparing thoughtful, knowledgeable, and effective educators for a diverse society

Closing the Assessment Loop Report 2010
Department of Elementary Education
Elementary Education, TESOL, and MEd in Literacy Programs

"A central challenge of any renewal effort is designing extended, authentic, and meaningful activities by which faculty can engage with and consider the implications of aggregated data. If the first steps in this process include gathering useful data that has value to our internal audience, an equally important task is to build a process by which our faculty of experienced and successful educators might make thoughtful use of the available information."
Lit, Nager, & Snyder, "If it ain't broke, why fix it?" Teacher Education Quarterly, 37(1), Winter 2010

Introduction

The Department of Elementary Education regularly reviews and discusses data to improve programs and courses. The venues in which these data are discussed include regular faculty meetings, faculty committee meetings (including the newly established Curriculum and Assessment Committee), and departmental retreats. For example, a synthesis of the data in this report was shared at a departmental faculty meeting on November 14, 2010. Our goal is to close the assessment loop: to use data to drive program improvement initiatives. This report summarizes department-level discussions of assessment data and uses of data for documenting candidate performance and improving the quality of programs and operations. In 2010, the members of our program continued a process begun several years previously: comparing and analyzing data over time, conducting queries of the database targeted to reveal program trends and needs for curriculum change, and seeking to determine the reliability of our various assessments of student progress.
The ultimate goal of these efforts is to create coherence in our students' experience across their coursework and field placements throughout the program. Section 1 of this report features a two-table overview of our program assessment system across five transition points:

Transition Point 1: Program Admission
Transition Point 2: Program Retention/Mid-Program
Transition Point 3: Yearlong Internship
Transition Point 4: Program Completion
Transition Point 5: Additional Endorsements (Reading, English Language Learners (ELL), Bilingual Education (BE))

Table 1 identifies currently implemented Key Assessments; Table 2 identifies targets for additional data collection and analysis. Section 2 features a descriptive synthesis of three key assessments analyzed in the past year and the changes discussed and initiated as a result of considering these data.
Table 1: ELED Program Assessment System: Key Assessments of Student Progress
(* indicates assessments summarized in Section 2 of this report)

TP 1: ADMISSION
Key Assessments & Data Collected: *Application to ELED program; admissions summary data; admissions demographic data
Action based on analysis of data: Formation of ELED Recruitment/Retention Committee to revise the application process

TP 2: RETENTION
Key Assessments & Data Collected: ELED 370; Practicum Assessment: CT & candidate self-assessment of teaching and professional behaviors
Action based on analysis of data: Identify candidates who need additional support/remediation at the beginning stage of their program

TP 3: YEARLONG INTERNSHIP
Key Assessments & Data Collected: ELED 470 (1st quarter of internship); *ELED 471 (2nd quarter of internship; qualification for 3rd-quarter full-time internship); Practicum Assessment: CT & candidate self-assessment of teaching and professional behaviors
Action based on analysis of data: Identify candidates who need additional support/remediation at the beginning of the internship year

TP 4: COMPLETION
Key Assessments & Data Collected: *Internship Survey; Teacher Work Sample (TWS): analysis of TWS scores across instructors; West-E pass rates; Teacher Education Internship Survey: graduates provide program survey data upon completing the program
Actions based on analysis of data: Inter-rater reliability study and focus groups on TWS evaluation; discussion of possible reasons why a handful of students have difficulty passing the West-E and measures that might be taken earlier to identify and support them; formation of ELED subcommittees focused on student development, curriculum & assessment, and literacy (includes the M.Ed. graduate program & the LLC major)
Table 2: Targets for additional program data collection and analysis in 2011

ELED 480
Data Collected: Practicum observation and writing proficiency: one-on-one candidate/student observations, written work
Action based on analysis of data: Will provide an early indicator of candidate strengths/challenges in the early methods course and indicate where candidates need more support

ELED 481
Data Collected: Practicum observation: lesson plans, written analyses, and small-group practicum observations
Action based on analysis of data: Will serve as gatekeeper for entry to the ELED internship

ELED 481, ELED 471
Data Collected: Writing proficiency: writing projects, lesson plans, and TWS
Action based on analysis of data: Will assess candidates' writing proficiency

SCED 480, 490
Data Collected: Curriculum Study assignment (SCED 480); Lesson Plan assignment (SCED 490)
Action based on analysis of data: Will assess candidates' subject matter content knowledge and pedagogical content knowledge

ELED 425, Quarter 1
Data Collected: Social Studies Unit: unit plan, Standard V.3.d (sustainability) rubric row
Action based on analysis of data: Will assess candidates' proficiency toward the sustainability standard and how to better integrate sustainability into curriculum & assessment

ELED 470, Quarter 1
Data Collected: UbD Unit: rubric rows assessing competencies foundational for the TWS
Action based on analysis of data: Support/remediate teacher candidates in preparation for internship quarter 2

ELED 492, Quarter 2
Data Collected: WIS observational data
Action based on analysis of data: Analyze instructors' scores and discuss observation practices to achieve better inter-rater reliability and prepare for the implementation of the required Washington State Teacher Performance Assessment in 2011

Internship Year, Quarters 1, 2, 3
Data Collected: Pilot Online Mentoring Modules (for mentors/interns): analyze candidate mentoring during/after completion of the online modules
Action based on analysis of data: Determine how best to extend the mentoring modules to the entire ELED program during internship, and the possible effects of enhanced mentoring practices

TP 5: ADDITIONAL ENDORSEMENTS
Reading: ELED 456/518 (or previous 497n/538), Literacy Assessment
ELL: TESL 420 (or previous 420a), Basic Materials/Methods
BE: TESL 425 (or previous 497a), Methods/Programming
Data Collected: PRAXIS II endorsement tests; SIOP and TWIOP program-level assessment data from practica in Reading, ELL & BE
Action based on analysis of data: Document candidate proficiency for endorsement; discussion of possible reasons why a handful of students have difficulty passing endorsement tests
Section 2: Use of Data for Improvements of Programs and/or Operations

Key Assessment #1: IDES Teacher Education Internship Survey

The IDES survey was returned with a 53% response rate. The survey, completed by candidates finishing the program, assessed program elements such as the program's practica, issues of exceptionality, language and cultural diversity, alignment with Standard V, the faculty's ability to integrate research into practice, the faculty's modeling of best practices, and advising. At the annual faculty retreat in September, these data were shared with faculty and resulted in the formation of ELED subcommittees focused on student development, curriculum and assessment, and literacy (including the LLC major and the M.Ed. graduate program).

An analysis of the data revealed that the ELED program is doing quite well in many areas. Particular areas of strength include the connections to Standard V and the fostering of critical and analytical thinking throughout the program. The most prominent area of weakness was advising, the lowest-scoring survey category (3.15 out of 5). The program identified advising as a primary target for improvement. As a result, in January 2011 the ELED program is instituting a new advising process. Faculty members and incoming candidates will attend an initial orientation session on the ELED program, where candidates meet their faculty advisors. Following this orientation, monthly drop-in advising sessions will be held for candidates (in addition to as-needed individual appointments) to inform students of upcoming program requirements, deal with registration issues, and provide advising toward successful program completion. Another measure was undertaken to improve communication with students, which is itself an aspect of advising.
The ELED Chair or Program Manager will now attend Office of Field Experience and Admissions orientations to ensure that all communications coming from within the department and from these other WCE offices are consistent regarding internship requirements and policies. The intent of this new ELED policy is to enhance communication throughout the program and better support candidates during the internship.

Key Assessment #2: TWS Inter-Rater Reliability

An important scoring reliability study of the Teacher Work Sample was undertaken. The Teacher Work Sample, or TWS, is the capstone project for the Elementary Education program. Scores from faculty members had been entered into the Woodring Information System since the fall of 2007. In 2008 and 2009, analysis reports were completed to identify areas for course and program improvement; changes were made in ELED 471 learning activities and in the Teacher Work Sample Handbook as a result of these analyses. In 2009, a comparative analysis of year 1 and year 2 data was conducted, and a group of ELED faculty subsequently discussed these data, considering the meaning of score trends. However, no reliability study had yet been done to compare the scores of different faculty evaluators and the implicit criteria, or lenses, that faculty use to score work samples.

During the summer and fall of 2010, a study funded by the WCE Assessment Committee was undertaken. The goals of this study were to:
a) achieve a better understanding of where the assessment of work samples by different faculty was consistent on the scoring rubrics and where it was inconsistent,
b) identify how faculty interpret particular TWS indicators, and
c) determine how to better prepare current and future faculty to reliably evaluate the TWS and other program-level assessments.
Three faculty members used the TWS rubrics to evaluate six TWS samples, two of which had previously been evaluated by each of the three instructors. A detailed analysis later compared the evaluation of each TWS section in relation to the rubrics, determining where the numerical ratings were consistent or inconsistent and comparing any qualitative comments. In fall 2010, the faculty met for two focus groups (October and November) to examine the data analysis, compare and contrast the similarities and differences in evaluations, discuss the general lenses each faculty member used for TWS evaluation, and discuss the implications for inter-rater reliability and potential needs for further professional development.

Findings from the data analysis included:
- Consistent alignment among instructors' evaluations in most rubric rows, especially in TWS Sections II (Learning Goals) and VIII (Written Communication).
- Greater variability among evaluators in TWS Sections I (Contextual Factors) and IV (Design for Instruction). The focus group sessions revealed that these sections had greater variability because each instructor made his or her particular expectations explicit to students, but not necessarily to the other instructors; there was no specific process for sharing expectations and samples with one another.

As a result of these findings, the following recommendations were agreed upon:
- TWS instructors ought to systematically discuss the dimensions of each rubric, identifying the expectations they may have communicated to their students.
- More professional development ought to be done to illuminate the dimensions of each rubric row, since each instructor has specific professional strengths that can influence his or her capacity to consistently evaluate students' work against the rubric indicators.
- Instructors ought to evaluate each other's TWSs (or at least a sampling of them) in order to tease out particular priorities and strive for greater reliability.

This study is timely, as the TWS teaching personnel are likely to change markedly in coming quarters, and there is a strong need to understand how we might better prepare TWS evaluators to evaluate work samples reliably.

Key Assessment #3: Admissions Data

The first round of applications received this year, for admission to Winter Quarter 2011, showed an encouraging increase in the proportion of students of color applying to the elementary education program. Of 80 applicants, 17 were students of color (21.25%), compared with 14.1% (36 of 254) for elementary education across all of last year. The Early Childhood cohort for 2010-11 was even more striking: 10 of 27 applicants (37%) were students of color.

The Student Development Subcommittee has been at work revising the application process for the Elementary program. We reviewed the work of the college-wide Applications Work Group from last summer, which proposed a series of potential essay questions and a draft rubric for scoring them. We found the questions potentially interesting, but we are considering some adjustments or substitutions. We also considered what else should be part of the application process and conducted an online investigation of application processes at other institutions. We had to interrupt this investigation in order to review the 80 applications submitted for Winter Quarter. Although we had hoped to design a new application process in time for spring admissions, we felt that we needed more time to better understand the apparent recent success of our recruitment efforts.
Therefore, we developed a Departmental Action Project that will involve interviewing a variety of underrepresented students who have recently shown interest in or applied to our program. We hope to use data from that process to inform the revision of our application process in time for fall admissions.