2015 Texas Public School Rankings Methodology
April 2015
Robert Sanborn, Ed.D.
LaPorcha Carter, MPH
Jesus Davila, MPP
Torey Tipton, MPA
I. Introduction

A. About CHILDREN AT RISK
CHILDREN AT RISK is a 501(c)(3) non-profit, non-partisan research and advocacy organization dedicated to addressing the root causes of poor public policies affecting children. The organization began in the fall of 1989 when a group of child advocates met to discuss the lack of documentation on the status of children and the absence of strong public policy support for youth. Over the course of two decades, CHILDREN AT RISK has evolved from an organization researching the multitude of obstacles our children face to one that also drives macro-level change to better the future of our city and state through community education, collaborative action, evidence-based public policy, and advocating for our youth at the local and state level. Through its Public Policy and Law Center, established in 2006 as the only center of its kind in Texas, CHILDREN AT RISK uses policy and legal expertise as a powerful tool to drive change and create a better future for our children. In recent years, CHILDREN AT RISK has grown exponentially in its capacity to speak out and drive change for children and has become a premier resource on children's issues among major media outlets, public officials, and the non-profit sector. Today, the mission of CHILDREN AT RISK is to improve the quality of life for children across Texas through strategic research, public policy analysis, education, collaboration, and advocacy. Since 2006, the school ranking system developed by CHILDREN AT RISK has highlighted the successes and need for improvement of local public schools. As a research and advocacy organization, the purpose of the rankings is not only to provide a tool to parents and students, but also to provide information to campuses and districts on how they perform relative to their peers, comparing them against successful models of high-performing public schools.
In 2009, CHILDREN AT RISK began to include all eligible high schools in the state of Texas and extended the ranking system to include eligible elementary and middle school campuses. Thus far, the CHILDREN AT RISK rankings have proven to be instrumental in generating conversations among educators and the public regarding methods for improving our public education system. In addition, the School Rankings aim to:
- Serve as an accessible guide for parents, educators, and community members on the performance of local schools.
- Generate conversations not just about the data used in the ranking, but around how schools and districts are performing overall in creating college-ready students.
- Be transparent. Research is strongest when it is made available to the public and open to scrutiny. Thus, discussion can be generated, the ranking methodology can be improved upon, and all districts can utilize this avenue of assessing campuses.
- Encourage the use of data in public school reform. The rankings have successfully encouraged data analysis at the campus and district level, targeted school intervention, aided teacher and staff professional development, allocated funds to better serve children, and promoted changes in strategic planning.
Each year, CHILDREN AT RISK reexamines its methodology of ranking schools to ensure that this report most accurately reflects school performance, utilizes the best data available, and incorporates feedback from educators, researchers, and service providers. Because CHILDREN AT RISK continually improves the methodology, this year's School Rankings are not directly comparable to previous years' results.

B. Collaboration
Portions of the data analysis for the 2015 School Rankings were done in conjunction with Dr. Lori Taylor of Texas A&M University.

II. Methods

A. School Ranking Methodology
All campuses in Texas are ranked across three indices: Student Achievement, Campus Performance, and Growth. Additionally, high schools are also ranked by the College Readiness Index. Within each index, a weighted index score was calculated for each campus. Using these index scores, a weighted average was computed to create an overall composite index. A state or local rank was determined as the order in which campuses are listed when the weighted composite indices are sorted from highest to lowest, relative to other schools serving the same grade levels (i.e., elementary, middle, high).

B. Indices

Student Achievement Index
The Student Achievement Index reflects raw performance in key achievement areas. The Student Achievement indicators and their weights are as follows:

ELEMENTARY AND MIDDLE SCHOOL STUDENT ACHIEVEMENT INDICATORS
STAAR Reading - Advanced: 50%
STAAR Math - Advanced: 50%

HIGH SCHOOL STUDENT ACHIEVEMENT INDICATORS
STAAR Reading - Advanced: 50%
STAAR Math - Advanced: 50%
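The index construction described above can be sketched as a small calculation: each indicator is converted to a percentile rank, indicators are combined with their weights into an index score, and index scores are combined with their weights into the Composite Index. This is an illustrative reconstruction, not CHILDREN AT RISK's actual code; the example numbers are hypothetical, and the 60/20/20 index weights are the elementary/middle weights listed in Appendix I.

```python
def percentile_ranks(scores):
    """Percentage of scores at or below each score (0-100 scale)."""
    n = len(scores)
    return [100.0 * sum(1 for s in scores if s <= x) / n for x in scores]

def weighted_index(components, weights):
    """Weighted average, used both for indicators -> index and indices -> composite."""
    return sum(c * w for c, w in zip(components, weights))

# Hypothetical elementary campus: three index scores on a 0-100 percentile
# scale, combined 60% Achievement / 20% Performance / 20% Growth (Appendix I).
composite = weighted_index([82.0, 64.0, 71.0], [0.60, 0.20, 0.20])  # 76.2
```

Campuses would then be sorted by this composite, highest to lowest, to produce the state or local rank.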
For each indicator (e.g., STAAR Reading - Advanced), campuses were ordered highest to lowest by their score and a percentile rank was calculated. The percentile rank indicates the percentage of scores that fall at or below that score. Each indicator, as demonstrated above, has a predetermined weight, and the weighted average of these percentiles is the Student Achievement Index. For more detailed definitions of achievement indicators, please see Section IV.

Campus Performance Index
The Campus Performance Index captures performance on the Student Achievement indicators using demographically adjusted values. Raw academic measurements, such as those in the Student Achievement Index, have a bias toward campuses with a low percentage of economically disadvantaged students. The Campus Performance Index is created to measure the effectiveness of the people and programs at a campus independent of the differences in student demographics. Thus, the Campus Performance Index utilizes linear regression analysis to model the relationship between the percentage of economically disadvantaged students and performance on the indicators used for the Student Achievement Index. Using the regression analysis, each campus's deviation from its expected score was calculated. Deviation from expected value is defined as the difference between the actual pass rate of a campus and its forecasted performance as defined by the regression line in this analysis. Each campus received a positive or negative deviation score, based on the difference between its actual score and the expected score. Campuses were ordered highest to lowest by their deviation score and a percentile rank was calculated. The percentile rank indicates the percentage of scores that fall at or below that score. Each indicator had the same pre-determined weight as in the Student Achievement Index, and the weighted average of these percentiles is the Campus Performance Index.
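A minimal sketch of the demographic adjustment just described, assuming ordinary least squares (the report says only "linear regression"): regress the indicator on the percent of economically disadvantaged students, then score each campus by its deviation (residual) from the fitted line. Function names and data are ours, for illustration only.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def deviations(pct_econ_dis, scores):
    """Actual score minus the score the regression line predicts for that campus."""
    a, b = fit_line(pct_econ_dis, scores)
    return [s - (a + b * x) for x, s in zip(pct_econ_dis, scores)]
```

A campus whose deviation is positive outperformed what its demographics would predict; those deviations, not the raw scores, are then percentile-ranked.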
Growth Index
The Growth Index captures improvement over time in standardized test scores. The Growth Index is composed of gain scores in math and reading, which measure student-level performance relative to a student's test-score peers. A student's test-score peers are all the students statewide who took the same subject-matter test last year and posted the same score (at the same grade level). Thus, the peer group for a 6th grade math student who scored a 20 on the 5th grade math test is everyone statewide who also scored a 20 on the 5th grade math test. The year-to-year difference was standardized, and then transformed into a normal curve equivalent score for ease of interpretation. A normal curve equivalent score is a version of a standardized score that can be interpreted as a percentile rank in a normal distribution. The campus-level scores are averages of the student-level gains from one year to the next, where a student with an average gain among students who had the same prior score as she did would have a normal curve equivalent gain of 50. The exams included in the gain score calculations are: STAAR Reading, English I EOC, English II EOC, STAAR Math, and Algebra I EOC. In a very limited number of cases where the gain score
was unable to be calculated, the percentile rank for the percentage of students at Final Level II or above on STAAR Math/Reading was substituted. For each indicator (e.g., reading gain score), campuses were ordered highest to lowest by their score and a percentile rank was calculated. The percentile rank indicates the percentage of scores that fall at or below that score. The average of the two percentiles became the Growth Index.

College Readiness Index (High School Rankings Only)
The College Readiness Index measures high school students' college readiness. Included in this index are the CHILDREN AT RISK graduation rate, the participation rates for SAT/ACT and AP/IB exams, average SAT and ACT scores, and the percent of examinees above the AP/IB criterion score. As with other indices, each indicator was ranked lowest to highest and transformed into a percentile rank. Each indicator has a pre-determined weight, and the weighted average of these indicators became the College Readiness Index. For a complete breakdown of each index and its weights, please see Appendix I.

C. Letter Grades
All campuses were assigned a letter grade based on their Composite Index. For elementary, middle, and high schools:
- Campuses at or above the 75th percentile (indicating they rank better than at least 75% of schools at that grade level) received an A.
- Campuses at or above the 55th percentile, but below the 75th percentile, received a B.
- Campuses at or above the 35th percentile, but below the 55th percentile, received a C.
- Campuses at or above the 15th percentile, but below the 35th percentile, received a D.
- Campuses below the 15th percentile received an F.
Once campuses were assigned a general letter grade, A, B, and C grades were further differentiated into plus/minus grades. The range of Composite Index scores for each letter grade was divided evenly into thirds.
The top third of Composite Index scores became plus grades and the bottom third of Composite Index scores became minus grades. The cut-points are different for elementary, middle, and high schools because they are based on the unique sample of scores for this year's schools at each level.
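The grading scheme above can be sketched as follows. The percentile thresholds come from Section II.C; the function names, and the exact boundary handling within each third, are our own reading rather than a published specification.

```python
def base_grade(pct):
    """Letter grade from a campus's Composite Index percentile (Section II.C)."""
    if pct >= 75: return "A"
    if pct >= 55: return "B"
    if pct >= 35: return "C"
    if pct >= 15: return "D"
    return "F"

def plus_minus(grade, score, lo, hi):
    """Refine A/B/C using [lo, hi], the range of Composite Index scores that
    earned this grade at this level: top third gets '+', bottom third '-'."""
    if grade not in ("A", "B", "C"):
        return grade
    third = (hi - lo) / 3.0
    if score >= hi - third:
        return grade + "+"
    if score < lo + third:
        return grade + "-"
    return grade
```

Because `lo` and `hi` come from each year's sample of schools at each level, the plus/minus cut-points differ across elementary, middle, and high school lists, as the text notes.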
The cutoff percentiles for each letter grade at each level are included in the table below.

[Table: minimum Composite Index percentile for each letter grade (A through D, with plus/minus divisions) for elementary, middle, and high schools; at all three levels, an F indicates a score below the 15th percentile.]

III. Data and Limitations

A. Overview of School Rankings Data
To rank public schools across Texas, CHILDREN AT RISK compiles and analyzes data collected by the Texas Education Agency (TEA) through the Texas Academic Performance Reports system (formerly the Academic Excellence Indicator System) and direct requests to TEA. We would especially like to thank Perry Weirich of TEA for his tireless efforts to ensure we received the data to complete this analysis. CHILDREN AT RISK emphasizes utilizing an array of indicators, which encourages a holistic examination of school quality when evaluating campuses. CHILDREN AT RISK seeks to hold schools accountable for student performance on standardized testing in addition to other measures such as graduation rates, college readiness, and improvement over time. The included indicators, and the weights applied to each indicator in the CHILDREN AT RISK ranking calculation, were determined by staff members and influential members of the education community. More weight is given to indicators that better predict college success, based on a growing body of research.
1. Grade Ranges
With regard to the various possible grade ranges, CHILDREN AT RISK used a consistent logic to determine at what level (elementary, middle, high) to rank schools. For the general constructs of levels, we used traditional grade ranges:
- EE-5 is elementary
- 6-8 is middle
- 9-12 is high school
However, many schools fall outside of these criteria. In response, CHILDREN AT RISK employed a systematic way to ensure we provide the most comprehensive picture of school performance, including as many schools as possible without muddling the issue. Schools were ranked in each category for which their data was complete for that particular range. For example, an EE-8th grade school would be ranked at both the elementary and middle school levels. However, a 4th-8th grade school would only be ranked as a middle school, as its data for elementary would not be complete. This does lead to some discrepancies, because schools will have a different rank in each of the categories in which they are analyzed. While we recognize this as a limitation, CHILDREN AT RISK's goal is to provide parents and stakeholders with the most complete picture of school performance possible. For a complete list of possible grade ranges, please see Appendix II.

B. Missing Data and Excluded Schools
For a school to be included in the school rankings, a campus must have complete data from the Texas Education Agency (TEA) for each of the indicators included in the analysis. Any campus that was missing one or more data points was excluded from the analysis. Campuses under an alternative accountability system from TEA (i.e., disciplinary sites) and campuses with fewer than 150 students enrolled were excluded from the rankings.
Campuses confirmed to be undergoing a state or district investigation were also excluded.

9th Grade Centers
Recently, there has been an increase in campuses that serve 9th graders who transition into a different senior high school upon completion of 9th grade. In an effort to be inclusive and rank as many schools as possible, CHILDREN AT RISK merged these campuses with their respective feeder senior high schools. This was done to prevent dropping senior high schools due to missing data needed to calculate the graduation rate. This was only done for campuses
where there was a direct feeder pattern from the 9th grade center to the associated senior high school campus.

C. District Rankings
District rankings were compiled from all the schools that were ranked within the district. The composite scores for each school were weighted by population, both at the school level and at the school type level (e.g., elementary, middle, high). Charter districts and districts with fewer than 150 students enrolled were excluded. Large districts were defined as those with 30,000 or more students enrolled. A-F grades mirror the

D. Geographic Sub-Lists
CHILDREN AT RISK rankings are computed at the elementary, middle, and high school levels across the state of Texas. The School Rankings analysis is conducted at the state level before campuses are extracted to rank schools in smaller geographic areas (i.e., Houston, North Texas, Austin, and San Antonio). The Greater Houston area is defined as the following eight counties: Brazoria, Chambers, Fort Bend, Galveston, Harris, Liberty, Montgomery, and Waller. The North Texas area includes the following nine counties: Collin, Dallas, Denton, Ellis, Hunt, Johnson, Kaufman, Rockwall, and Tarrant. The Greater Austin area includes seven counties: Bastrop, Blanco, Burnet, Caldwell, Hays, Travis, and Williamson. Finally, Greater San Antonio is defined as the following six counties: Atascosa, Bandera, Bexar, Comal, Guadalupe, and Medina.

E. Explanation of Sub-Lists
Peer Lists: These lists are composed of schools that meet certain criteria along key socioeconomic indicators. They are intended to show how schools stack up against their peers: campuses that are similar in terms of student populations.
The indicators used to determine peers are:

PEER RANKINGS INDICATORS
Category: Poverty
Indicator: High % Economically Disadvantaged
Cut-off: One standard deviation above the mean Economically Disadvantaged % across the state (Elementary: 91.42%; Middle: 85.18%; High: 75.48%)

High designations of poverty and minority populations were based on statistics for the sample of schools included in the school rankings. For each indicator, the mean and standard deviation were calculated for included schools at each level. Schools were placed in the high range if their data point for the specific indicator fell at least one standard deviation above the population mean. Separate cut-offs were calculated for high schools, middle schools, and elementary schools.

Gold Ribbon Schools: The top schools in the CHILDREN AT RISK rankings that house a high concentration of economically disadvantaged students (based on the Economically Disadvantaged Peer List), received an A or B in the 2015 rankings, and were a typical ISD school (not a magnet or charter school).

F. Study Limitations
There are numerous factors that affect the success of children and schools. Research shows some of the biggest factors for student success are parental involvement, social and emotional development, participation in extracurricular activities, teacher and parent expectations of students, and engaging class work that stimulates critical thinking. However, there is no standard measure for any of these constructs, and it would be particularly difficult to collect these data efficiently and consistently for ranking over 7,000 schools. Another constraint is CHILDREN AT RISK's dependence on data collected by the Texas Education Agency (TEA). Thus, the limitations posed by TEA data are valid criticisms for this school ranking system. Any erroneous data reported to or by TEA may have an effect on the rankings. This includes the gain score data; CHILDREN AT RISK has no way to verify the veracity of reported gains and must trust schools' reporting. Additionally, the CHILDREN AT RISK ranking is limited to campuses that have complete data available through TEA for all measures included in the ranking. One final limitation that is important to note is that all testing data is aggregated for each school, not by grade level. Essentially, this means that all the data for a particular campus is collapsed into one count of students performing at Level III. So, all the reading data for an elementary school would be collapsed into one count even though students take the exam annually.

IV. Achievement Indicator Definitions

A. Elementary and Middle School Indicators
STAAR Reading - Advanced: The percentage of students scoring at Level III Advanced on the STAAR Reading exam (sum of all grades tested). This standard indicates that students are well prepared for postsecondary success.
STAAR Math - Advanced: The percentage of students scoring at Level III Advanced on the STAAR Math exam (sum of all grades tested). This standard indicates that students are well prepared for postsecondary success.
B. High School Indicators
STAAR Reading - Advanced: The percentage of students scoring at Level III Advanced on the STAAR English I and English II exams (sum of all grades tested). This standard indicates that students are well prepared for postsecondary success.
STAAR Math - Advanced: The percentage of students scoring at Level III Advanced on the STAAR Algebra I exam (sum of all grades tested). This standard indicates that students are well prepared for postsecondary success.
Graduation Rate: The four-, five-, or six-year graduation rate calculated by CHILDREN AT RISK, tracking first-time freshmen entering in , , and , and following them to any Texas public high school for six, five, or four years, respectively. In short, this rate is the percentage of students that graduated by 2012 from any Texas public high school. Any students who passed away or began home schooling within four, five, or six years of beginning high school were removed from the first-time freshman cohort. The cohort of first-time freshmen used in this calculation is built by the Texas Education Agency using the same rules the Agency applies for the cohort used for its own Graduation, Dropout, and Completion Rates for state and federal accountability purposes. First-time freshmen are those students enrolled in ninth grade for the first time, looking back at five years of enrollment data in the Public Education Information Management System (PEIMS); each student is attributed to the campus he or she most recently attended. For each campus, the highest value among the four-, five-, and six-year graduation rates was selected for inclusion in the rankings analysis. This method does not account for the small percentage of students who transfer to private school or leave the state/country after starting high school as freshmen. Due to TEA's need to protect student privacy, some student-level data was hidden, or masked.
Schools were given the maximum benefit of the doubt when filling in masked data: a 4 was substituted for missing total graduate numbers, for the total number of students who began home schooling, or for the total number of student deaths; a 1 was substituted for missing first-time freshman cohort numbers. At times, these substitutions resulted in a graduation rate greater than 100%. In these cases, a school's final graduation rate was rounded down, and the school was credited with a 100% graduation rate.
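The masked-data handling above can be sketched as follows. The substitution values (4 for masked graduate, home-school, and death counts; 1 for a masked cohort count) and the 100% cap come from the text; the function signature, the use of `None` for masked values, and the exact arithmetic for adjusting the cohort are our own illustrative assumptions.

```python
def grad_rate(graduates, cohort, home_schooled, deaths):
    """Percent of the adjusted first-time-freshman cohort that graduated,
    with benefit-of-the-doubt substitutes for masked (None) counts."""
    graduates = 4 if graduates is None else graduates        # masked -> 4
    home_schooled = 4 if home_schooled is None else home_schooled
    deaths = 4 if deaths is None else deaths
    cohort = 1 if cohort is None else cohort                 # masked -> 1
    adjusted = cohort - home_schooled - deaths               # removed from cohort
    rate = 100.0 * graduates / adjusted
    return min(rate, 100.0)                                  # cap at 100%

# The rankings then keep the best of the four-, five-, and six-year rates:
best = max(grad_rate(95, 101, 0, 1), grad_rate(96, 101, 0, 1))  # 96.0
```

Note how a masked cohort count can push the raw rate far above 100%, which is exactly the case the capping rule addresses.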
Appendix I
Tables of Indicators for Public School Rankings

ELEMENTARY AND MIDDLE SCHOOLS
Student Achievement Index (60% of Composite)
- STAAR Reading - Advanced: 50%
- STAAR Math - Advanced: 50%
Campus Performance Index (20% of Composite)
- STAAR Reading - Advanced (Demographically Adjusted): 50%
- STAAR Math - Advanced (Demographically Adjusted): 50%
Growth Index (20% of Composite)
- Reading Gain Score: 50%
- Math Gain Score: 50%

HIGH SCHOOLS
Student Achievement Index (30% of Composite)
- STAAR Reading - Advanced: 50%
- STAAR Math - Advanced: 50%
Campus Performance Index (20% of Composite)
- STAAR Reading - Advanced (Demographically Adjusted): 50%
- STAAR Math - Advanced (Demographically Adjusted): 50%
Growth Index (20% of Composite)
- Reading Gain Score: 50%
- Math Gain Score: 50%
College Readiness Index (30% of Composite)
- Graduation Rate: 60%
- SAT/ACT Participation Rate: 10%
- AP/IB Participation Rate: 10%
- % Examinees Above AP/IB Criterion Score: 10%
- Average ACT Composite Score: 5%
- Average SAT Total Score: 5%
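As one worked example of the Appendix I weighting, here is a sketch of the high school College Readiness Index, taking each indicator's percentile rank (0-100) as input. The weights are those listed above; the dictionary keys and function name are ours.

```python
# College Readiness Index weights for high schools (Appendix I).
CR_WEIGHTS = {
    "graduation_rate": 0.60,
    "sat_act_participation": 0.10,
    "ap_ib_participation": 0.10,
    "ap_ib_above_criterion": 0.10,
    "avg_act": 0.05,
    "avg_sat": 0.05,
}

def college_readiness(percentiles):
    """Weighted average of the six indicator percentile ranks (0-100)."""
    return sum(CR_WEIGHTS[k] * percentiles[k] for k in CR_WEIGHTS)
```

The same weighted-average pattern applies to the other indices, with the two-indicator 50/50 splits shown above.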
Appendix II
Possible Grade Ranges

ELEMENTARY (Low Grade - High Grade)
EE-5, EE-4, EE-3, EE-2, KG-5, KG-4, PK-5, PK-3, PK, EE-6, EE-7, KG-7, KG-6, PK-6, PK-7, KG, EE-8, KG-8, PK-8, EE-10, PK, EE-12, PK-12, KG-12, KG-9
MIDDLE (Low Grade - High Grade)
EE, EE-8, PK-8, KG-8, EE-9, PK-9, KG-9, EE, EE-12, PK-12, KG-12

HIGH (Low Grade - High Grade)
EE-12, KG-12, PK