How we approached this third national study
The data sources and analytical techniques used in this study include a unique approach to creating a “virtual twin” for each charter school student in the dataset, as well as the methods we use to analyze the progress and outcomes of charter school students and their counterparts in traditional public schools (TPS).
An overview of our methodology
Because we focus on the specific outcome of the progress students make over an academic year, it is important to highlight one notable limitation of our approach: we have a limited line of sight “under the hood” into the role that localized environmental, regulatory and organizational factors play in individual school performance. Our contributions to the K–12 education research and practice landscape are therefore to test fundamental questions about the effectiveness of charter schools and to highlight outcomes and trends rooted in academic progress.
Since the 2009 study, Multiple Choice: Charter School Performance in 16 States, we have refined our matching and analysis techniques and expanded our data collection.
We apply the Virtual Control Record (VCR) protocol to create a “virtual twin” for a charter school student. For research purposes, the virtual twin differs from the charter student only in the school attended.
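To make the virtual-twin idea concrete, the sketch below shows one way such a match could be constructed. This is an illustration only, not the study's actual VCR protocol: the specific matching fields (grade, gender, race, lunch eligibility, special education status) and the baseline-score tolerance are assumptions chosen for the example.

```python
# Illustrative sketch of virtual-twin matching. NOT the study's actual VCR
# code; the matching fields and the 0.1 score tolerance are hypothetical.
from dataclasses import dataclass

@dataclass
class Student:
    grade: int
    gender: str
    race: str
    frl: bool               # free or reduced-price lunch eligibility
    sped: bool              # special education status
    baseline_score: float   # prior-year test score, in standard deviation units

def find_virtual_twins(charter_student, tps_students, tolerance=0.1):
    """Return TPS students identical on observed traits and close in baseline score."""
    return [
        s for s in tps_students
        if (s.grade, s.gender, s.race, s.frl, s.sped)
           == (charter_student.grade, charter_student.gender,
               charter_student.race, charter_student.frl, charter_student.sped)
        and abs(s.baseline_score - charter_student.baseline_score) <= tolerance
    ]
```

The key design point this sketch captures is that the twin differs from the charter student only in the school attended: every observable characteristic, including the starting test score (within a small tolerance), is held equal before outcomes are compared.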
Our fair analysis of impacts on student academic progress
Most researchers agree the best method of measuring school effectiveness is to look at schools’ impact on student academic growth, independent of other possible influences. The technical term for this is “value-added” (Betts & Tang, 2008). The central idea is that schools should be judged on their direct contribution to student academic progress. This necessarily considers the students’ starting scores on standardized tests and student characteristics that might influence academic performance. This approach forms the foundation of our educational study design.
To conduct a fair analysis, this study followed the approach of our previous studies, examining the academic growth of individual students as reflected in their performance on state achievement tests in both reading and math. To ensure accurate estimates of the effect of charter school enrollment on student academic growth, we used statistical methods to neutralize the influence of student demographics and eligibility for categorical program support, such as free or reduced-price lunch eligibility and special education. In this way, we structured the analysis so that differences in academic growth between the two groups are a function of which schools they attended.
While we made every effort in each state to match the charter students and their virtual twins, it is important to recognize that states differ in where charter schools are located and in the students they serve. These differences mean charter students are not likely to be representative of the state’s full complement of students. Our statistical models therefore included controls for these between-state differences when estimating the overall impact of charter school attendance.
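The value-added design described above can be sketched as a regression of score growth on a charter indicator plus controls. The specification below is an assumed, minimal form for illustration, not the study's actual model; the variable names and control structure are hypothetical.

```python
# Minimal value-added sketch (assumed form, not the study's actual model):
# regress year-over-year score growth on a charter-attendance indicator
# plus a matrix of controls (student characteristics, state indicators).
import numpy as np

def charter_effect(growth, charter, controls):
    """OLS estimate of the charter coefficient, in standard deviation units.

    growth   : (n,) array of year-over-year test-score gains
    charter  : (n,) 0/1 indicator for charter attendance
    controls : (n, k) matrix of demographic and state-dummy controls
    """
    X = np.column_stack([np.ones_like(growth), charter, controls])
    beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
    return beta[1]  # coefficient on the charter indicator
```

The design choice this mirrors is that the charter coefficient is estimated net of the controls, so the remaining growth difference can be attributed to the school attended rather than to student characteristics.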
Basic analytic models we employed
The purpose of this study is to address multiple questions, all centered on one core question: “How did the academic growth of charter school students compare to that of similar students who attended traditional public schools (TPS)?” By answering this foundational question, we aim to extend the pool of knowledge on charter school effectiveness and provide reliable information for policymakers.
In Volume 1, Charter School Performance in 31 States, we analyze charter schools’ effectiveness in the 31 states with which we have data partnerships. We also discuss the performance change for the states covered in the 2009 and 2013 reports. These cross-study comparisons are included by research topic when applicable.
How we present the results
We present the findings in units of days of learning to make the results clearer to non-technical readers. The statistical analysis produces results denominated in standard deviations, an unfamiliar currency to the general public. The days-of-learning metric transforms the statistical findings of our analysis using a protocol that was developed prior to the study and then applied here.1 For each growth period, we identify the one-year learning growth of an exactly average TPS student in each state and grade and set that learning gain as “180 days of learning in 180 days of schooling.” We then take our results, student by student, and compare their academic progress to the benchmark of 180 days of learning. If a student in our study learns more than the benchmark, we award them extra days of learning on top of the 180. If a student learns less than the benchmark, they are awarded negative days of learning, which, added to the 180-day benchmark, results in fewer days of learning.2
While transforming the statistical results into days of learning provides a more accessible measure, the days of learning are estimates and should be used as general guides (Hanushek & Rivkin, 2006). We provide the difference in growth in standard deviation units in the outputs of the statistical methods used for each analysis found in the Technical Appendix.
1 Using nationwide growth data from the National Assessment of Educational Progress, the transformation involves multiplying the standard deviation units produced by our statistical analyses by 578 days. This yields 5.78 days of learning for every 0.01 standard deviation difference in our analysis. For those wanting to convert these larger counts into weeks or months: a school week consists of five days; a school month is 20 days; and a quarter, or nine-week term, is typically 45 days.
2 The expression “additional days of learning” does not mean the students were necessarily in school for more days during the school year. It means that the additional learning that took place in charter schools during the school year was equivalent to attending school for x additional days in a TPS setting.