Matches in SemOpenAlex for { <https://semopenalex.org/work/W4386093885> ?p ?o ?g. }
Showing items 1 to 64 of 64, with 100 items per page.
- W4386093885 endingPage "1124" @default.
- W4386093885 startingPage "1118" @default.
- W4386093885 abstract "Here, we make our best guess, based on the highest-quality evidence available, about the effect a policy that increases school spending would have on student outcomes. First, we establish a few key empirical facts about the historical relationship between school spending and student outcomes, and we review what the existing older literature found about this relationship. A prevalent counterargument to the notion that higher school spending correlates with improved student achievement relies on the examination of national test scores over time, as measured by the National Assessment of Educational Progress (NAEP). Because school spending has increased by 130% since the 1970s, one might expect a corresponding increase in achievement, assuming all other factors remain constant. Naturally, not all factors are constant; however, thoughtful analysis of the data reveals that test scores did, in fact, increase over this period. The NAEP is administered in English and math in grades 4, 8, and 12. Figure 1 displays scores on these tests from 1978 to 2012 for all students and then by ethnicity (White, Black, Hispanic). Math scores have risen for all grade levels and ethnic groups. In English, gains have been less pronounced, which aligns with the common finding that school-level interventions generally have a greater impact on math than on language skills. There were modest gains for all groups in 4th and 8th grades. The average 9-year-old in 2011 had the same English and math proficiency as a 10- and an 11-year-old, respectively, in 1978. While 12th-grade English scores changed little on average, there were substantial gains among Black and Hispanic students (groups that disproportionately experienced increased school spending). In sum, test scores have improved for almost all grade levels and ethnic groups. This is progress. 
If the scores at all grade levels were equally reliable, one might argue that 12th-grade scores (which increased by less), rather than younger-grade scores, provide a better reflection of what students ultimately know at the end of their K–12 schooling. However, the tests are not equally reliable, because changes in the 12th-grade NAEP reflect both changes in knowledge and potentially sizable changes in the composition of test-takers. The compositional changes that bias trends in the 12th-grade NAEP (but not the 4th- or 8th-grade NAEP) occur for two reasons. First, 4th- and 8th-grade scores represent the knowledge of students educated in the public school system, as most students have not dropped out by these grades. In contrast, 12th-grade scores only account for the performance of those who remain in school through 12th grade. As high school graduation rates have increased over time, more students who would likely have low NAEP scores take the 12th-grade test. This may cause 12th-grade scores to decrease over time even if there were no changes in knowledge. Second, because schools must participate in the 4th- and 8th-grade NAEP (but not the 12th-grade NAEP) to receive Title I funding, participation rates for the 4th- and 8th-grade NAEP have been consistently high (over 90%), while those for the 12th-grade NAEP have varied between 66% and 91% (National Center for Education Statistics, 2017). For these two reasons, trends in 12th-grade NAEP scores may reflect both changes in knowledge and significant compositional changes, making them unreliable. A reasonable person would trust the 4th- and 8th-grade scores more, as they are not biased by compositional changes. Importantly, these more reliable tests have been rising over time along with school spending. As evidence that changes in 8th-grade scores reflect persistent changes in skill (which changes in 12th-grade scores may not), Doty et al. 
(2022) found that birth cohorts with higher 8th-grade NAEP scores have higher educational attainment, earn higher incomes, and have lower rates of teen motherhood, incarceration, and arrest. These authors find no such relationship for 12th-grade NAEP scores, making it indefensible to ignore the 8th-grade NAEP trends in favor of the potentially biased 12th-grade NAEP trends. Using the more reliable 4th- and 8th-grade scores, one can compute the association between test scores and school spending in the time series as a useful benchmark. Roughly, the time-series relationship suggests a 0.052σ and 0.037σ increase in elementary school test scores per $1,000 increase in per-pupil spending for math and English, respectively (implying 0.044σ for both subjects combined). These estimates are similar to the most credible causal evidence. The research literature has never supported the idea that increased school spending doesn't improve student outcomes. Hanushek (2003) reviewed 163 studies published before 1995 relating school resources to student achievement. He found over 10 times as many positive and significant estimates as would be expected by random chance if spending had no impact, and nearly 4 times as many positive and significant estimates as negative ones, which is strong evidence of a positive association between school spending and student achievement in older studies. However, he concluded that “overall resource policies have not led to discernible improvements in student performance” (p. F89). We explain below that this conclusion relies on flawed statistical reasoning that confuses statistical significance with the existence of an effect. A single study can be statistically insignificant for two reasons: (1) a lack of effect, and (2) a lack of statistical power to detect an effect (e.g., due to small sample size or noisy data). 
To illustrate the problem with conflating the two, consider a scenario with a sample of 10,000 students in which a strong, highly significant positive relationship exists between increased school spending and student outcomes (with a t-statistic of 4 and a p-value <0.0001). Now imagine dividing the data into 100 samples of 100 observations each. Each subsample would exhibit a similar relationship but with standard errors approximately 10 times larger than in the full data (yielding t-statistics around 0.4 and p-values ≈ 0.69). Each sample would likely show a positive, but insignificant, relationship. Using Hanushek's flawed logic (i.e., disregarding statistical power and merely counting statistically significant effects), one would mistakenly conclude that there is no systematic relationship overall. Analyses of a set of studies similar to those in Hanushek (2003) that avoid this statistical error show a strong positive association between school spending and student outcomes (Hedges et al., 1994). To be crystal clear, a scientific reading of the old literature has never supported the view that there is no association between school spending and student achievement. While suggestive of a positive effect of school spending on test scores, the evidence above does not represent credibly causal relationships. To better understand the likely causal relationship, we must examine the more recent empirical literature. This literature aims to uncover the relationship between increased school spending and student outcomes while carefully defining a comparison group and isolating changes in spending due to specific, identifiable policies. The studies in this literature vary in the policies examined, contexts studied, and estimation approaches used. However, they consistently demonstrate a positive, credibly causal relationship between school spending and students' educational outcomes. 
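The subsampling illustration above is easy to verify numerically. The following sketch is a hypothetical simulation (not from the article); the slope and noise are chosen so the full-sample t-statistic lands near 4, as in the text's example, and splitting the data into 100 subsamples of 100 yields mostly insignificant t-statistics near 0.4:

```python
# Hypothetical simulation: a highly significant slope in a large sample
# becomes individually insignificant in small subsamples, even though
# the underlying relationship is unchanged.
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 100
x = rng.normal(size=n)
y = 0.04 * x + rng.normal(size=n)   # true slope 0.04 -> full-sample t near 4

def slope_t(x, y):
    """t-statistic of the OLS slope b in y = a + b*x."""
    xc = x - x.mean()
    b = (xc * y).sum() / (xc ** 2).sum()
    resid = y - y.mean() - b * xc
    se = np.sqrt((resid ** 2).sum() / (len(x) - 2) / (xc ** 2).sum())
    return b / se

t_full = slope_t(x, y)                                   # strongly significant
t_subs = [slope_t(x[i::k], y[i::k]) for i in range(k)]   # 100 subsamples of 100
share_sig = np.mean([abs(t) > 1.96 for t in t_subs])     # few clear 5% threshold
```

Counting only the significant subsample estimates, as the flawed logic does, would miss that nearly all of them are positive and centered on the true effect.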
Several studies use national data to show that increased spending due to the passage of school finance reforms is associated with improved test scores, educational attainment, and income (e.g., Jackson et al., 2016; Murray et al., 1998; Rothstein & Schanzenbach, 2022). However, many studies do not rely on the introduction of a reform. Several use increases in spending driven by discontinuities and deterministic functions embedded in state school funding formulas to reveal improved test scores or educational attainment in states such as Massachusetts, Michigan, and New York (Gigliotti & Sorensen, 2018; Guryan, 2001; Hyman, 2017; Papke, 2005; Roy, 2011). Exploiting narrowly passed school spending referenda, Baron (2022) found that spending increases improve test scores and educational attainment, while Miller (2018) associated spending increases from house price appreciation with improved outcomes. Examining spending declines, Jackson et al. (2021) linked recessionary cuts in per-pupil spending to lower test scores and reduced college attendance, and Downes and Figlio (1998) associated lower spending due to spending limits with increased dropout rates and reduced test performance. Although one can always critique any individual study, collectively these studies consistently support a robust positive causal relationship, on average, between increased school spending and student outcomes. There are now two recent summaries of this more recent literature: one by Jackson and Mackevicius (in press) and the other by Handel and Hanushek (2023). Although the authors of these summaries differ in their interpretations (we detail this below), both found that, on average, there is a strong, statistically significant relationship between increased school spending and student outcomes. 
Jackson and Mackevicius showed that, based on 31 of these recent studies (published since 2001), on average, a $1,000 increase in per-pupil public operational school spending (sustained for 4 years) increases test scores by about 0.035σ and college-going by 2.8 percentage points. Handel and Hanushek examined 43 estimates from papers published between 1999 and 2022. Translating their estimates (reported in percent changes) into dollar amounts at current spending levels, increasing school spending by $1,000 per pupil increases test scores by 0.0466σ and college-going by 3.8 percentage points. In both meta-studies, almost all of the estimates are positive, and most are statistically significant. That is, in both studies, when districts increase spending, outcomes improve the vast majority of the time. Averaging across both studies, a $1,000 increase in per-pupil spending would raise test scores by about 0.04σ and college attendance by around 3.3 percentage points. We now provide context for these estimates. Because test scores measure only part of the benefits of better schools, these estimates likely represent a lower bound on the overall benefits of spending. Using test scores, Chetty et al. (2014) found a 0.5% wage increase for each percentile increase. A 0.04σ increase equates to about 1.6 percentile points, implying a 0.8% wage increase for a roughly $4,000 spending increase. With lifetime earnings of around $600,000, this results in a $4,800 benefit, indicating a benefit-cost ratio of about 1.2. However, this is a probable lower bound. To approach the true benefit, we examine effects on educational attainment. Zimmerman (2014) showed that a 10 percentage point increase in college attendance leads to a 5% wage increase. Assuming this holds, a 3.3 percentage point increase in college attendance would result in a 1.65% wage increase for a roughly $4,000 spending increase. 
With a present discounted value of lifetime earnings of about $600,000, this translates to $9,900, yielding a benefit-cost ratio of about 2.475. Note that this calculation doesn't account for benefits such as reduced criminality or other advantages to students, so it likely understates the full benefits of increased school spending. In summary, the recent literature suggests that each dollar spent on operational school spending generates at least a $1.20 return for society on average, with the true benefit possibly exceeding $2.47. While one might worry that results from studies of older reforms on educational attainment may not apply today, this concern doesn't hold for test score studies, as most examine policies implemented in the 1990s or later. Jackson and Mackevicius (in press) tested whether the marginal effect of school spending is larger when baseline spending levels are low, finding no evidence of such a relationship for either test scores or educational attainment. The effect of an additional $1,000 is consistent across baseline spending levels, suggesting the average effect of school spending will resemble that found in the recent literature reviews. Notably, a recent study in New York State, with well above-average baseline spending, reported a marginal effect similar to the pooled average. In summary, both formal tests and the effects of recent reforms indicate that the benefits reported in the studies summarized above are likely applicable today. The finding that test scores will increase by an average of 0.04σ and college-going by 3.3 percentage points doesn't guarantee these results in every context. It implies that approximately half the time one might observe smaller benefits, while in other instances the effects could be larger. Considering the benefit-cost ratios, this means that the benefits of increased spending will usually outweigh the costs. However, large social returns cannot be guaranteed in all cases or settings. 
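The back-of-the-envelope arithmetic above can be reproduced directly. All inputs below are the article's stated figures (a $1,000-per-pupil increase sustained for 4 years, roughly $600,000 in discounted lifetime earnings, and the Chetty et al. and Zimmerman wage-return estimates):

```python
# Reproducing the article's two benefit-cost calculations.
spending_increase = 4 * 1_000      # $1,000 per pupil, sustained 4 years
lifetime_earnings = 600_000        # present discounted value of earnings

# Test-score channel: 0.04 sd ~ 1.6 percentile points; Chetty et al.
# (2014) imply a 0.5% wage gain per percentile.
wage_gain_scores = 1.6 * 0.005                            # 0.8%
benefit_scores = wage_gain_scores * lifetime_earnings     # $4,800
ratio_scores = benefit_scores / spending_increase         # ~1.2

# Attainment channel: 3.3 pp more college-going; Zimmerman (2014)
# implies a 5% wage gain per 10 pp, i.e. 0.5% per pp.
wage_gain_college = 3.3 * 0.005                           # 1.65%
benefit_college = wage_gain_college * lifetime_earnings   # $9,900
ratio_college = benefit_college / spending_increase       # ~2.475
```

The two channels bracket the article's conclusion that each dollar of operational spending returns between roughly $1.20 and $2.47 on average, before counting benefits outside the labor market.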
To understand how reliable increased school spending is as an investment (i.e., the likelihood that a policy will improve student outcomes), one must understand the spread of actual school spending impacts. Both Jackson and Mackevicius (in press) and Handel and Hanushek (2023) tried to address this issue. However, Handel and Hanushek blurred the distinction between an estimate and an effect in their analysis, resulting in a drastic overstatement of the uncertainty surrounding the potential for policies to improve outcomes in specific settings. Below, we explain why this matters. In any given setting i, there is a true effect of a $1,000 per-pupil increase in school spending (θi). We draw a random sample from the population and try to estimate this effect. However, due to sampling variability, measurement error, and other sources of noise, we do not observe θi itself, but a noisy version of it, Yi = θi + εi. Note that the noise (εi) is itself a source of variability. For example, Roy (2011) and Chaudhary (2009) both examined the effects of the same policy, but one reports an estimate of 0.38 while the other reports an estimate of 0.0179. A failure to properly account for the large chance differences one can observe among estimates of the same effect can lead one to mistake noise for signal and misinterpret variability in estimates as variability in effects. Sadly, Handel and Hanushek (2023) did exactly this, while saying things like “the point estimates are widely different across studies” and “the outliers are dramatically different” (p. 31). As we highlighted above, estimates can be drastically different even when the underlying effect is the same; there is nothing startling or dramatic about chance variability. To understand how heterogeneous effects are likely to be, one must estimate that heterogeneity directly. 
The basic idea behind estimating true heterogeneity is that one treats the reported standard error from each study as a measure of that study's noise (the within-study error) and then estimates how much variability exists that cannot be explained by noise alone. This heterogeneity parameter (τ) measures how much any two randomly drawn effects are likely to differ from each other and can be used to predict how likely one is to observe an effect of any given size. Jackson and Mackevicius (in press) reported that the standard deviation of true heterogeneity in test score impacts is about two-thirds the size of the mean. Accordingly, true test score effects are almost always positive, ranging between -0.004σ and 0.067σ 90% of the time. For educational attainment, the estimated standard deviation of heterogeneity is less than half the size of the mean, so true effects on college-going are virtually always positive, ranging between 0.5 and 5.1 pp 90% of the time. Using the previous numbers, a college-going effect of 1.35 percentage points would pass a cost-benefit test. The pooled average and heterogeneity reported indicate this would occur at least 85 percent of the time. That is, each dollar spent on public schools would yield more than one dollar in social return at least 85% of the time. This does not account for benefits through taxes, reduced crime, or other channels, so the true likelihood is higher still. All of the evidence points to a strong causal relationship between increased school spending and student performance. The best evidence suggests that increasing school spending by $1,000 per pupil would increase student test scores by about 0.04σ and college attendance by roughly 3.3 percentage points. The most careful examination of heterogeneity shows that while a positive social return is not guaranteed, it is exceedingly likely. Note that the range of estimates reflects the broad range of things schools tend to do with increased money. 
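As a rough sketch, assuming true effects are normally distributed around the pooled mean with standard deviation τ (the standard random-effects assumption), the college-going figures above follow from a mean of 2.8 pp and τ of roughly 1.4 pp (half the mean, consistent with the quoted 0.5–5.1 pp range):

```python
# Sketch of the random-effects heterogeneity calculation, assuming
# normally distributed true effects; 2.8 pp and tau ~ 1.4 pp are the
# article's college-going figures.
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mean_pp, tau_pp = 2.8, 1.4
# 90% of true effects fall within +/- 1.645 * tau of the mean:
lo = mean_pp - 1.645 * tau_pp    # ~0.5 pp
hi = mean_pp + 1.645 * tau_pp    # ~5.1 pp
# Probability a random setting's effect clears the ~1.35 pp
# break-even threshold for passing a cost-benefit test:
p_pass = 1 - normal_cdf((1.35 - mean_pp) / tau_pp)   # ~0.85
```

This is where the "at least 85% of the time" figure comes from: under these assumptions, only the lower tail of the effect distribution falls short of the break-even point.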
As such, the range of estimates is indicative of what one would see in a typical state or district given more money to spend. Accordingly, the range of plausible effects found indicates that by simply letting districts do what they typically do with an additional $1,000 per pupil, we can expect positive test score effects more than 90% of the time, positive educational attainment impacts more than 99% of the time, and returns exceeding the cost at least 85% of the time. There is a growing literature on how various school inputs, such as class size, teacher quality, and high-quality professional development, improve educational outcomes. All of these cost money. There is also evidence that incentives and good governance matter. As such, there are reasons why some areas are more efficient with their additional funds than others. However, the fact that certain kinds of spending may be more effective at improving outcomes than others is largely irrelevant to the question of whether increasing school spending is worthwhile. Whatever the budget may be, it should be spent as effectively as possible. The debate around school spending often seems to be framed as one of providing more resources versus spending resources more effectively. This dichotomy is largely rhetorical and has little economic or policy content. So long as the additional funds pass a cost-benefit test (which research suggests they typically do), one should do both. While the evidence shows that providing largely unrestricted additional money to schools does improve outcomes in general, this does not mean that increasing school spending (without regard for what it may be spent on) is the best way to improve outcomes. However, the results clearly indicate that increasing school spending in ways that have been done in the past, at current spending levels, will likely add social value. As such, it should be part of the policymaker's toolkit. C. 
Kirabo Jackson is Abraham Harris Professor of Education and Social Policy in the Department of Human Development and Social Policy at Northwestern University, 2120 Campus Drive, Room 204, Annenberg Hall, Evanston, IL 60208 (email: [email protected]). Claudia Persico is an Associate Professor in the Department of Public Administration and Policy at American University, School of Public Affairs, 4400 Massachusetts Avenue NW, Washington, DC 20016–8070 (email: [email protected])." @default.
- W4386093885 created "2023-08-24" @default.
- W4386093885 creator A5012304167 @default.
- W4386093885 creator A5024307347 @default.
- W4386093885 date "2023-08-22" @default.
- W4386093885 modified "2023-10-01" @default.
- W4386093885 title "Point column on school spending: Money matters" @default.
- W4386093885 cites W1639391176 @default.
- W4386093885 cites W1978522237 @default.
- W4386093885 cites W1984288743 @default.
- W4386093885 cites W1992035217 @default.
- W4386093885 cites W2037004354 @default.
- W4386093885 cites W2037039536 @default.
- W4386093885 cites W2186342072 @default.
- W4386093885 cites W2888749697 @default.
- W4386093885 cites W3121907929 @default.
- W4386093885 cites W3122939989 @default.
- W4386093885 cites W3162126022 @default.
- W4386093885 cites W3195791887 @default.
- W4386093885 cites W4226494150 @default.
- W4386093885 cites W4310713225 @default.
- W4386093885 cites W4311940028 @default.
- W4386093885 doi "https://doi.org/10.1002/pam.22520" @default.
- W4386093885 hasPublicationYear "2023" @default.
- W4386093885 type Work @default.
- W4386093885 citedByCount "0" @default.
- W4386093885 crossrefType "journal-article" @default.
- W4386093885 hasAuthorship W4386093885A5012304167 @default.
- W4386093885 hasAuthorship W4386093885A5024307347 @default.
- W4386093885 hasBestOaLocation W43860938851 @default.
- W4386093885 hasConcept C13355873 @default.
- W4386093885 hasConcept C162324750 @default.
- W4386093885 hasConcept C2524010 @default.
- W4386093885 hasConcept C2780551164 @default.
- W4386093885 hasConcept C28719098 @default.
- W4386093885 hasConcept C33923547 @default.
- W4386093885 hasConcept C556758197 @default.
- W4386093885 hasConceptScore W4386093885C13355873 @default.
- W4386093885 hasConceptScore W4386093885C162324750 @default.
- W4386093885 hasConceptScore W4386093885C2524010 @default.
- W4386093885 hasConceptScore W4386093885C2780551164 @default.
- W4386093885 hasConceptScore W4386093885C28719098 @default.
- W4386093885 hasConceptScore W4386093885C33923547 @default.
- W4386093885 hasConceptScore W4386093885C556758197 @default.
- W4386093885 hasIssue "4" @default.
- W4386093885 hasLocation W43860938851 @default.
- W4386093885 hasOpenAccess W4386093885 @default.
- W4386093885 hasPrimaryLocation W43860938851 @default.
- W4386093885 hasRelatedWork W1541807514 @default.
- W4386093885 hasRelatedWork W2017743522 @default.
- W4386093885 hasRelatedWork W2034170613 @default.
- W4386093885 hasRelatedWork W2097500084 @default.
- W4386093885 hasRelatedWork W2108954098 @default.
- W4386093885 hasRelatedWork W2587828312 @default.
- W4386093885 hasRelatedWork W2755602841 @default.
- W4386093885 hasRelatedWork W2778542558 @default.
- W4386093885 hasRelatedWork W4255972572 @default.
- W4386093885 hasRelatedWork W4378174678 @default.
- W4386093885 hasVolume "42" @default.
- W4386093885 isParatext "false" @default.
- W4386093885 isRetracted "false" @default.
- W4386093885 workType "article" @default.