The Measure of Progress
Use a National Benchmark to Clarify State Standards
Ah yes, a national benchmark. Hoxby calls it a benchmark; I call it a test, and I say to hell with it.
And she has the audacity to use the word heart. Not surprisingly, her definition is different from yours and mine.
As Gerald Bracey observes:
The NAEP achievement levels are ridiculous and everybody knows this. It is criminal to keep using them. They've been rejected by NAS, NAE, GAO, CRESST and individual psychometricians, but no one is interested in getting better ones. At least, no one with any power to do something. They have political utility--the mileage in saying American kids are awful. Paige used to say he would use the levels to shame states into doing better. Hoxby's proposal only quantifies that shaming.
As almost everyone knows by now, the central aim of the No Child Left Behind (NCLB) law is to ensure that every public school student is proficient in reading and math by 2014. It is a laudable goal, as shown by the near-unanimous, bipartisan vote for NCLB. The law's drafters even had the foresight to recognize that the goal would be accomplished only if every school began to work right away on closing the gap between current performance and proficiency. They were right to insist that schools make regular, measurable improvements on the way to the 2014 goal, especially with groups of students who have traditionally lagged behind others. Accordingly, the drafters made "adequate yearly progress" toward proficiency the heart of NCLB.
Unfortunately, four years into the life of the law -- and less than 10 years from 2014 -- there are signs of an irregular heartbeat. Though the goals and language of the law are right, the implementing regulations too often thwart the law's intent. The legislation repeatedly states that adequate yearly progress assessment should be based on statistics that are scientifically valid, but the regulations that help states abide by NCLB sometimes guide them toward statistically invalid calculations. Thus, a school may be identified as failing when it is not. Such mistakes undermine the legitimacy of the law, not just among parents and teachers, but even among proponents of accountability.
I want to suggest a few relatively easy fixes designed to get NCLB's heart ticking smoothly -- relatively easy because they can be implemented without new legislation, they respect the autonomy of states, and they are easy for schools to understand. Just as important, they neither penalize schools unfairly nor dilute the law.
Core Principles of Adequate Yearly Progress
The first principle is that every student can attain proficiency. It is not acceptable to ignore any group -- minorities, the poor, the disabled, students whose first language is not English. Another principle is respect for states' autonomy. This makes sense because states bear most of the burden for funding schools. The final principle is focus. The federal government cannot effectively monitor many targets for the 95,000 schools in the U.S. Therefore, adequate yearly progress focuses on one target: proficiency. It is up to states to reward a wider array of outcomes, through their own accountability systems.
Use a National Benchmark to Clarify State Standards
NCLB allows each state to create its own definition of proficiency in reading and math. While this allowance respects state autonomy, it has given rise to wide variation in states' proficiency levels. Ironically, the states that are most likely to be penalized under NCLB are those that took accountability most seriously prior to the law. Before the law, they had already set ambitious standards for their students. States that set low proficiency requirements -- seemingly on the basis of what schools could readily achieve rather than on what students should know -- find the law less onerous.
Given that states were allowed to set their own proficiency levels and that part of the implementation period has already elapsed, the federal government must allow states to stick to their approved plans. There is no reason, however, that citizens should not know whether their state's standard is tough or lenient. Such transparency might encourage states with lenient proficiency standards to raise their requirements.
What, however, can the federal government do to keep faith with states that set unusually high proficiency standards for themselves? I propose that any state with a proficiency standard higher than the median state's standard should have its deadline extended beyond 2014 in proportion to the amount by which its proficiency standard exceeds the typical one.
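The column does not spell out the arithmetic of the proportional extension, so the sketch below is one illustrative reading, not Hoxby's formula: it assumes state cutoffs have been placed on a common scale (say, NAEP points) and stretches the remaining implementation window in proportion to how far a state's cutoff exceeds the median. The function name, parameters, and base year are all my own assumptions.

```python
def extended_deadline(state_cutoff, median_cutoff, base_year=2003, deadline=2014):
    """Illustrative sketch (not from the column): stretch the 2014
    deadline in proportion to how far a state's proficiency cutoff
    exceeds the median state's, with cutoffs on a common scale."""
    if state_cutoff <= median_cutoff:
        return float(deadline)  # typical or lenient states keep 2014
    # proportional stretch of the remaining implementation window
    stretch = (state_cutoff - median_cutoff) / median_cutoff
    return deadline + (deadline - base_year) * stretch

# e.g., a cutoff 20% above the median stretches an 11-year window by 2.2 years
```

Under these assumptions, a state whose cutoff is 20% above the median would see its deadline move from 2014 to roughly 2016, while states at or below the median keep 2014 unchanged.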
To create transparency and provide a scientific basis for the extension of the deadline in states with tough proficiency standards, we should benchmark states' proficiency levels against the National Assessment of Educational Progress, which is administered in every state to a representative sample of children in grades four and eight. It is not a perfect bridge between states' tests, but it is plenty good for the purpose.
Measure Progress With a Statistically Valid Approach
Measuring adequate yearly progress is in fact a fairly straightforward statistical problem. Essentially, we want to use a school's record of performance to forecast whether 100% of its students will attain proficiency by 2014. Using regression, it is simple for a statistician to construct a forecast and predict what each school's distribution of scores will be in 2014. Forecasts are not perfect, of course, but conventional statistics allow us to construct high, medium, and low forecasts. Using them, we can state the percentage of students whom we can confidently forecast to be below proficiency in 2014.
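A minimal sketch of the kind of regression forecast described above, under simplifying assumptions of my own (a linear trend fit to a school's percent-proficient history, with a crude residual-based band standing in for the high/medium/low forecasts; a real implementation would model the full distribution of scores):

```python
import numpy as np

def forecast_2014(years, pct_proficient, z=1.64):
    """Illustrative only: fit a linear trend to a school's
    percent-proficient history and extrapolate to 2014, returning
    rough (low, medium, high) forecasts clipped to [0, 100]."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(pct_proficient, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)       # least-squares trend
    mid = intercept + slope * 2014               # point forecast for 2014
    resid = y - (intercept + slope * x)
    se = resid.std(ddof=2) if len(x) > 2 else 0.0
    clip = lambda v: float(min(max(v, 0.0), 100.0))
    return clip(mid - z * se), clip(mid), clip(hi := mid + z * se)
```

A school moving from 40% to 55% proficient over 2003-2006, for example, projects to about 95% in 2014 on this trend; the low and high forecasts show how much confidence that projection deserves.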
The technique is similar to that used to forecast whether a business will meet its earnings target, whether an airplane will arrive on time, whether a storm will reach hurricane force, and so on. In other words, we have methods for forecasting progress toward proficiency. NCLB regulations do not need to reinvent the wheel.
A major benefit of using conventional statistical forecasts is that we automatically determine, with a specific level of confidence, whether ethnic and other subgroups are making adequate yearly progress. Currently, a subgroup's progress counts toward a school's rating whenever the group exceeds a fixed minimum size, even if that size is too small to support a confident forecast. In other words, a subgroup may be shown as failing even though conventional statistics do not give us any confidence that it actually is. Such mistakes are very destructive because a whole school fails to make adequate yearly progress if a single one of its subgroups fails.
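The sample-size point can be made concrete with the textbook standard error of a proficiency rate (this formula is standard statistics, not anything in the regulations): the margin of error shrinks with the square root of the number of students, so the same apparent shortfall can be pure noise in a small subgroup and a real signal in a large one.

```python
import math

def proficiency_se(p, n):
    """Standard error of an observed proficiency rate p among n
    students. Small subgroups produce wide margins of error, so an
    apparent shortfall may be statistical noise, not real failure."""
    return math.sqrt(p * (1 - p) / n)

# With p = 0.5, a subgroup of 25 students has a standard error of
# 0.10 (a ~20-point margin at 95% confidence), while a school of
# 400 has a standard error of 0.025 (a ~5-point margin).
```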
A second benefit of the method described above is that it automatically ensures that students performing far below proficiency help a school make adequate yearly progress if their achievement is rising fast enough. There is no need for the clumsy, confusing "safe harbor" provision, which is the current way to deal with schools that start far from the goal.
A final benefit of forecasting is that it becomes easy to convey the meaning of adequate yearly progress to teachers and parents. A progress report should contain simple figures showing where the school is currently, where the school needs to be in 2014, and a forecast of where the school will end up if it stays on its forecast trajectory. The figures should show high, medium, and low forecasts so that people understand how certain the forecast is.
Put the Participation Requirement on a Sound Basis
Under current legislation, a school fails to make adequate yearly progress if fewer than 95% of its total students or students in any subgroup participate in assessment. (If a subgroup is too small for the progress calculation, it is exempt from the participation requirement.) The goal of the participation provision is excellent: We do not want schools to discourage low-achieving students from taking tests. Nevertheless, the rule is arbitrary: A school might have 94% participation through no fault of its own and fail to make adequate yearly progress.
Instead of requiring 95% participation, regulations should assign the minimum score to any student who does not participate. Since all students will score at or above the minimum score, this rule would give schools strong incentives to encourage universal participation. Yet, a school could not face big penalties for a small slip-up: It could only be penalized to the extent that the minimum scores recorded for absent students drag down overall scores.
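The proposed rule reduces to a simple imputation, sketched here with an illustrative minimum score of zero (the column does not specify the scale):

```python
def school_average(scores, n_enrolled, minimum_score=0.0):
    """Illustrative sketch of the proposed participation rule:
    every enrolled student who did not test is assigned the
    minimum score, so absences drag the average down smoothly
    instead of triggering an all-or-nothing 95% cutoff."""
    n_absent = n_enrolled - len(scores)
    return (sum(scores) + n_absent * minimum_score) / n_enrolled
```

For example, a school of 100 students averaging 70 that misses one student sees its average fall to 69.3 -- a small, proportionate cost, rather than an automatic failure for landing at 99% participation.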
What about students who switch schools mid-year? Currently, their performance counts for the school at which they are tested, even if they've just arrived. Instead, their achievement should be factored into adequate yearly progress with a weight equal to the share of the year that they've spent at the school since the last test was administered. In most cases, the sending school and receiving school will share responsibility for the achievement of a student who moves.
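The weighting described above amounts to an ordinary weighted average, sketched here with hypothetical names (each student contributes a weight equal to the share of the year spent at the school since the last test):

```python
def weighted_progress(contributions):
    """Illustrative sketch: average student scores weighted by the
    share of the year each student spent at this school, so a
    mid-year mover counts partially at both sending and receiving
    schools. `contributions` is a list of (weight, score) pairs."""
    total_weight = sum(w for w, _ in contributions)
    return sum(w * s for w, s in contributions) / total_weight
```

A full-year student scoring 80 and a half-year arrival scoring 60 would yield a weighted average of about 73.3, with the mover's other half-year counting at the sending school.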
Adequate yearly progress is the heart of NCLB. On it, many consequences depend. Tracking progress toward proficiency is crucial. Fortunately, administrative action can correct deficiencies in the way that adequate yearly progress is measured and reported today. We need to benchmark state definitions of proficiency, measure progress by forecasting whether each school will attain proficiency, and encourage participation with sound calculations.
Ms. Hoxby is professor of economics at Harvard and a member of the Hoover Institution's Koret Task Force on K-12 Education.
Caroline M. Hoxby
Wall Street Journal