

High-stakes Testing and Student Achievement: Updated Analyses with NAEP Data

Susan Notes:


In spite of research showing how harmful high-stakes testing is to teaching and learning, the U.S. Department of Education continues to promote it. Use this research to present evidence in letters to the editor, letters to Congressional representatives, and so on.

Abstract: The present research is a follow-up study of earlier published analyses that examined the relationship between high-stakes testing pressure and student achievement in 25 states. Using the previously derived Accountability Pressure Rating (APR) as a measure of state-level policy pressure for performance on standardized tests, a series of correlation analyses was conducted to explore relationships between high-stakes testing accountability pressure and student achievement as measured by the National Assessment of Educational Progress (NAEP) in reading and math.

Consistent with earlier work, we found stronger positive correlations between the pressure index and NAEP performance in fourth-grade math, and weaker connections between pressure and fourth- and eighth-grade reading performance. Policy implications and future directions for research are discussed.
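To make the abstract's method concrete, here is a minimal sketch of the kind of state-level correlation analysis it describes, written in Python with SciPy. The pressure ratings and NAEP scores below are invented placeholders for illustration, not figures from the study.

    # Pearson correlation between a state accountability pressure rating
    # and NAEP scale scores, in the spirit of the analyses described above.
    # All values are hypothetical, NOT the study's data.
    from scipy.stats import pearsonr

    # One entry per state: an APR-style pressure rating and a grade-4
    # math NAEP scale score (both invented for illustration).
    pressure = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 1.5, 2.6]
    naep_math_g4 = [236, 241, 233, 244, 238, 242, 231, 237]

    r, p = pearsonr(pressure, naep_math_g4)
    print(f"r = {r:.2f}, p = {p:.3f}")

A positive, significant r for math alongside a near-zero r for reading would mirror the pattern the abstract reports.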


Introduction

The present study adds to a growing literature on the relationship between high-stakes testing accountability and student achievement. The major goal of federal and state high-stakes testing policies is to improve schools. The theory of action undergirding this approach suggests that by tying negative consequences (e.g., public exposure, external takeover) to standardized test performance, teachers and students in low-performing schools will work harder and more effectively, thereby increasing what students learn. Although the practice of high-stakes testing dates back several decades in various districts (e.g., the Chicago public schools) and states (Texas, New York, Florida), the passage of the No Child Left Behind Act in 2002 mandated high-stakes testing nationwide and at many more grade levels than was customary.

The extant literature on high-stakes testing and student achievement can be organized into three types. In the first type, researchers use two-group designs to compare achievement patterns in states with accountability practices versus those without such practices, or in states with a long history of accountability versus those with shorter histories (Amrein & Berliner, 2002a; Amrein-Beardsley & Berliner, 2003; Braun, 2004; Dee & Jacob, 2009). In the second, analysts rank states according to some measure of accountability and then use correlation or regression techniques to ascertain the form and significance of the relationship between accountability measures and student achievement (Carnoy & Loeb, 2002; Hanushek & Raymond, 2005). A third type of research focuses on specific aspects of high-stakes testing practice and impact as they affect particular districts, regions, or states (Clarke, Haney, & Madaus, 2000; Jacob, 2001; Winters, Trivitt, & Greene, 2010).

Each approach has methodological limitations, making it difficult to determine with confidence the effects of high-stakes testing. Still, a pattern seems to have emerged suggesting that high-stakes testing has little or no relationship to reading achievement and a weak to moderate relationship to math achievement, chiefly in fourth grade and only for certain student groups (Braun, Wang, Jenkins, & Weinbaum, 2006; Braun, Chapman, & Vezzu, 2010; Figlio & Ladd, 2008; Nichols, Glass, & Berliner, 2006). This pattern of results (effects confined largely to fourth-grade math) raises serious questions about whether high-stakes testing increases learning or merely encourages more vigorous test preparation (i.e., teaching to the test).

This study is a follow-up to our earlier work, in which we used an empirically derived measure of state-level high-stakes testing policy to examine the relationship between accountability policy implementation and student achievement as measured by the National Assessment of Educational Progress (NAEP). In contrast to other research that measures high-stakes testing accountability by the number of accountability-related laws a state has passed (Clarke et al., 2003; Pedulla et al., 2003), or that estimates the acceptance of accountability from state-level variables such as funding and student demographic characteristics (Braun et al., 2006; Carnoy & Loeb, 2002), our measure was derived from legislative efforts as well as "on-the-ground" implementation, response, and reaction (see Nichols et al., 2006). In our earlier analyses, we applied this unique measure of accountability pressure to NAEP fourth- and eighth-grade data from 1992-2003. The purpose of this follow-up study is to examine state accountability policy (measured by our state-level accountability pressure index) as it relates to more recent (2005, 2007, and 2009) NAEP data available . . .

Read the rest of this article at the hot link below.

— Sharon L. Nichols, Gene V. Glass, David C. Berliner
Education Policy Analysis Archives
2012-07-20
http://epaa.asu.edu/ojs/article/view/1048/988



