

What Counts as Credible Research?

Susan Notes:

In their discussions of Obama administration research policy, key officials continue to use phrases such as "what works" and "methodological rigor" that were invoked in the Bush administration to support highly restrictive definitions of appropriate research methods.

Time for change.

by Annette Lareau & Pamela Barnhouse Walters

It is a critical moment in educational policy. The Obama administration has renewed its emphasis on educational policy, and No Child Left Behind is up for renewal. But in the current debate there has not been sufficient discussion of a crucial piece of the educational debate: what kinds of research should be considered acceptable? In recent years, randomized-controlled trials were elevated to the position of "gold standard" for educational research. We believe this position to be highly problematic. As the debate about education begins to pick up speed, it is important to broaden the definition of legitimate educational research.

Seeking to "restore science to its rightful place" in policymaking (Obama, 2009), the Obama administration vowed to make important shifts in research policy. Nevertheless, there are signs that the current administration has allowed an overly narrow and restrictive definition of what constitutes "rigorous" or "scientific" research in education to stand. Notably, the privileging of the randomized controlled trial as the "gold standard" for education research has not been sufficiently questioned and challenged, nor has there been a vigorous effort to examine the many ways that this narrow vision of good research was institutionalized in federal policy. A revisiting of Bush administration science policy, we contend, needs to include a serious reexamination of the very definition of "scientific research" in education that came to prevail in that period and the ways in which that definition was institutionalized. Without such a reexamination, forms of research that could usefully inform educational policymaking will remain marginalized or underdeveloped. As Lisbeth Schorr (2009) wrote, "Policymakers radically diminish the potential of reforms if they allow themselves to be bullied into accepting impoverished definitions of credible evidence."

In their discussions of Obama administration research policy, key officials continue to use phrases such as "what works" and "methodological rigor" that were invoked in the Bush administration to support highly restrictive definitions of appropriate research methods. In June 2009, for example, the Director of the Office of Management and Budget, Peter R. Orszag, announced that the administration's focus on evidence-based policy decisions called for "new initiatives to build rigorous data about what works." Under the prior administration, the quest to determine "what works" was understood to require randomized controlled trials. In a July address to the Regional Education Labs, the newly appointed director of the Institute of Education Sciences, John Easton, affirmed his predecessor's commitment to "methodological rigor" in education research, a commitment that likewise appeared to call for randomized controlled trials.

In what follows we offer our thoughts on three key aspects of education research policy that should be carefully reexamined. First, rather than privileging randomized-controlled trials, we call for an embrace of a broader range of useful forms of research. Second, we highlight a need to better understand the limits and challenges of doing good research in a complex social setting such as schools. Third, we argue that proponents of evidence-based policymaking have an overly optimistic view of the potential role that good science can play in policymaking.

Broadening the Scope

In recent years, the notion of scientific "rigor" has been conflated with one particular approach: the randomized-controlled trial. Randomized-controlled trials were championed as a highly desirable method; many termed them the "gold standard." This privileged status has been institutionalized in key ways, including the very definition of research in the congressional act creating the Institute of Education Sciences (IES), the characteristics of desirable applications for post-doctoral funding, the funding priorities of the IES, and the preponderance of advocates for randomized-controlled trials on the national governing board that advises IES (U.S. Department of Education, 2002, 2009a, 2009b). While randomized-controlled trials are a good (although not the only) way of determining whether a fairly specific intervention is effective, such questions of what works constitute at best only a narrow slice of the questions about education that warrant policymakers' attention (Lareau, 2009).

We suggest that decision makers in the federal Department of Education need to acknowledge that there are many different research questions in education and that different research questions call for different methods. There needs to be a realistic and critical assessment of the limits of randomized-controlled trials and of the relatively narrow forms of knowledge that can be gained from their use (Phillips, 2009). Investigations that address the rich range of questions falling outside the realm of randomized-controlled trials need to be supported as well: the mechanisms through which parents influence children's schooling experiences, the micro-interactional patterns that build trust among school personnel, and the political and organizational impediments to reform, to name a few.

The School is a Complex Social Setting, Not a Lab

The kind of clean experimental manipulation of "conditions" or "treatments" called for in experimental research is hard to achieve in real-world settings outside the laboratory. Nowhere are these problems more apparent than in the implementation of randomized-controlled trials in the "naturalistic" settings of schools. Schools are complex social environments in which it is impossible to "control" the wide range of conditions that influence the delivery of services. Even strong advocates of randomized-controlled trials admit that troubles surface in such studies (although they remain confident of the results). For example, in a randomized-controlled trial of the implementation of James Comer's program for at-risk youth in Chicago, Thomas Cook and colleagues acknowledged that some principals in treatment schools did not embrace the program, that principals in "control" schools implemented parts of the program, that one-fifth of the schools dropped out, and that some observers concluded the program was not systematically implemented in any of the schools (Lareau, 2009; Cook et al., 2000). In addition, the reform took place in a context of frequent changes in principal leadership, high levels of teacher turnover, shifts in administrative policy, and the placement of one-sixth of the city's schools on probation (Cook et al., 2000). This appears to be a typical scenario for the introduction of educational reforms (Hubbard, Stein, and Mehan, 2006). But randomized-controlled trials assume, by design, that "treatment" and "control" settings are alike in every respect except the intervention itself; it is this assumption that allows such trials to attribute differences in key outcomes to the intervention. Unfortunately, educational research usually cannot satisfy these crucial assumptions, and the violations raise questions about the validity of the results.

In short, there is a significant gap between the ideal of the randomized-controlled trial and the reality of research in the real world. If implementation is so problematic, why not adopt more realistic expectations of what researchers can accomplish? Why set forth funding standards that, realistically, cannot be met? We would prefer a broader, more inclusive set of goals for fundable research, one that embraces many different methodological approaches and reflects a realistic sense of what researchers can actually accomplish given the turbulent, complex, and often chaotic conditions for carrying out research in schools today. Unfortunately, many federal grant programs offer overly narrow funding guidelines that are in tension with this range of methodological approaches (Becker, 2009; Ragin, Nagel, & White, 2004).

Dubious Link between Evidence and Policy

A key underlying assumption of "evidence-based policy" is that good research leads to good policy and practice. This is studiously naïve. In part, the assumption rests on the assertion that the key impediment to good policy decisions is a shortage of good empirical evidence in education. Ironically, this very assertion is undermined by the weight of empirical evidence: research on policy development shows that research findings, information, and statistical facts typically play a very small role in shaping policy. Instead, political factors generally carry the day. In the case of education, structures such as the decentralization of authority, the power of stakeholders such as business roundtables and teachers' organizations, political values of individualism, and legal systems have played a pivotal role in the development of policies on topics such as charter schools, class size, desegregation, and students' rights (see Kelman, 1988; Kingdon, 2002; McDonnell, 2007; Stone, 2001). Educational policymaking proves no exception to these general patterns.

Even under the best of circumstances -- that is, strong evidence on which political decisions could be based -- good evidence would likely play only a small role in policymakers' decisions. As Carol Weiss (2007) said, "policymakers at federal, state, and local levels have not displayed concerted eagerness to be guided by research" (p. 286). That's not to say that policymakers do not invoke science in support of their positions. But the claim of "evidence-based policy" generally has the sequence backwards. Rather than choose their positions based on the preponderance of evidence, policymakers use scientific evidence as "just one more resource... as they attempt to balance among competing interests in an essentially political environment" (McDonnell, 1988, p. 91). As Chester Finn put it, research findings are most likely to be used "as an 'arsenal' for those already fighting the policy wars" (Finn, 2008). The selective use of research findings on charter schools and vouchers by policy activists is a case in point. Hence, the quest for "evidence-based policy" is at best misguided. At worst, it is an attempt to use a cloak of scientific legitimacy to obscure the political motivations behind policymaking.

Final Thoughts

The Secretary of Education, Arne Duncan, has proclaimed that "we have a perfect storm for [educational] reform" (2009). As policymakers move forward, we urge them to make a deliberate and thoughtful correction to the overly restrictive policies of the past. Only by moving beyond the privileging of randomized-controlled trials -- with their potential for narrow answers to complex social problems -- can we make genuine progress in this important area. We need richer, more varied, and more meaningful research results. We need realistic assessments of the kinds of research we can, and cannot, undertake. And since research findings typically play only a limited role in policy decisions, we should also consider how the decision-making process itself might be made more responsive to good research.

References

Becker, H.S. (2009). How to find out how to do qualitative research. Retrieved March 3, 2010, from http://ijoc.org/ojs/index.php/ijoc/article/viewFile/550/329

Cook, T.D., Murphy, R.F., & Hunt, H.D. (2000). Comer's school development program in Chicago: A theory-based evaluation. American Educational Research Journal, 37(2), 535-597.

Duncan, A. (2009). Address to the Governors Educational Symposium, June 14, 2009. Retrieved March 3, 2010, from http://www.ed.gov/news/speeches/2009/06/06142009.pdf

Finn, C.E., Jr. (2008). Contribution to Spencer Foundation essays, Research that has had an impact on practice and/or policy. Retrieved March 3, 2010, from http://www.spencer.org/content.cmf/grant-effectiveness-project

Hubbard, L., Stein, M.K., & Mehan, H. (2006). Reform as learning: When school reform collides with school culture and community politics. New York: Routledge.

Kelman, S. (1988). Making public policy: A hopeful view of American government. New York: Basic.

Kingdon, J.W. (2002). Agendas, alternatives, and public policies. (2nd ed.). New York: Longman.

Lareau, A. (2009). Narrow questions, narrow answers: The limited value of randomized control trials for education research. In P. B. Walters, A. Lareau, & S. Ranis (Eds.), Education research on trial: Policy reform and the call for scientific rigor (pp. 145-163). New York: Routledge.

McDonnell, L.M. (2007). The politics of education: Influencing policy and beyond. In S.H. Fuhrman, D.K. Cohen, & F. Mosher (Eds.), The state of education policy research (pp. 19-39). Mahwah, NJ: Lawrence Erlbaum Associates.

Obama, B. (2009). Inaugural address (January 20, 2009). Retrieved March 3, 2010, from www.whitehouse.gov/blog/inaugural-address

Phillips, D.C. (2009). A Quixotic quest? Philosophical issues in assessing the quality of education research. In P. B. Walters, A. Lareau, & S. Ranis (Eds.), Education research on trial: Policy reform and the call for scientific rigor (pp. 163-195). New York: Routledge.

Ragin, C., Nagel, J., & White, P. (2004). Workshop on scientific foundations of qualitative research. Washington, DC: National Science Foundation. Retrieved March 3, 2010, from www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf

Schorr, L.B. (2009, August 25). Innovative reforms require innovative scorekeeping. Education Week. Retrieved March 3, 2010, from http://lisbethschorr.org/doc/Innovativereformsaug2009

Stone, D. (2001). Policy paradox: The art of political decision making. (Revised ed.). New York: WW Norton.

U.S. Department of Education. (2002). Education Sciences Reform Act, passed on January 23, 2002 by the 107th Congress, HR 3901-4, Washington, DC. http://www.ed.gov/policy/rschstat/leg/PL107-279.pdf

U.S. Department of Education. (2009a). Request for Applications, Postdoctoral Research Training Program in the Education Sciences, Institute of Education Sciences, CFDA Number 84.305B. Retrieved March 3, 2010, from http://www.ies.ed.gov/funding/pdf/2010_84305B.pdf

U.S. Department of Education. (2009b). National Board for Education Sciences, Minutes of July 27, 2009 meeting. Washington DC. Retrieved March 3, 2010, from http://ies.ed.gov/director/board/minutes/minutes07_27_09.asp

Walters, P.B. (2009). The politics of science: Battles for scientific authority in the field of education research. In P.B. Walters, A. Lareau, & S. Ranis (Eds.) Education research on trial: Policy reform and the call for scientific rigor (pp. 17-50). New York: Routledge.

Walters, P.B., Lareau, A., & Ranis, S. (Eds.). (2009). Education research on trial: Policy reform and the call for scientific rigor. New York: Routledge.

Weiss, C.H. (2007). Can we influence education reform through research? In S.H. Fuhrman, D.K. Cohen, & F. Mosher (Eds.), The state of education policy research (pp. 281-287). Mahwah, NJ: Lawrence Erlbaum Associates.

— Annette Lareau & Pamela Barnhouse Walters
Teachers College Record
3/01/10
http://www.tcrecord.org/Content.asp?ContentID=15915



