

The Trouble With Research, Part 3

Susan Notes: Nothing works for everyone. What a notion in today's climate, where whole districts insist that teachers not only use the same book but that they be on the same page every day.


Subscribe to Phi Delta Kappan. Then you can read Bracey's research every month. Info at 800-766-1156.

AFTER DEVOTING the March and April columns to various kinds of "trouble" with research, I thought I was done with it. But I had yet to encounter a December 2003 publication from the U.S. Department of Education (ED) titled Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide.

This guide, while certainly user-friendly, comes as close as anything I've ever seen in nearly 40 years in the field to establishing an orthodoxy. ED now speaks ex cathedra. In "The Trouble with Research, Part 2," I wrote: "There are at least three problems with holding up randomized experiments as the ideal for educational researchers to strive toward." First, we research sentient and volitional beings; second, we have a tough time generalizing across sites; and third, "idealizing the randomized experiment . . . defines science as method." I quoted eminent scientists emphasizing the openness of science to all forms of inquiry.

Yet in its guide ED exalts the randomized field study to the detriment of all other designs. And it insults educational researchers in the process. Here's an example:

A randomized controlled trial of the Summer Training and Education Program . . . found that the program's short-term impact on participants' reading ability was positive. Specifically, while the reading ability of the control group members eroded by a full grade-level during the first summer of the program, the reading ability of participants in the program eroded by only a half grade-level.

If a pre-post design rather than a randomized design had been used in this study, the study would have concluded erroneously that the program was harmful. That is, the study would have found a decline in participants' reading ability and attributed it to the program. In fact, however, the participants' decline in reading ability was the result of other factors -- such as the natural erosion of reading ability during the summer vacation months -- as evidenced by the even greater decline for members of the control group. (Emphasis in original, p. 2.)


The "natural erosion of reading ability"? A squishier, less scientific concept is hard to imagine, and, obviously, ED has not been reading my columns about who suffers summer loss and why. (See Research, March and September 2002.) I cannot imagine any researcher today conducting such a pre-post study. Any researchers who did would know that, when they reported the results, the first question tossed their way would be, "Why didn't you have a control group?" If the answer turned out to be that circumstances made it impossible, the researchers would note that fact and caution readers about interpretations in the absence of a control group. The either/or contrast between pre-post studies and randomized field trials is bogus.
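The logical flaw ED describes can be sketched numerically. Here is a minimal simulation with invented numbers (none of these figures come from the actual Summer Training and Education Program study): every student loses ground over the summer, but the program cuts the loss roughly in half, so a pre-post comparison makes the program look harmful while a controlled comparison makes it look beneficial.

```python
import random

random.seed(1)

# Hypothetical illustration only -- all numbers are invented.
# Both groups decline over the summer; the program halves the decline.
pre          = [random.gauss(100, 10) for _ in range(500)]
treated_post = [s - 5 + random.gauss(0, 3) for s in pre]          # lose ~5 points
control_pre  = [random.gauss(100, 10) for _ in range(500)]
control_post = [s - 10 + random.gauss(0, 3) for s in control_pre]  # lose ~10 points

mean = lambda xs: sum(xs) / len(xs)

# A pre-post design sees only the treated group's decline (about -5),
# and so would call the program harmful...
pre_post_change = mean(treated_post) - mean(pre)

# ...while a comparison against the control group reveals the program
# left participants about 5 points better off than they would have been.
effect_vs_control = mean(treated_post) - mean(control_post)
```

Same data, opposite conclusions: the control group is what distinguishes "students declined" from "students declined less than they otherwise would have."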

There is another dichotomy in the ED report that is implicit but more insidious. The dichotomy reflects a dangerous but necessary assumption for a department that establishes a "What Works" clearinghouse predicated on a one-size-fits-all pedagogy: treatments work or they don't.

But nothing works for everyone. This is true even in pharmaceuticals, whose randomized field trials ED repeatedly holds up as the gold standard for educators to emulate. Consider this: "The vast majority of drugs -- more than 90% -- only work in 30% or 50% of the people." These words did not emanate from some Web-based alternative medicine guru. They were uttered in London at almost precisely the same time that ED published its guide, and they were spoken by Allen Roses, a geneticist and vice president of GlaxoSmithKline (GSK), the fourth-largest pharmaceutical corporation in the world.

Some gasped. Others, according to the Independent, thought he "deserved credit for being honest about a little publicized fact known to the drug industry for many years." Roses' comments received much attention in Britain, virtually none here.

Actually, the effectiveness rate is probably worse than Roses stated. When drug companies pay for the trials, an increasingly common practice, the trials are much more likely to show positive effects; New York Attorney General Eliot Spitzer has sued GSK for not reporting trials that showed no effect or showed negative effects; and serious questions have been raised about the fact that scientists at the National Institutes of Health (freed since 1995 to be paid consultants) have been pocketing fat fees from drug companies. Whatever the research design, the research can be no better than the integrity of the researchers conducting it.

So suppose that we educational researchers abandon all designs save randomized experiments. Will that improve practice? Only if we can devise wildly more successful educational treatments in the future than we have in the past 100 years. Imagine a study that compares Treatment A and Treatment B and finds an effect size of +.30 for Treatment A, an effect size generally thought to have practical implications. Should schools adopt Treatment A? That depends. What if Treatment A costs three times as much as Treatment B? Or what if it involves much more teacher training? The need for judgment is never removed by statistics.

More important, if we graphed the distribution of scores from Treatment A and Treatment B, with an effect size of +.30, we would find that the distributions overlap considerably. This means that while, on average, Treatment A was superior, for some kids it was no better than Treatment B. Indeed, for some kids it was actually worse. The teacher's task is to use her wisdom, experience, and hunches to try to determine which treatment will work for this child and that child, and a randomized field trial doesn't help her very much with that task.
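The extent of that overlap can be quantified. Under a common simplifying assumption (two normal score distributions with equal variance, separated by an effect size of d = 0.30), a short calculation shows just how modest the average advantage is:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d = 0.30  # the effect size from the text, in standard-deviation units

# Probability that a randomly chosen Treatment A student outscores a
# randomly chosen Treatment B student: barely better than a coin flip.
prob_superiority = phi(d / sqrt(2.0))   # ~0.58

# Overlapping coefficient: the share of the two score distributions
# that coincide.
overlap = 2.0 * phi(-d / 2.0)           # ~0.88
```

With roughly 88% of the two distributions overlapping, knowing the average effect tells a teacher little about which treatment will serve a particular child.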

At the annual meeting of the American Educational Research Association last April, David Berliner elaborated on these points, noting that American students in low-poverty schools are world class in international comparisons while American students in high-poverty schools are Third World. (See Research, June 2003, for these outcomes.) Berliner asked, "How in the world will higher-quality educational research and research more relevant to practitioners fix the problems we have if the problem is not just about instruction in the schools but exists also in the funding formulas?"

Paraphrasing other Berliner questions: Or what if the problems exist because teachers in poor schools are inexperienced or teaching out of field or because poor schools must hire teachers without full certification or because the students in poor schools suffer high rates of asthma and are absent too much of the year?

Berliner suggested that we adopt the engineers' use of design experiments: working backward from our goals and "tinkering with the system to get to our goals." He warned, though, that, if the goal is merely to get higher scores on inexpensive tests, "We will simply end up designing, once again, the best system for the 19th century" (which seems to be the goal of Achieve, Inc., and of the Business Roundtable).

Preceding Berliner at the podium was Judith Ramaley of the National Science Foundation. Early in her speech, Ramaley contrasted the method-driven approach to "science" espoused by ED with the principles of scientific research issued by the National Research Council, which apply to any field, including education:

1. Pose significant questions that can be investigated empirically.

2. Link research to relevant theory so that, over the long term, scientific inquiry can generate theories that can offer stable explanations of phenomena that generalize beyond the particular.

3. Use methods that permit direct investigation of the question.

4. Provide a coherent and explicit chain of reasoning, based on what is known and what is observed, that leads from evidence to theory and back again.

5. Replicate and generalize across studies.

6. Disclose research to encourage professional scrutiny and critique. Such ongoing, collaborative, and public discussion and evaluation is a sign of a healthy scientific enterprise.

These are appropriate principles that relate to questions to be asked, not methods to be followed. Principles 3 and 5 seem to me especially difficult for educational research. We often must approach investigations obliquely, and replication, while a requirement in the natural sciences, is little valued in education. (Translation: you don't get promotion and tenure from doing replications.)

Like Berliner, Ramaley expressed interest in "design research," but she noted that educators are not likely to get much help from others in improving achievement: "Most disciplinary faculty in the arts and sciences and engineering are simply not interested in thinking about education in their disciplines or in K-12 or in general education at the undergraduate level." I agree, but this remains as mysterious to me now as it was when I noticed it 25 years ago. After all, K-12 students later become the majors and graduate students in arts, sciences, and engineering. Whatever develops their talents in elementary and secondary school should make professors' lives more intellectually rewarding later on. As Harold Hodgkinson put it some years ago in the title of a booklet, it really is All One System.

Clearly, Ramaley has a broader conception of a good teacher than the current one at ED, which values high verbal skills and high content knowledge and is generally indifferent to pedagogical skills. "It is rare," she said, "to find an expectation that teachers will also be scholars, contributors to their profession, and participants in a culture of scientific inquiry. This must change."

But will it? I am doubtful. These ideas have been around since Dewey. They were given a powerful boost in 1967 when Robert Schaefer delivered, appropriately enough, the Dewey Lecture at Teachers College. It was titled "The School as a Center of Inquiry" and later published as a book with the same title. Schaefer opened by saying that virtually none of the research conducted at universities makes it through the schoolhouse door. This sad state of affairs derived from teachers' roles. Teachers, Schaefer argued, had assumed a role as consumers of knowledge, and the knowledge wasn't always relevant to their professional lives. They needed to become producers of knowledge.

While Schaefer's ideas have continued to percolate over the years in such formulations as action research, professional development schools, teachers-as-researchers, and Donald Schön's "reflective practitioners," they haven't yet reached the whole profession. Indeed, as the testing juggernaut has rumbled across education, they've probably become more marginal than ever. And that's too bad.




--------------------------------------------------------------------------------
GERALD W. BRACEY is an associate for the High/Scope Foundation, Ypsilanti, Mich., and an associate professor at George Mason University, Fairfax, Va. His most recent book is On the Death of Childhood and the Destruction of Public Schools: The Folly of Today's Education Policies and Practices (Heinemann, 2003). He lives in the Washington, D.C., area.

— Gerald W. Bracey
Phi Delta Kappan

http://www.pdkintl.org/kappan/k_v86/k0409bra.htm



