

The DIBELS Tests: Is Speed of Barking at Print What We Mean by Reading Fluency?

Susan Notes: In addition to other concerns, Jay Samuels says that DIBELS misuses the term fluency. He says DIBELS doesn't measure fluency; it only measures speed.


by S. Jay Samuels, University of Minnesota, Minneapolis

As one of the reviewers of Brant Riedel's study of DIBELS, I reflected on his
revisions and findings. I concluded that his study was important for the
following reasons. He investigated the extent to which five DIBELS instruments
predict poor versus satisfactory comprehension; his literature review
provided a comprehensive overview of research attempting to validate DIBELS as a
test; and he pulled together reports that are critical of DIBELS.

In Riedel's study, students were administered the following tests: Letter
Naming Fluency, Nonsense Word Fluency, Phoneme Segmentation Fluency, Oral Reading
Fluency, and Retell Fluency. These tests were used to determine whether they could
predict students' first- and second-grade reading comprehension status. He
concluded that:

"If the goal of DIBELS administration is to identify students at risk for
reading comprehension difficulties, the present results suggest that by the
middle of first grade, administration of DIBELS subtests other than ORF is not
necessary. The minimal gains do not justify the time and effort."

Riedel's findings lead one to question whether the widespread use of DIBELS
tests other than Oral Reading Fluency is justified.

The DIBELS battery of tests, used to assess more than 1,800,000 students from
kindergarten to grade 6, aims to identify students who may be at risk of
reading failure, to monitor their progress, and to guide instruction.
With the widespread use of DIBELS tests, a number of scholars in the field of
reading have evaluated them, and not all of their evaluations have been
flattering. For example, Pearson (2006, p. v) stated,

"I have built a reputation for taking positions characterized as situated in
'the radical middle'. Not so on DIBELS. I have decided to join that group
convinced that DIBELS is the worst thing to happen to the teaching of reading
since the development of flash cards."

Goodman (2006), who was one of the key developers of whole language, is
concerned that despite warnings to the contrary, the tests have become a de facto
curriculum in which the emphasis on speed convinces students that the goal in
reading is to be able to read fast and that understanding is of secondary
importance. Pressley, Hilden, and Shankland (2005, p. 2) studied the Oral Reading
Fluency and Retelling Fluency measures that are part of DIBELS. They concluded
that "DIBELS mispredicts reading performance much of the time, and at best is
a measure of who reads quickly without regard to whether the reader
comprehends what is read."

If Riedel's conclusion that administration of subtests other than Oral Reading
Fluency is unnecessary for predicting end-of-first- and second-grade
comprehension, combined with the critical evaluations of DIBELS by some of our
leading scholars in reading, is not enough to raise the red flag of caution
about the widespread use of DIBELS instruments, I have an additional concern:
the misuse of the term fluency that is attached to each of the tests. Because
each of the tests is labeled as a fluency test, it is only fair to ask whether
that term is justified. I contend that, with the exception of the Retell
Fluency test, none of the DIBELS instruments is a test of fluency, only of
speed, and that the Retell Fluency test is so hampered by the unreliability of
accurately counting the stream of words the student utters as to make that test
worthless. Let us not forget that, in the absence of reliability, no test is
valid. To understand the essential characteristic of fluency, and what its
window dressings are, we must look to automaticity theory for guidance (LaBerge &
Samuels, 1974). At the risk of over-simplification, in order to comprehend a
text, one must identify the words on the page and one must construct their
meaning. If all of a reader's cognitive resources are focused on and consumed by
word recognition, as happens with beginning reading, then comprehension cannot
occur at the same time. However, once beginning readers have identified the
words in the text, they then switch their cognitive resources to constructing
meaning. Note that a beginning reader's strategy is sequential, first word
recognition and then comprehension.

How can we describe the reading process for students who have become
automatic at word recognition as the result of one or more years of instruction and
practice? For them, the reading process is different. When the decoding task is
automatic, the student can do both the decoding and the comprehension tasks at
the same time. The fluency section in the National Reading Panel report
(National Institute of Child Health and Human Development, 2000, p. 3-8) stated
precisely the same idea: "The fluent reader is one who can perform multiple
tasks – such as word recognition and comprehension – at the same time."

It is the simultaneity of decoding and comprehension that is the essential
characteristic of reading fluency. Secondary characteristics of fluency such as
speed, accuracy, and expression are indicators, but not the essential
characteristics. For example, I can read Spanish orally with accuracy and speed, but I
am unable to understand what I have read. Am I fluent in Spanish? No! Nor
does the ability to read nonsense jabberwocky with expression capture the
essential characteristic of fluency. Thus, one criticism I have of the DIBELS tests
is that, despite their labels, they are not valid tests of the construct of
fluency as it is widely understood and defined. They assess only accuracy and
speed. The creators of DIBELS are guilty of reification. By attaching the term
fluency to their tests, they create the false assumption that that is what their
tests measure. I have another criticism. As Riedel reports in his research,
about 15% of the students who take the Oral Reading Fluency test are
misidentified as good readers when, in fact, they have poor comprehension. These
misidentified students are often English-language learners who have vocabulary
problems that interfere with comprehension.

Almost all the validation studies for DIBELS have used a procedure that
mimics what beginning readers do when they read a text, but not what fluent readers
do. As I described (1994, p. 821) in my updated version of the LaBerge and
Samuels (1974) model of reading, beginning readers first decode the words in the
text. Having decoded the words, the reader then switches attention over to
getting meaning, a two-step process. In validating the DIBELS tests, the
researcher typically obtains a reading speed score on the DIBELS test and, at a
different time and on a different test, a comprehension score for the same
student. The two scores are then correlated, and under these conditions they
may correlate quite well. However, this two-step sequence is what beginning
readers do when they read, not what skilled readers do.
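To make the correlational logic concrete, the sketch below is a hypothetical
illustration in Python, using invented placeholder scores rather than any real
DIBELS data. It shows the kind of computation such validation studies rest on:
a Pearson correlation between a set of oral reading speed scores and a set of
comprehension scores collected separately.

    # Hypothetical illustration only: the scores below are invented
    # placeholders, not real DIBELS data.
    from scipy.stats import pearsonr

    # Invented words-correct-per-minute scores from a timed oral reading test
    speed_scores = [35, 52, 48, 70, 22, 61, 44, 58, 30, 66]

    # Invented comprehension scores for the same students, from a separate test
    comprehension_scores = [12, 18, 15, 24, 8, 20, 14, 19, 10, 22]

    r, p = pearsonr(speed_scores, comprehension_scores)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")

The point at issue is not this arithmetic but the testing conditions under
which the two sets of scores are obtained.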

What we need, instead, are tests that mimic fluent reading, that demand
simultaneous decoding and comprehension. To do that, the researcher must
inform students that, as soon as the oral reading is done, they will be
asked comprehension questions. Under these conditions, the student must decode
and comprehend at the same time. When such testing conditions are used, at
least one researcher (Cramer, in press) has failed to find a significant
correlation between oral reading speed and comprehension. Failure to find that reading
speed and comprehension correlate significantly makes sense when the reading
task demands simultaneous decoding and comprehension. If one reads too fast,
comprehension is compromised; thus, slowing down can improve comprehension.
Graduate students studying for exams know that reading speed can be the enemy of
comprehension.

Let me summarize my position. The most legitimate use of oral reading speed
is as Deno (1985) brilliantly conceptualized it: a way to monitor student
progress. However, the danger of using reading speed as the measure of progress is
that some students and teachers focus on speed at the expense of
understanding. One should note that the misuse of the term fluency is not found in Deno's
original groundbreaking work on curriculum-based measurement. The developers of
DIBELS would do the reading field a service by dropping the term fluency from
their tests. That said, there is a need to develop new tests that can
measure fluency developmentally across grades. To do this, the tests would
require that students simultaneously decode and comprehend texts that
increase in difficulty. Because the reading field already has a theory for
conceptualizing fluency, and the expertise for developing standardized tests, it
is time to move forward in developing theoretically and pedagogically sound
measures of fluency.


S. JAY SAMUELS is a professor of educational psychology and of curriculum and
instruction at the University of Minnesota (College of Education and Human
Development, Department of Educational Psychology, Burton Hall, Minneapolis, MN
55455, USA; e-mail samue001@umn.edu). He served on the National Reading Panel
and coauthored the fluency section of the panel's report. He is the recipient of
the International Reading Association's William S. Gray Citation of Merit and
the National Reading Conference's Oscar S. Causey Award, and is a member of
the Reading Hall of Fame.


References

Cramer, K. (in press). Effect of degree of challenge on reading performance.
Reading and Writing Quarterly.

Deno, S. (1985). Curriculum-based measurement: The emerging alternative.
Exceptional Children, 52, 219-232.

Goodman, K.S. (2006). A critical review of DIBELS. In K.S. Goodman (Ed.), The
truth about DIBELS: What it is, what it does (pp. 1-39). Portsmouth, NH:
Heinemann.

LaBerge, D., & Samuels, S.J. (1974). Toward a theory of automatic information
processing in reading. Cognitive Psychology, 6, 293-323.

National Institute of Child Health and Human Development. (2000). Report of
the National Reading Panel. Teaching children to read: An evidence-based
assessment of the scientific research literature on reading and its implications for
reading instruction (NIH Publication No. 00-4769). Washington, DC: U.S.
Government Printing Office.

Pearson, P.D. (2006). Foreword. In K.S. Goodman (Ed.), The truth about
DIBELS: What it is, what it does (pp. v-xix). Portsmouth, NH: Heinemann.

Pressley, M., Hilden, K., & Shankland, R. (2005). An evaluation of
end-grade-3 Dynamic Indicators of Basic Early Literacy Skills (DIBELS): Speed reading
without comprehension, predicting little (Tech. Rep.). East Lansing, MI:
Michigan State University, Literacy Achievement Research Center.

Samuels, S.J. (1994). Toward a theory of automatic information processing in
reading, revisited. In R. Ruddell, M.R. Ruddell, & H. Singer (Eds.),
Theoretical models and processes of reading (4th ed., pp. 816-837). Newark, DE:
International Reading Association.





— S. Jay Samuels
Reading Research Quarterly
2007-10-01



