ACT's irresponsible report. ACT joins the fear-mongers
Susan Notes: Gerald Bracey's new book is Reading Educational Research: How to Avoid Getting Statistically Snookered.
Here's the Table of Contents
1. Data, Their Uses, and Their Abuses
2. The Nature of Variables
3. Making Inferences, Finding Relationships: Statistical Significance and Correlation Coefficients
4. Testing: A Major Source of Data—and Maybe Child Abuse
You can read the Introduction, which includes the 32 Principles of Data Interpretation
It is highly irresponsible at this point to release a report on trends in student achievement and not present those trends by ethnic group, along with changes in the ethnic composition of the whole group over time. Even No Child Left Behind knows that. The makeup of this country has changed vastly: almost 10 million Hispanics were added to the nation between 1995 and 2005, and whites fell from 85 percent of SAT test-takers in 1981 to 63 percent in 2005.
Yet ACT has released just such a report, Reading Between the Lines. The report shows the proportion of ACT-tested students meeting its college reading benchmark rising from 1994 to 1999 (from 52% to 55%), then falling to its lowest point, 51%, in 2005. It also claims that NAEP trends corroborate the reading benchmark trend, with 17-year-old reading scores falling 5 points from 1992 to 2004.
ACT neglects to mention that the NAEP score in 2004 was identical to that in 1971: 285. I don't recall anyone in 1971 worrying that texts were declining because they contained too many graphics or too-simple English, that computers had removed the emphasis on print, or that Toni Morrison had replaced Shakespeare in the English canon (not surprising, since Morrison's first novel appeared in 1970).
ACT also doesn't mention that it cherry-picked the highest NAEP score. Reading scores for 17-year-olds have varied by only 5 points over the 33-year history from 1971 to 2004.
I don't know whether ACT ignores SAT trends because ETS is a rival or because the SAT trends contradict the ACT results. Here are the demographic changes in the SAT pool (ACT does not report them for its own test):
                1981    2005
whites            85      63
blacks             9      12
Asian              3      11
Mexican            2       5
Puerto Rican       1       1
Am. Indian         0       1
The numbers for 2005 do not sum to 100 because the College Board now uses two categories not present in 1981, when it first started releasing scores by ethnicity: Latino (4%) and Other (4%).
The change in the "national" average for the SAT, 1981-2005: 4 points
Change for ethnic groups:
whites +10 points
blacks +21 points
Asians +37 points
Mexicans +15 points
P. Ricans +23 points
Am. Indians +18 points
That the ethnic subgroups can show so much progress while the national aggregate shows so little is the result of a common phenomenon known as Simpson's Paradox. All groups have gained, but the lower-scoring groups have grown in size. When lower-scoring students, even improving ones, make up a larger share of the total pie, the national average is pulled down. Of course, discussing a falling average caused by adding more and more people who score low is quite different from saying that everyone is getting more stupid, which is what ACT is saying. Simpson's Paradox is explained, both for trends over time and for measures taken at a single point in time, on pages 62-67 of Reading Educational Research: How to Avoid Getting Statistically Snookered.
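The arithmetic behind Simpson's Paradox is easy to check for yourself. Here is a minimal sketch with made-up numbers (two hypothetical groups, not the actual SAT figures): every group's average rises, yet the overall average falls because the lower-scoring group grows as a share of all test-takers.

```python
# Simpson's Paradox sketch with hypothetical numbers:
# the overall average is a weighted mean of group averages,
# so shifting weight toward a lower-scoring group can drag
# the aggregate down even while every group improves.

def weighted_mean(group_means, group_shares):
    """Overall average = sum of (group mean * group's share of the pool)."""
    return sum(m * s for m, s in zip(group_means, group_shares))

# Hypothetical group means: both groups gain from year 1 to year 2.
means_year1 = [520, 420]   # higher-scoring group, lower-scoring group
means_year2 = [530, 440]   # +10 and +20 points, respectively

# The pool shifts toward the lower-scoring (but improving) group.
shares_year1 = [0.85, 0.15]
shares_year2 = [0.63, 0.37]

overall_year1 = weighted_mean(means_year1, shares_year1)
overall_year2 = weighted_mean(means_year2, shares_year2)

print(overall_year1)  # 505.0
print(overall_year2)  # about 496.7 -- lower, though every group gained
```

Both subgroups improve, yet the aggregate drops roughly 8 points, which is exactly the pattern Bracey describes in the SAT data.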
ACT devotes one six-line, numbers-free paragraph to claiming that "nearly all" ethnic groups show the same general pattern, but without the demographic changes taken into account, the ethnic data are not that meaningful.
ACT even manages to indulge in fear-mongering over what some might consider good news. Reporting results from PISA, ACT says "[PISA showed] nine countries ranking statistically significantly higher than the U. S. in average performance." Nine? Only nine? Out of 41 nations in the study? The language of fear pervades the document.
There is also some question about the validity of the benchmark. While those who met it did better in college than those who did not, 64% of those who did not meet it earned a C or better in a freshman U. S. History course, and 68% earned a C or better in psychology. At the start of the second year, 78% of those who met the benchmark were back on campus, compared to 67% of those who did not.
Is there some political motivation behind this? I don't know, but the references contain a lot of cites from those in favor: American Diploma Project, Business Roundtable, Fordham Foundation, National Association of Manufacturers, Education Trust, Mike Cohen, Jay P. Greene, Reid Lyon, and Sandra Stotsky.
If you want to see for yourself http://www.act.org gets you to a link, but putting "Reading Between the Lines" plus ACT in Google also gets you "A Libby-Cheney Conspiracy, Or Worse?" By John Dean.