David Berliner Discusses Common Core Assessment Consortia
NOTE: This is from Larry Ferlazzo's Classroom Q&A blog at Education Week. David Berliner discusses the two consortia developing assessments aligned with the Common Core State Standards.
by David Berliner
I see problems with both consortia. First, I think the Partnership for Assessment of Readiness for College and Careers (PARCC) is doing too much testing. As I understand it, they plan three high-stakes testing sessions during the school year and then a high-stakes end-of-year summative assessment in both reading/language arts and mathematics. That is four high-stakes tests in each of two subjects, or eight high-stakes tests a year! I think this is nuts! It says to me that none of the designers of this testing system understand the realities of classroom teaching. This approach is sure to produce eight anxiety-ridden times a year for students and their teachers, as well as eight days lost to testing and many more days lost to preparing for the tests. Furthermore, this is much more likely to occur in the schools that serve our poorer children, producing a high likelihood of boring many of our neediest children to death at an early age.
I see another potential problem in that both consortia plan adaptive tests, particularly the Smarter Balanced Assessment Consortium (SBAC). It is nice that all the test developers are pledging not to use so many multiple-choice test items. But adaptive testing works really well with multiple-choice items. As I understand it, adaptive testing is harder to do with other item formats, so I expect a heavy weighting of multiple-choice, low-cognitive-level, single-right-answer, cheap-to-produce items to be featured on these tests. This kind of testing has been used in America for a long time, but it has become particularly salient since NCLB was made law. This over-reliance on multiple-choice items for accountability may be the reason why there are some credible claims that the USA is engaging in creaticide--the deliberate destruction of creativity in our youth through national educational policies (see Berliner, 2011).
Worst of all, however, is that both tests are designed to make inferences about what students know and can do, but they do not do a very good job of assessing what teachers taught well or poorly. Nevertheless, at the insistence of the Federal government and of politicians without psychometric knowledge, the tests will be used for both student and teacher assessment. The validity of the inference about student competencies vis-à-vis other students may be defensible, although a student's score will surely be heavily loaded with social class factors, invalidating that score as a credible measure of school and teacher effects. Given this situation, the validity of the inference about the value teachers add is sure to be quite faulty.
But it is worse: Items for these tests are likely to be picked to spread out the students, as in adaptive testing and in norm-referenced approaches to assessment. Traditionally, to do that, you pick items that have around a 50% probability of being answered correctly. But here is the craziness: The test developers are likely to throw out items reflecting standards that teachers have taught well. That is, items on which teachers have demonstrated effects, such that more than, say, 70% of students answer them correctly, are eliminated from the test battery because they appear too easy. This traditional approach to assessing students systematically eliminates items that reflect teaching skill, thus making inferences about teachers' skills invalid. Designing a test to validly infer student capability may prove to be a source of invalidity for inferring teacher capability!
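The item-selection logic described above can be made concrete with a minimal sketch. The item names, p-values, and the 70% cutoff below are illustrative assumptions, not taken from any actual consortium's specification; the sketch only shows the general norm-referenced procedure: discard items that too many students answer correctly, then favor items near the 50% difficulty that best spreads scores.

```python
def select_items(items, max_p=0.70):
    """Keep items whose proportion-correct (p-value) is at most max_p,
    ranked by closeness to the 0.50 difficulty that best spreads scores.
    Thresholds are illustrative, not from any real test blueprint."""
    kept = [item for item in items if item["p_correct"] <= max_p]
    return sorted(kept, key=lambda item: abs(item["p_correct"] - 0.50))

# Hypothetical field-test data: p_correct is the share of students
# answering the item correctly. The "fractions" item reflects a
# standard that was taught well, so most students get it right.
field_test = [
    {"name": "fractions (well taught)", "p_correct": 0.85},
    {"name": "ratios",                  "p_correct": 0.55},
    {"name": "exponents",               "p_correct": 0.48},
    {"name": "word problems",           "p_correct": 0.30},
]

selected = select_items(field_test)
print([item["name"] for item in selected])
# The well-taught fractions item (p = 0.85) is discarded as "too easy",
# so the evidence of effective teaching never reaches the final form.
```

The sketch makes the paradox visible: the item a skilled teacher's students ace is exactly the item the procedure removes, so the resulting test cannot register that teacher's effect.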
In sum, although I hope I am proved wrong, I currently see three worrisome factors emerging. These are the promotion of too much formal testing, too much reliance on cheap, fast-to-administer multiple-choice test items, and validity problems if the tests are to be used to judge teacher competence as well as student rank.
My reference is to a chapter I did recently:
Berliner, David C. (2011/in press). "Narrowing Curriculum, Assessments, and Conceptions of What it Means to be Smart in the US Schools: Creaticide by Design." Chapter 9 in D. Ambrose & R. J. Sternberg (Eds.), Dogmatism and High Ability: The Erosion and Warping of Creative Intelligence. New York: Routledge/Taylor & Francis.
David C. Berliner is Regents' Professor Emeritus at Arizona State University. He has written, coauthored, or edited more than 200 books, articles, papers, and chapters, among them The Manufactured Crisis: Myths, Fraud, And The Attack On America's Public Schools and Collateral Damage: How High-stakes Testing Corrupts American Schools.