All Assessments Are Not Created Equal
Ohanian Comment: I'm with this teacher most of the way, but I can't fathom his conclusion about the testing. Does it really work out okay because all the district tests are so messed up? That said, speaking as a teacher, I can also offer first-hand testimony to the lunacy of the exams that result when teacher committees write them. An interesting, not-nice psychology seems to take over, with teachers competing to write clever questions. I experienced the same phenomenon while taking my M.A. oral exams--lots of oneupmanship among the professors on my committee.
I very much respond to the "key to the city" and the Chilean miner anecdotes. When I taught in an alternative high school, I subscribed to Time Magazine, among several others. My inner-city students could read the words but did not have a clue about the meaning of the articles. They were totally lost.
by Matt Amaral
In today's climate of high-stakes testing, I haven't heard too many people talking about the tests themselves. For example, who is making these tests, and what are some problems we're having with them here at the ground level?
I'll tell you some concerns I have. As dire as some of them may seem, there is a silver lining.
Last year, an essay question on the STAR test asked something like this: "What is the biggest thing you would change if you were given the key to your city?" The problem a lot of students had with this question, aside from being really bad at writing, was that they didn't have any idea what having a key to a city meant. They didn't have the cultural capital, language proficiency, or life experience to know what that expression meant. So this prompt immediately relegated a large group of students to failure because they didn't understand what the question was asking of them.
Detractors might say something like, "Well, that's ridiculous. Those kids obviously aren't even attempting to learn. How can they not know what a key to a city is? They have never had an interest in their own education." Once again, it seems that society at large, and even the test makers, assume these kids know a lot more than they do.
Let me put it this way. When the Chilean miners were being rescued, I showed my students some of CNN's footage from my computer. Guess what? Over half of the students had no idea who the Chilean miners were. We sometimes forget how young and inexperienced these students are. We assume everyone has 900 channels, but you have to remember, many of these kids can't afford cable. Hardly any of them have computers at home, and even fewer have one hooked up to a printer that has ink. Internet? Forget about it.
We forget that while corporations and the rich are becoming more technologically advanced, the poor are lagging very far behind.
Now, I'm not arguing for dumbing down test questions to their level, so to speak. Just because some of them spell their names wrong doesn't mean our assessments should assess nothing more than getting the date right. I like the idea of thinking about what you would change in your city; I think that is a good writing prompt. I just think you need to make sure low-income students, ELL students, and students who can't afford computers, the internet, or even a television that is simply plugged into a wall understand the idioms and expressions you're using.
Last month, my district gave the same assessments in every high school. So every 9th grader in my English classes took the same assessment for House on Mango Street as every other 9th grader in the other two high schools in my city. Now I'm not even going to go into the fact that some teachers didn't even teach that book, or that some classes don't even have teachers yet (yes, still going on today, December 8th, at my school). I'm just going to talk about the fact that after we assessed them and uploaded their scores into the District database, a group of English teachers in my department got together to assess how they did on certain questions. What we found was a bit depressing. In our opinion, 8 of the questions were wrong: either they were worded badly, had more than one correct answer, the given answer was wrong, or there was a better answer-- you get the picture. Now, on a test with fewer than 30 questions, how can we possibly assess anything if 8 of them are wrong?
So who is making these tests? Well, in our case, the tests were thought up by actual teachers at multiple sites, along with some district officials. Teachers (me included) took the tests beforehand and gave feedback as to which questions were wrong or needed work. The problem there was that many of us differed about what was wrong with the tests and what needed to be reworded. Yet the finished product was still a test with many flaws.
What that tells me is that this profession needs a greater focus on writing test questions and prompts, and on examining whether questions are really assessing what we want them to.
Here's the kicker. They also need to be age appropriate and culturally sensitive.
So with all the things it takes to write a good prompt, or design a decent test, I just don't think we have enough experts out there who are good at this.
Here's a quick but pathetic anecdote. A friend of mine got a job working for the education company in charge of monitoring my high school when we were taken over under NCLB. He was hired because of his background in business and sales. He told me that at one point HE was helping write assessments-- a man with absolutely no background in education. We both laughed about it. Then I went to work and put my head in my hands.
I am skeptical about every single assessment I give out, whether it's a STAR test for NCLB or a District Assessment. I have serious reservations about what exactly we are assessing, and I am worried about how this will reflect on students and teachers.
I am all for increased teacher evaluation and accountability. But it is things like this that hurt our fight to show evaluations can work. How can you evaluate a teacher on a 28-question test when 8 of the questions are wrong? Instead of the class average being 70%, it is now 60%. If you compare that with how your students did last year in 8th grade, all that's going to show is that in 9th grade (when they had YOU as a teacher), their scores dropped from Proficient to Basic. And five years from now, when we have REAL evaluations, those kinds of mistakes in the assessment itself will be forgotten; they'll just see how large chunks of kids lost headway under your watch.
Okay, here's the good news.
As far as I can see, even though a lot of these assessments are horrendous, they are pretty consistent. That is, they are consistently horrible-- they always have many, many mistakes-- so we can just hope that all of the tests we are giving have a quarter of the questions wrong. In the end, the smartest kids still score the highest, and the struggling learners still struggle. These irregularities are not completely destroying our assessments of these kids. The danger is when the little problems result in bigger classifications-- going from Proficient to Basic. That is a huge jump, even if it might only be a few points.
So our next big assessment is coming up. I just took the proposed exam and scored around a 78. That's okay, because all the tests the students have been taking every year are this messed up, so hopefully it won't reflect on what I'm teaching them.
Although I wish it would.
Matt Amaral is a writer and high school English teacher from the San Francisco Bay Area. He received his undergraduate degree in English Literature from the University of California at Davis and an MFA in Creative Writing from National University. Matt is a featured Blogger at EducationNews.org, a leading international website for Education, and New America Media, the nation's leading ethnic news organization. He is the former Editor-In-Chief of The Gnu Literary Journal. You can also read his work in the 2010 issues of TeachHub, EmPower Magazine, The Dirty Napkin, Diverse Voices Quarterly, Eclectic Flash, Bird's Eye ReView, TravelMag, Escape From America Magazine and InTravel Magazine.