Judge Rules in Favor of Releasing Teacher Test Scores; Data Dump Would Promote a Flawed and Cynical Method of Accountability
The judge's decision reveals that whether the data is good or not is irrelevant. She states clearly that she doesn't care about "the value, accuracy, or reliability" of the data. In her words:
This court is not passing judgment on the wisdom of the decision of the DOE . . . nor is this court making any determination as to the value, accuracy, or reliability of the TDRs. This court is deciding the only issue before it, the purely legal issue under Article 78 of whether the DOE's decision was without a rational basis, rendering it arbitrary and capricious.
It looks like the union sued on the wrong grounds.
The judge also ruled that release of this data isn't an "unwarranted" invasion of privacy "since the data at issue relates to the teachers' work and performance" and "does not relate to their personal lives." In other words, the Department of Education can't release teachers' phone numbers but can release statistics on how their students do on flawed standardized tests.
Various newspapers had filed Freedom of Information requests to get this information. Here's the New York Daily News headline on the judge's decision: "Judge rightly blocks Mulgrew from using UFT muscle to keep teacher performance reviews from parents." And here's the New York Post: "Open schoolbooks on teacher grades: judge; UFT loss is parent win."
A New York Post reader suggested that STUDENT/PARENT data be posted next to test data.
This includes discipline records of students, attendance of students, rap sheets of parents, how many are on any form of welfare, single parents, race, legal/illegal students...
You can see where the Blame Game leads. What we need to remember here is this: the problem is poverty. Saying this is not making excuses; it is facing reality. Norm Scott's headline is good. Value-added teacher evaluation is cynical as well as flawed. It is a diversion from the real issue:
By Norm Scott
Would you gauge the effectiveness of individual doctors by the percentage of patients who live or die under their care? Should firemen be held accountable when a building burns down? Should individual soldiers in Afghanistan be compared to each other on the basis of "success" or "failure" in controlling the Taliban in a given area?
Any effort to do so would spark a major outcry. But when it comes to teaching, there is a different standard.
On Monday Manhattan Supreme Court Justice Cynthia Kern ruled that the NYC Department of Education was obliged to release the names of individual teachers along with the "value-added" test score results that purport to measure teacher effectiveness. Judge Kern brushed aside arguments by the United Federation of Teachers (UFT) that the release of unreliable data would unjustly harm teachers' reputations, writing that "there is no requirement that data be reliable for it to be disclosed."
The data dump will affect more than 12,000 classroom educators in Grades 4 to 8. The UFT is expected to appeal. There is some irony here: it was the UFT that signed off on the use of value-added in the first place, after Joel Klein promised the results would not be made public, even as skeptical critics within the union questioned that deal and warned it would turn into a disaster for teachers and the union.
If the value-added data is ultimately released, expect a feeding frenzy as teachers are judged and shamed on an individual basis in the media. The larger purpose of such a data dump by DOE would be to further erode public support for teachers and force their union to renounce a seniority-based system just as Mayor Michael Bloomberg and his new Schools Chancellor Cathie Black are talking about having to lay off thousands of teachers due to budget shortfalls.
Ironically, it was only six months ago that the NY State Department of Education revealed that years of test score advances by city students had turned out to be a mirage, causing Bloomberg and his former Chancellor Joel Klein a good deal of embarrassment. No matter -- the Mayor is ready once again to wield unreliable test scores as a political weapon, and most media in this city have deliberately short memories, having all too often been active partners in attempts to eviscerate teachers.
The value-added approach is the latest attempt to undermine teachers, the teaching profession and the teacher union by measuring teachers based on the performance of their students on standardized tests from year to year.
Crucial backing for such initiatives has come from private foundations led by billionaires like Bill Gates and Eli Broad who assert that data-driven models in the private sector can be transferred to public schools.
Their dream of using data to accurately measure the effectiveness of individual teachers is rooted in a vision of the school as a factory in which teachers are assembly line workers and rising student test scores equals rising workforce productivity. At long last, value-added supporters claim, good teachers will be rewarded and the poor ones forced to improve at risk of losing their jobs.
In reality, value-added measures are seriously flawed. They don't fully account for external circumstances, such as poverty or family turmoil, that can affect a child's performance from year to year. Nor can they account for the fact that the same child can take tests on different occasions and under different conditions, and the results will differ.
A study by Mathematica Policy Research done for the US Department of Education showed that one-fourth of average teachers will be mistakenly identified for special rewards while one-fourth of teachers who differ from average performance by three to four months of student learning will be overlooked.
A recent study by Sean Corcoran of NYU demonstrated that the New York City teacher data reports have an average margin of error of 34 to 61 percentage points out of 100. The National Academy of Sciences has also warned of the potentially damaging consequences of implementing these unfair and inherently unreliable evaluation systems. Even the NYC Department of Education's own consultants have warned against using data for teacher evaluation.
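The misclassification these studies describe follows directly from measurement noise. The following is a rough, hypothetical illustration (not the methodology of the Mathematica or Corcoran studies; the spread and noise figures are invented for the sketch): when the error in a single year's estimate is comparable to the spread of true teacher effects, ranking teachers by measured score reliably rewards some average teachers and overlooks some genuinely strong ones.

```python
import random

random.seed(0)

# Hypothetical setup: each teacher has a "true" effect, expressed in months
# of student learning relative to average, but the measured value-added
# score adds substantial noise. Both standard deviations are assumptions
# chosen only to illustrate the mechanism.
N = 10000
TRUE_SD = 2.0    # assumed spread of true teacher effects
NOISE_SD = 3.0   # assumed measurement error in one year's estimate

true_effect = [random.gauss(0, TRUE_SD) for _ in range(N)]
measured = [t + random.gauss(0, NOISE_SD) for t in true_effect]

# "Reward" the top quarter of teachers by measured score.
cutoff = sorted(measured, reverse=True)[N // 4]
rewarded = [m >= cutoff for m in measured]

# Strong teachers: truly 3+ months above average. Average teachers:
# within 1 month of average.
strong = [t >= 3.0 for t in true_effect]
average = [abs(t) < 1.0 for t in true_effect]

overlooked = sum(1 for s, r in zip(strong, rewarded) if s and not r) / sum(strong)
mistaken = sum(1 for a, r in zip(average, rewarded) if a and r) / sum(average)

print(f"strong teachers overlooked: {overlooked:.0%}")
print(f"average teachers rewarded:  {mistaken:.0%}")
```

Under these assumed numbers, both error rates come out well above zero: no amount of careful ranking can separate signal from noise of this size with one year of data.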
Value-added measures are not only error-prone; they also give teachers incentives to manipulate scores by devoting large amounts of classroom time to test preparation or by engaging in various forms of cheating. To the extent a teacher cuts corners one year to deliver improved test scores, a student's next teacher will face that much greater a challenge to deliver similar or even better results.
Teachers under the gun of having their very livelihood threatened will be very careful about working with troubled children who could drag down their value-added ratings. Accountable Talk wrote about dealing with a request to take a class full of difficult students:
I did something I am still not proud of. I quit. No, I didn't quit teaching. I just quit volunteering to teach the very children who needed me most. When my AP [assistant principal] asked me to take them on again (which he would not do unless he knew I'd been successful), I said no. This year, those kids are with another teacher who has difficulty just getting them to sit in their seats.
One of the political goals of the value-added approach is to break teacher unity by pitting teachers against each other. The competitive, zero-sum logic of value-added also undermines the spirit of collaboration that is essential to refining and developing the craft of teaching. If sharing tips with fellow teachers helps them improve their value-added rankings at your expense, is it prudent to reach out and help colleagues you are competing with?
The downside of a value-added approach doesn't faze leading proponents like Eric Hanushek, a Hoover Institution economist who has written that teachers' scores should be made public even if they are flawed.
Several news organizations, including The New York Post, The New York Times and The Wall Street Journal, have filed Freedom of Information requests for New York City teacher test score data, with which the normally secretive NYC Department of Education has been eager to comply.
Still, it is wise to remember that not everything that counts can be measured and not everything that can be measured counts.
Norm Scott worked in the New York City public school system from 1967 to 2002. He publishes commentary about current issues in New York City public education at http://www.ednotesonline.blogspot.com