The NY Times on Value-Added Scores
When the debate over releasing teacher data scores first began, I hoped that whatever the result, a thoughtful, thorough discussion of value-added scores would take place. The New York Times has done just that, and done it well. In a lengthy piece published December 26, Sharon Otterman dissects the pitfalls of the value-added system, as well as its merits.
One of the first issues with value-added scores is that they don't address every teacher. They don't even address most teachers. They affect teachers in grades 4-8 who teach math and/or ELA. This is a problem, but for the sake of this post, I'm going to focus on what value-added data means for the teachers who have teacher data reports.
Among the problems Otterman highlights is the inconsistency of the scores. Studies have shown that a teacher's ranking according to the data report is really just a best guess within a range of up to 35%. So a teacher who falls somewhere in the middle (like me) has a tough time knowing whether they're in the top, middle, or bottom third of teachers.
The value-added data reports are most consistent in highlighting the teachers in the top 10% and the bottom 10%. This may be frustrating for those of us residing in the murky middle of the rankings, and for principals and other stakeholders looking for clarity. For me at least, knowing that I'm not in the top 10% is enough information to push me to reform my practice. More importantly, identifying the most and least effective teachers presents a profound opportunity to change the teaching profession.
Currently, 97% of New York City teachers have Satisfactory evaluations. If you look at the system, or as a teacher you just look around your own school, you know this doesn't represent reality. Identifying only 3% of teachers as Unsatisfactory and lumping the rest of the city's teachers together as Satisfactory is demeaning to the teachers who know they are working harder and achieving better results.
Finding the top 10% and the bottom 10% with the help of value-added data is the first step in remedying this. Whether you believe those bottom 10% should be fired, or whether they should be designated for remediation (i.e., intensive professional development plus mentoring), something needs to be done to recognize the fact that there are teachers in the system who are ineffective. Meanwhile, whether the top 10% should receive a bonus or some sort of honorific like "master teacher," it is past time these teachers received credit.