When Students Give the Grades
An article in The New York Times detailed the findings of the Gates study, which show that student surveys are surprisingly effective at judging teacher quality as measured by value-added data. Teachers in the study were ranked by their value-added scores, which were then compared to student survey responses:

Classrooms where a majority of students said they agreed with the statement, “Our class stays busy and doesn’t waste time,” tended to be led by teachers with high value-added scores, the report said.

The same was true for teachers whose students agreed with the statements, “In this class, we learn to correct our mistakes,” and, “My teacher has several good ways to explain each topic that we cover in this class.”

Naysayers will likely take issue with the use of value-added data as a major component of this study. Teaching to the test, they'll argue, will once again be valued over substantive instruction. But by pairing value-added scores with student surveys, the study actually undercuts that criticism: "Teachers whose students agreed with the statement, 'We spend a lot of time in this class practicing for the state test,' tended to make smaller gains on those exams than other teachers."
This article was a pleasant surprise for me, and it makes me wonder whether student surveys could play a significant role in teacher evaluations in the future. If students were simply grading their teachers on a scale of 1-4 or something similar, I imagine my initial doubts would be justified. But the economists and social scientists behind the study seem to have done a decent job of putting together a survey that gives students a chance to make meaningful assessments of their classroom environments. As more information from the Gates study comes out, I'm interested to see what other surprises it holds.