Most mathematical models didn’t get the 2016 election right. Why would we trust them to measure teacher performance? In his latest op-ed for the Huffington Post, mathematician John Ewing (president of Math for America, which runs the Master Teacher Fellowship that I have had since 2009) criticizes the use of the value-added model to measure teacher quality. The value-added model purports to use gains in standardized test scores to measure how much knowledge a teacher adds to a student.
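For readers who haven’t seen the mechanics, here is a minimal sketch of the core idea under common simplifying assumptions: predict each student’s score from their prior score, then credit (or blame) the teacher with the average gap between actual and predicted scores. Everything below is hypothetical and invented for illustration; real value-added models fold in many more covariates and far more elaborate statistics.

```python
# A minimal sketch of the value-added idea, under simplifying assumptions.
# All data here is made up; real models use many more covariates.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior-year and current-year test scores for 200 students.
prior = rng.normal(70, 10, size=200)
actual = prior + rng.normal(3, 5, size=200)  # average gain of ~3 points

# Step 1: regress this year's scores on last year's to get "expected" scores.
slope, intercept = np.polyfit(prior, actual, deg=1)
predicted = slope * prior + intercept

# Step 2: a teacher's "value added" is the mean residual (actual minus
# predicted) across that teacher's students.
class_roster = np.arange(30)  # suppose students 0..29 share one teacher
value_added = np.mean(actual[class_roster] - predicted[class_roster])
print(f"Estimated value added: {value_added:+.2f} points")
```

Notice how much is doing invisible work in those few lines: the choice of covariates, the assumption that the residual belongs to one teacher, and the assumption that the prediction itself is trustworthy. That is precisely where Ewing’s critique lands.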
Ewing points out that the value-added model is based on flawed assumptions (for example, today’s students come into contact with many teachers, so which teacher adds the value?) and unreliable data (how do you measure socio-economic status? how do you measure which languages are spoken in a student’s home?). Ewing concludes, “We would never accept mathematical authority in politics; we would never decide elections based on mathematical models that predict the outcome. Then why are we willing to do this in education?”
As a teacher, I know that many factors out of my control determine how well my students do on tests. Mathematicians know that models are useful but are built on assumptions and don’t capture everything. What is particularly sad is that in New York City, teachers are never even told how our students are expected to perform. The value-added model might have some value if, at the beginning of the year, we were actually told how each of our students was expected to perform on state tests and what factors went into that calculation. I could then gauge my students’ progress throughout the year and adjust my instruction accordingly. Instead, teachers are simply sent a report a few months after our students take the state tests.
Learning my students’ expected performance only after they have left my class doesn’t enable me to help them. What’s the value in that?
John Ewing’s piece is here: http://huff.to/2gjBzvC.