Rethinking Schools, Winter 2010 pp. 34-38 http://www.rethinkingschools.org/archive/25_02/25_02_au.shtml
Au reviews the recent history of basing teacher performance evaluation on test scores, or "Value Added Measurement" (VAM), and presents six "critical issues" from the research that demonstrate problems with VAM.
Intended Audience: anyone interested in discussing this recent, emotionally charged education issue with others; Au intends the article as a conversation starter and a list of "talking points"
Key Points:
- a statistical study from the U.S. Department of Education finds the error rate when using VAM to assess teachers to be in the 25-35% range, meaning that a given teacher has a one-in-four or better chance of being mistakenly rated as "below average"
- a second study finds that test scores vary so widely from year to year that a teacher ranked highly by VAM one year may be at the bottom of the heap the next, and vice versa
- a third study, or "research report," finds that an enormous share of the variability in students' test scores (improvement or decline) can be attributed to factors entirely outside of teachers' control, including things as simple as whether or not a student had breakfast that morning
- the fourth cited report finds that nonrandom student assignment (grouping students by race, socioeconomic class, etc.), while it may resemble a traditional statistical control, greatly skews VAM results by ignoring the most salient factors behind those groupings
- a student's performance may not reflect the abilities of the teacher in the subject she scored highly (or poorly) on: her history teacher may have taught her the writing skills for which her English teacher gets credit
- "The social safety net is the responsibility of a much broader socioeconomic network-- not the sole responsibility of the teacher." Expecting teachers to overcome whatever obstacles students face is unrealistic when assessing them with VAM.