Published by EducationNews.org
I’m personally embroiled in a quandary over assessment and evaluation. For a change, I’m not watching from journalistic distance but actually doing it. For over a decade I taught at the college level, but that was years ago. Since then, the federal No Child Left Behind law has defined quality with exclusively quantitative metrics. In NCLB’s wake, America has developed a mania for grading, ranking and evaluating. A set of numbers on a checklist is supposed to prove, mathematically, that students’ classwork, a teacher, or a school is good or bad. Many teacher evaluations are designed to be legally defensible in the event of a challenge.
Now that I’m teaching again, I find that my students’ entire academic careers have been steeped in the numerically based rubrics brought about by NCLB. They want guarantees that if all the boxes are checked, that’s an “A.”
It’s not their fault. It’s their culture. We educated them this way. Numbers rule.
Every one of my students is ambitious, talented and praiseworthy. But to generate a course grade, I need to know that they’ve digested the course’s principles and can apply them. There’s no checklist for that. It’s a judgment call. In my pre-NCLB days of teaching, students certainly came to argue about grades, but they weren’t armed with expectations about objective numeric scores.
Back in 1976, social scientist Donald Campbell wrote a prophetic essay noting that statistical measurements are godsends for those of us trying to understand complex human environments like schools. But they can’t replace wisdom. “Too often quantitative social scientists… presume that in true science, quantitative knowing replaces qualitative, common-sense knowing.” Sigh. Where is respect for common sense these days?
Campbell’s essay led to “Campbell’s Law: The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” Examples in a moment.
Mind you, I am a greedy consumer of data. Statistics anchor the narrative pictures that form my opinions. Important judgments should include BOTH quantitative and qualitative information.
When rubrics came into fashion in the mid-1990s, I was a great fan. Finally, students were let in on the secrets of how their grades came about. As a student myself, I had assumed that grades were based on a combination of my level of compliance with the course’s demands and the extent to which the teacher “liked” me, whatever that meant. As an adult journalist, I found it a joy to observe classrooms of kids develop rubrics for projects they were about to do. Coming to a shared sense of what “good” meant gave them, as it would most people, an investment in reaching an achievable goal.
So rubrics are a good thing. Checklists not so much.
“Good” can no longer mean things like vivid, well described, thoughtfully argued or elegantly executed. Those qualities are impossible to measure. Instead: Is the paper X pages long? Is it proofread? Does it use at least three citations? Check, check and check. The checklist math defines good, bad and indifferent.
Campbell explains that Americans are particularly averse to collecting social statistics because the numbers aren’t used to improve programs so much as to judge the people administering them. The numbers identify whose head should roll. If the object is, say, to improve children’s ability to read, quickly assigning blame for bad scores forestalls the harder work of finding solutions to the wide complex of obstacles to learning.
Campbell notes that in the month Romania declared abortion illegal, stillbirths spiked. People’s behavior didn’t change; abortions were simply recorded as something else.
If a nail factory holds employees accountable for the weight of their output, the foreman will have them overproduce big, heavy nails, regardless of consumer demand. If the goal is expressed as the number of units produced, the factory will overproduce the littlest nails.
If the staff of an employment office are assessed by the number of people they process, they’ll do quick, ineffective interviews and placements. If the goal becomes the number of placements, staff will cream off the easy cases and ignore the ones that need more help.
During the Vietnam War, the military reported “enemy casualties,” thereby feeding a media frenzy with inflated and inaccurate numbers. But switching to “body counts,” an objective measure with smaller yields, led to the My Lai massacre. The high body-count goal was met. Too bad the bodies were mostly women and children.
Lastly, Campbell says, “When test scores become the goal of the teaching process, they both lose their value as indicators of educational status, and distort the educational process in undesirable ways.” Amen.
Numbers totally matter. Stats are invaluable. All facts are friendly. But human judgment must play its part, hopefully with wisdom and broad perspective, to make a final call. The best we can do is to balance hard science with, as Campbell calls it, common sense.
So, to my dear students who’ve been such a pleasure, truly: I calls ’em as I sees ’em.
Julia Steiny is a freelance columnist whose work also regularly appears at GoLocalProv.com and GoLocalWorcester.com. She is the founding director of the Youth Restoration Project, a restorative-practices initiative, currently building a demonstration project in Central Falls, Rhode Island. She consults for schools and government initiatives, including regular work for The Providence Plan, for which she analyzes data. For more detail, see juliasteiny.com or contact her at [email protected] or c/o GoLocalProv, 44 Weybosset Street.