Assessment Challenge: 1) Pushback on MCQ will not abate

Posted: February 12, 2015
The fact that selected-response questions do not require the same sort of thought processes as actual problem solving is apparent to everyone, from students to parents, teachers to administrators, and even to psychometricians themselves. However, only one group truly appreciates the core value these questions provide: reliable and consistent measurement if nothing else, but provably relevant and capable instruments of educational measurement as well. And that group, psychometricians, is vanishingly small and, at times it seems, even less influential.
The inspiration for this thought is the recent publication of a scholarly article on how and why the "school reform" movement in US education is losing steam, if not failing outright. It lays the blame for this failure on the (over)use of the all-too-familiar "multiple-choice question"-based "standardized tests", tests which generally inspire no end of worthy objection. The objections are all too familiar: "life is not a multiple-choice activity" (and other aspects of construct irrelevance); "teaching to the test," which takes away from the valuable things teachers otherwise want to teach (or the tension between rote, formulaic learning and problem-solving and critical-thinking skills), and which not only undermines proper instruction but results in students being rated on the "wrong" skills; among many others (though these are perhaps the best objections).
Standardized testing still essentially means "selected response" testing, which still largely raises hackles over its "inauthentic" nature, its essential failure to represent constructs directly and well, and, at least for me, the issue that when constructed-response items are used, they are today (early 2015) most often created, validated, and scored by processes that are far too generous in their willingness to extract (or claim) measurement from the noise in their output. The first two of these issues are raised well and thoughtfully discussed in the American Scholar piece (and many others of late); the value added by the Mike Rose piece is that it ties the scourge of high-stakes multiple-guess questions directly to key elements in the pushback against educational reform. The latter issue, about the difficulty of scoring other sorts of questions, will be the subject of a future blog post.

AmericanScholar_SchoolReformFails_the_test-winter2015-MikeRose-141210c
Meanwhile, my thinking is that even revamped, new-and-improved standardized tests that continue to rely almost entirely on selected-response questions, such as those reflected in the newest SBAC and PARCC sample items, will struggle for legitimacy and acceptance. That is too bad, but it also simply "kicks the can" down the road for others to solve.