Assessment Challenge: 2) Better tools to measure critical thinking
Posted: February 12, 2015
The limitations of selected-response questions aside, many researchers have spent years determining which skills are most important for students to cultivate, and additional years figuring out how to measure those qualities. There seems to be a fairly clear (and perhaps longstanding) consensus on both topics, which, given its (lack of) visibility in the marketplace, is surprising. The consensus answers are: a) students should learn “critical thinking” and “problem solving” skills — and yes, there is much debate about how to define these things, but there is also much practical work completed on doing so; and b) these skills are best assessed through a range of situations broadly described as “performance tasks”, which can, in their simplest form, rely on written responses from examinees.
The best writing I’ve seen of late on this topic comes from the Council for Aid to Education (and associated individuals). Their most recent work includes an item from May 2013 that called out the “multiple-choice testing” problem with some clarity and offered at least a partial solution. The monograph, entitled “The Case for Critical-Thinking and Performance Assessment“, is authored by well-known and articulate voices in assessment and education policy. Most major test publishers in the US education market claim to be moving their instruments toward calibrating higher-order skills and knowledge, on the rationale that the “jobs of the future” will demand more cognitively challenging “twenty-first century skills”. Even so, K12 testing remains anchored in MCQ-land, with recent signs that the leading assessment consortia are retreating from their prior ambitions for greater use of constructed response in both math and ELA.
This second piece, by Benjamin Rogers et al., goes directly at almost every aspect of the problems identified in the prior article on the failings of current standardized testing efforts — advocating as it does the use of performance tasks to measure critical thinking skills, rather than traditional approaches such as selected-response questions used to measure “domain mastery” of specific course content at specific points in time.
So with these two very solid examinations of both the “problem” — high-stakes multiple-choice tests — and of potential solutions — “performance tasks” with long-form constructed response — one might expect an obvious stampede toward a resolution. Sadly, this will not be the case anytime soon, and maybe not even several years from now, unless one or more practical, economical, and high-quality solutions emerge to an unstated (or under-stated) weakness of the “performance task” approach to assessing critical thinking: the lack of reliable, accurate, and cost-effective scoring for such assessments.
A workable answer to the dismal economics, and at times questionable psychometrics, of scoring performance tasks at scale in a reliable and consistent way is not presently at hand. Progress is being made, however, and ideas for improvement will be the subject of a future post.