Dr. Robert Mislevy, a leading expert in educational assessment, technology, and cognitive science, shares his insights on the integration of UDL and assessment. Dr. Mislevy points out that a principled application of UDL can increase the value and validity of large-scale assessment for a greater number of students.
Gordon, D.T., Gravel, J.W., & Schifter, L.A. (2009). A policy reader in universal design for learning (pp. 209-218). Cambridge, MA: Harvard Education Press.
Listen to excerpts from a phone conversation with Dr. Mislevy explaining why he thinks this article is especially important in today’s educational landscape. Click on the dropdown menus below to listen to Dr. Mislevy's answers or read the transcript.
These issues are very relevant for two reasons. One is that assessment nowadays has much higher importance and visibility in policy making. If you're unable to reach the breadth of students that we have, you are going to be missing important parts of the population when you make policy, when you evaluate how well it's working, and when you figure out curriculum and instruction to improve learning; assessment is an important part of all of that. So, with all of this high-stakes testing, you need to have ways of getting meaningful information from your full range of students.
The second reason that it's timely is that technology is providing us with a much broader array of ways to capture assessment performances. I mentioned simulations and interactive contexts; they get at knowledge and skills and capabilities in ways that fixed-form, standard tests can't. So figuring out how to do that well is a challenge, and the principles of UDL and of evidence-centered design are among the foundations you need to think through how to use the new technology validly and effectively.
I think this is an important article for policymakers because it gives them a glimpse of the possibility of principled ways of developing and using large-scale assessments that go beyond the "every student, every way" paradigm, and they need those right now. There has been very little available in the literature to help them systematically. There have been a lot of good ideas and a lot of good work done in accommodations, modifications, and UDL. But how to integrate that with the responsibilities they have for high-volume, high-stakes, low-cost testing: that's been the challenge for them. We now know how to do that, and this article points them in the direction of machinery that is becoming very, very important to them.
Three points that I think educators can take away from this article are the following. First, that UDL and validity are fully compatible ideals. I think in the past, people have seen them as playing off, or trading off, against one another, and we are learning that that is not the case. What’s so exciting about our work here is that we can bring together insights and tools from those two lines of work for an integrated framework in which we can build sound assessments from the start.
Second is this broader view of assessment: a shift from simply a measurement perspective to an evidentiary-argument perspective that still includes the measurement tools. They're invaluable for engineering assessments, and we're definitely still going to be leaning on them. But the measurement tools in and of themselves don't help you think through those issues of administering and scoring different forms of assessments for different students. The framework of assessment as evidentiary argument gives us the conceptual tools we need to develop assessments that go beyond the traditional paradigm of the same task for everyone, administered, scored, and interpreted in the same way for everyone. So this encompasses not only UDL principles but also assessments involving interactive simulations and games, student choice, projects, complex performances, and the like. It's really busted a logjam.
And then the third point that I think is important is that this way of thinking puts the focus on the learning, not on the surface features of an assessment. Open-ended, interactive tasks can be worthless if they don't really address the capabilities you want students to develop, while, on the other hand, even a well-crafted multiple-choice item can give you useful evidence about those capabilities if you build it right and use it in the right circumstances. So you start with the capabilities you want to help students develop and work back to all of the ways you could know it when you see it; those might be different ways for different students. I would say that Grant Wiggins's work uses these ideas; they are very compatible with our work, and they're aimed to support teachers and curriculum developers. Our work is aimed more at researchers in the assessment community. But the two together are both necessary in moving assessment ahead in the direction of more valid assessment for all students.
Last Updated: 03/21/2011