Stiggins (2008) asserts that new parameters must be used to judge the quality of our assessment tools, and that the impact on students must be of central importance (pp. 26-27):
Quality must also include the evaluation of the impact of those scores on the learner during the learning. The most valid and reliable assessment in the world that has the effect of causing students to give up in hopelessness cannot be regarded as productive because it does far more harm than good. Thus, quality must become a function of the instrument and its score evaluated in terms of (or considered simultaneously with) the context within and manner in which it is used. Quality control frameworks of the past have not taken into account the impact on the learner. (p. 27, emphasis in original)
The vision of assessment for learning, as well as assessment of learning, as outlined by Stiggins, aligns with all of the principles of a DI approach. Stiggins (2008) states further:
If they (assessments) are to have a productive impact on the learner, the nature of our assessment practices must continue to evolve in specific directions. For instance, the assessment results must go beyond merely providing judgments to providing rich descriptions of student performance. In other words, if assessments are to support improvements in student learning, their results must inform students how to do better the next time. This will require communication of results that transmits sufficient understandable detail to guide the learner’s actions. In such contexts, single scores or grades will not suffice. (p. 27, emphasis in original)
Stiggins reinforces the central role of students in the assessment process when he reframes them as “consumers of assessment information too, using evidence both to see their current successes and to understand what comes next for them” (p. 32). Stiggins stresses this point with the following words:
I believe that the importance of this change in our assessment paradigm cannot be overstated. Over the decades, school improvement experts have made the mistake of believing that the adults in the system are the most important assessment users or data-based instructional decision-makers; that is, we have believed that, as the adults make better instructional decisions, schools will become more effective. Clearly parents, teachers, school leaders, and policy makers make crucial decisions that influence the quality of schools, and the more data-based those decisions are, the better. But this perspective overlooks the reality that students may be even more important data-based instructional decision-makers than adults. (p. 32)
Teachers who embrace a DI approach will heartily agree with Stiggins’ call for a more balanced approach to assessment, one that accounts for whether an assessment motivates students to persevere or discourages them. Assessment systems that appear punitive to students motivate them only to get out of schoolwork, out of class, or out of school altogether. Historically, students have marched through education and, after 13 years of cumulative progress, been rank-ordered within their graduation class. Stiggins observes that such a system does little to give struggling students a reason to persevere; rather, it encourages them to give up. Such school systems have been the norm for so long that it is hard to imagine anything else, as Stiggins’ 2008 reflection conveys (p. 31):
In these schools, if some students worked hard and learned a great deal, that was a positive result, as they would finish high in the rank order. And, if some students gave up in the face of what they believed to be inevitable failure, that was an acceptable result for the institution too, because they would occupy places very low in the rank order. The greater the spread of achievement from top to bottom, the more dependable would be the rank order. Mission accomplished. This is why, if a student gave up and stopped trying (even dropped out of school), it was regarded as that student’s problem, not the teacher’s or school’s problem. The school’s responsibility was to provide an opportunity to learn. If students didn’t take advantage of the opportunity, that was not the system’s responsibility.
The important lesson we must learn is that the student’s emotional reaction to any set of assessment results, whether high, mid-range, or low, will determine what the student thinks, feels, and does in response to those results. Students can respond in either of two ways, one productive and the other not. The productive reaction leaves the student saying, “I understand these results. I know what to do next to learn more. I can handle this. I choose to keep trying.” The counterproductive response leaves students saying, “I don’t know what these results mean for me.” Or, “I have no idea what to do next.” Or, “I’m too dense, slow, and stupid to learn this. I quit.”