On January 9, Educating for Good presented at the virtual REMOTE K-12 Summit, hosted by Arizona State University. We were part of the Innovation in Adversity thread, sponsored by Next Generation Learning Challenges.
Gary: That was a very speedy half-hour! Holy cow. I appreciate brevity, but getting our discussion — Assessment for Good: Ethical, Equitable, and Justice-Oriented — down to 19 minutes was a challenge. It was a good challenge, to sort out what is absolutely essential, but a challenge nonetheless. I was especially pleased about doing this event because, like the ISTE conference last month, these were entirely new people for us. It’s useful, and harrowing, to bring our ideas out of our circle to see what folks in “the real world” think, and gratifying when they respond positively, which is what seemed to happen today. It reminds me of a few times this past autumn, during our circles, when in the evaluations, participants would say some version of, “This is the conversation we need to be having.” The questioning of our practices — are we harming kids? — has become more urgent for more teachers during the pandemic for what feel like obvious reasons.
The first question during the Q&A is the one that has stuck with me the most. The questioner asked how to do these very individualized, context-aware assessments in a district that is apparently devoted to common assessments. Your response interrogating their meaning of “common assessment” was right on. Giving the exact same assessment to every kid is, indeed, common, but it is not an example of fairness or equity. It’s just another form of imposing a single definition of learning (usually the white, middle-class definition) on a wide variety of kids, and then having your biases confirmed when the white, middle-class kids are sorted to the top. What do you think of when you think about fairness, equity, and “common assessments,” Carisa? Also, what did you think of the statement that memorization is “still” valuable and that without it math is much harder? Did we even say anything about memorization?
Carisa: Check the tape, Gary. I definitely said something about memorization and the pedestal we have put it on in the past 50-100 years. I think we did a decent job of addressing memorization, and from two very different angles. My problem with memorization is memorization for the sake of memorization, which honestly was most of my school experience. I didn’t know I could use geometry to help me build and engineer things until my dad, a high school dropout, showed me, well after I had forgotten the processes for solving those formulas. I had memorized geometry formulas to get a grade, a grade that I needed to graduate, and a good enough grade to get me to college. As a new teacher, I had the task of making sure ninth graders could pass their basic math test before they could take a high school math class. That was the hardest task, because they understood that the speed of the calculator was, for them, much more efficient. They had passed the point of memorizing the basic facts. It took too much time. It was not useful to them. They understood how to set up a formula in Excel to track inventory and sales, or how to use a calculator to figure out how much an item on sale cost. In the end, they all passed that basic math test, but it took multiple attempts, and after that, I imagine they were not denied a calculator.
Your explanation was also helpful. You explained that once you do something over and over again, it naturally becomes part of a knowledge base. A carpenter, for example, is going to spend a lot less time figuring if they have certain calculations memorized. A lawyer might do their work faster if the statutes are part of their knowledge base. But someone can still create a great piece of furniture if they have to use a calculator to make a calculation, and a lawyer who has to take time to check a statute for accuracy can be just as effective in making their argument. Memorization has its place, but I’m not convinced that it needs to be put on the pedestal our schools and assessment systems have put it on. Not everyone is good at memorizing things that are not important to them; we have a lot of evidence of that.
As for common assessments, that’s a giant explanation we did not have time for during the session; it requires its own session. Most of the time the interpretation of “common” is problematic to me, as is the word “assessment.” First, assessment is what teachers do. They give students a set of tasks and experiences to engage in and collect evidence of student learning from those experiences. That evidence includes student-created products, observations, and student reflections. All of those are put together to create a body of evidence; teachers make sense of that body of evidence in relation to the expectations and report out. We should be giving feedback along the way (formative assessment), but not making a full assessment until the end of the journey with that teacher. That last assessment is the common assessment, and to tell the most honest story of a student’s skills, it needs to be an assessment of a body of evidence.
To get to fairness and equity: when systems commit to learning outcomes, they really should create a Guideline for Sufficiency. How much information do we think we need to understand where students are in relation to the learning expectations? When should that evidence be produced, and under what circumstances? That guideline should be applied system-wide, and it should also be flexible. It’s a fine balance, and we all have to remember that education is human work; it isn’t perfect, but we can make it as fair and equitable as possible. Creating enough opportunities for students to practice and demonstrate attainable skills ensures fairness. Ensuring students are able to demonstrate their skills in ways that make sense to them is an equity practice. Making sure that teachers are calibrated is an equity check on the system. I’m trying to do all of this in two paragraphs. Am I making sense?
Gary: Good job trying to tackle all the questions I threw at you! My internet must have glitched when you said that, because I checked the tape and it is there, but I didn’t hear it yesterday.
I’m thinking of the three things we tell teachers about what makes a quality assessment system: validity, reliability, and sufficiency. #validandreliable comes pretty easily to them, because those words have been part of the lingo (jargon) for so long, but sufficiency is something that surprises them. Possibly because they didn’t realize it was a live question? (“A bunch of quizzes and a test seem to be working,” they might be saying. “Why change that? Okay, let’s throw in one performance-based project.”) Possibly because they didn’t realize they had control over issues of sufficiency in their classroom? After all, the department, school, or state has been determining what assessments are common and summative. We can’t forget that in the past one of the goals of policy makers and administrators has been to create “teacher-proof” assessments.
The idea of a Guideline for Sufficiency is interesting (and appealing!) because I really struggle with the idea of “enough,” the idea of quantifying learning in that way. It’s not just a question of how much we need to gather (quantity of data), but also how good that data is (quality). Not only “Is it enough?” but “Is it good enough?” Structural solutions like common assessments and standardized rubrics are attempted answers to the first question, but, because they don’t take into account the actual kid and their context, they can never be an answer to the second question.
Carisa: That’s why it’s a guideline, and how we can ensure fairness and equity in a system. The goals are clear from the start and include the context and conditions that will be accepted. It answers not only how much evidence, but under what conditions and when. In the end, teachers in the system might say that was too much evidence, or too little, and the whole system can adjust. When we’re thinking about how to ensure all students in one local system are held to the expectations of that community, which is really all we can hope to influence, then we can report out success in a meaningful, reliable way to our communities. In all this assessment work, there’s no perfection, and that’s the enduring understanding to hold onto. If we’re focused on the good, we’ll continue to do better.