Since returning to the classroom, I have been trying to gather more assessment information based on what I am seeing and hearing. In other words, I am drawing on more observations and conversations to support assessment for learning.
But one of the things I have been thinking about is how I can use observations and conversations for assessment of learning. That is, how can I gather evidence of learning that is not a product, and have reliable assessment information that I can use to generate a grade representing the students’ learning?
So, this year I committed to using observation for the first assessment of learning opportunity in one of my classes.
The assessment was based on students participating in writing workshop groups. The writing workshop groups are opportunities for students to bring drafts of their work to their peers and receive comments about the merits and drawbacks of their work. I wanted to assess how well students could provide feedback that would be useful to their peers as writers. The goal is for students to be able to offer feedback, both positive and negative, to their fellow writers, so that those writers know when their writing is effective and are prompted to make revisions where needed.
If feedback providers were effective, I should see and hear them:
- Identify specific points or parts in the writing to praise (warm feedback) or to note as ineffective (cool feedback)
- Describe, in specific terms, the effect, positive or negative, of that part or aspect of the writing
- Offer suggestions the writer may want to consider
Essentially, these are the success criteria. I will write more about success criteria in an upcoming entry; for now, I’ll say that without success criteria, I think it would be difficult to make focused observations. It would be easy to be distracted by things that are connected to, but not directly related to, the focus of the learning.
On the way to the assessment of learning, I needed to make sure, as with any other assessment opportunity, that students had multiple opportunities to learn, practice, and receive feedback on the learning goal. (And yes, I know, it’s feedback on feedback.) My process worked as follows:
- First, I shared a protocol that gave students a process for giving feedback effectively and efficiently (with timing and roles).
- Then, I took students through the protocol using a piece of my own writing. During this modelling, I played the dual role of writer, seeking feedback, and coach, guiding them through the process. In my coach role, I prompted students, for example, to be more specific and clear in their feedback or to expand on their comments when needed.
- Based on the modelled experience, we brainstormed a list of criteria describing an effective participant in the workshop. These included social norms (e.g., giving my full attention to the group) that were related to, but not directly about, the learning goal. We made a distinction between those social norms, which were important, and the descriptors that were directly tied to the learning goal.
- Students brought their writing drafts to their workshop groups (with 4-5 members in each group) to work through the protocol and to give each other feedback. While they workshopped, I observed and listened to the feedback they gave, and I documented what I saw (e.g., students referring to specific passages in the writing) and heard (e.g., describing what they saw in the writing, positive and negative, and offering suggestions).
- After the first round, I interrupted the workshops briefly and described to students the observations I had made, connecting them to the success criteria. Specifically, I noted three different kinds of observations: where I heard specific and detailed feedback (what it sounded like), where the feedback was superficial (what was lacking), and where I heard no feedback at all (having no evidence). I gave feedback (on their feedback) about what could be improved, and explained the impact of offering no feedback, both from an assessment perspective and as a missed opportunity to support a fellow writer.
- For subsequent rounds: Where I heard superficial responses (or no responses at all), I stepped in to coach individual students, either by prompting for more detail or clarification or by offering some sentence stems to initiate the talk.
One of the things that made the observation manageable was that there were several rounds of feedback happening, so I had a number of opportunities to focus on the members of each group for a sustained amount of time (while other workshop groups ran simultaneously) and document the learning. Because of this format, I was able to circle back to groups several times to confirm earlier evidence (e.g., a student demonstrating the same level of proficiency with another peer’s writing) or to see whether my earlier coaching and feedback had an impact on students who needed to improve.
By the end of the workshop rounds, I had notes (documentation) representing multiple observations of each student. I had a fairly high degree of confidence that my evidence was directly tied to the learning goal, and that students, where needed, had received additional support in meeting it. For this opportunity, I was able to assign a level of achievement based on that evidence, and the assessment of learning felt sound and valid. I also felt that, for this particular learning goal, observing students during workshop was the most authentic and reliable way to assess their learning.