Hack your marking and ease the cognitive load

Amongst the tasks involved in teaching, marking can be particularly exhausting and overwhelming. It is tiring and draining work, and its quality deteriorates as tiredness sets in and attention wanes. Worst of all, marking necessarily comes at the end – at the end of the unit, the end of the course, the end of the semester – at the point when you are most thinly stretched. It typically spills over into the time allotted for the next task, so that it is not unusual to have to start preparing the next lecture, unit, or syllabus while still snowed under with a huge amount of marking from the last one.

There are, however, some ways to do a portion of the cognitively hard work of marking ahead of time and thereby make your marking more fair, less taxing, and easier to manage. This post suggests a couple of ways to ease the cognitive load of marking by doing a significant portion of the decision-making before the first student even submits their assignment.

1. Multiple-Choice

Using multiple-choice questions on tests or quizzes for some part of your assessment portfolio is one way to do the hard work before you get to the marking stage. I have been completely persuaded by Rob Loftis’ excellent paper, “Beyond Information Recall: Sophisticated Multiple-Choice Questions in Philosophy”. Amongst his arguments, Loftis points out that most of the subjective and cognitively difficult tasks involved in assessment get front-loaded in multiple-choice assessments: subjective decision-making occurs at the stage of composing the questions, rather than at the stage of marking each individual answer or response. By the time the students take the test, the cognitively difficult and subjective decision-making part of assessment has been completed. Moreover, exactly the same subjective decision-making is shared equally by every student who answers a particular question. That is, students are not subject to the unfairness of repeated and distinct subjective decision-making processes every time a question is marked.

David DiBattista’s paper “Making the Most of Multiple-Choice Questions: Getting Beyond Remembering” points out that multiple-choice tests make it possible to cover a relatively wide scope of material in a relatively short test. He points out that it is furthermore relatively easy – and quick – to mark the tests, even in the context of large or online courses. Learning Management Systems (LMSs such as Canvas, D2L, Moodle, and Blackboard) and optical readers facilitate the automated marking of multiple-choice tests, which can nonetheless be tailored to focus on higher-order skills such as analysis, understanding, application, and evaluation. I note that I am also persuaded by Loftis to offer students the opportunity to explain their answers. This does add to the cognitive load of marking, but not substantially so. It has helped me catch unnoticed ambiguities in my questions in the past, without penalizing students for them.

The trick, of course, is to make time to compose multiple-choice questions early enough, and at regular intervals throughout the term. But even if you end up composing the test or the set of questions at the last minute, you will still have moved the cognitively difficult, subjective decision-making task earlier in the assessment process, and prior to the ‘marking’ stage. In fact, you will have moved it prior to the student submission stage.

2. Rubrics

A rubric specifies the assessment criteria for an assignment. An analytic rubric provides specific criteria and describes levels of achievement for each criterion. From the instructor’s perspective, rubrics enable timely feedback, and make marking more consistent and fair. Preparing a detailed rubric in advance eases the cognitive load of marking. At the marking stage, the grader can replace a cognitively complex and holistic assessment such as, “What grade does this essay deserve?” with a narrower and cognitively easier set of assessments. Assigning rubric levels according to set descriptions is a narrower and easier task, especially if the task is repeated for dozens or hundreds of assignments. Descriptive feedback is more easily attached to each assignment, and the process of marking is made easier overall for the marker.

From the student’s perspective, rubrics clarify expectations and marking criteria, all the more so if distributed in advance of the assignment deadline. According to Jönsson and Panadero’s chapter, “The Use and Design of Rubrics to Support Assessment for Learning”, rubrics are transparent, make it easier for students to use feedback, facilitate peer assessment, and may reduce student anxiety or support self-regulated learning. The feedback provided by rubrics can be easier for students to process and respond to. The transparency and clarification of criteria provided by a rubric can help students with peer- and self-assessment of their work, and these in turn can reduce student anxiety surrounding assignment submissions.

3. Self- and Peer-Assessment

Of course, nothing will ease the cognitive burden of assessment quite like getting someone else to do it! But more seriously, students supporting each other’s learning through guided peer-assessment can provide genuinely helpful feedback in a timely manner, which again eases the cognitive burden related to marking. In this case, it eases the cognitive burden of providing specific, timely, and relevant feedback, which is a key part of the assessment process. I am by no means suggesting students give each other heavily weighted final grades or even unguided holistic assessments of each other’s work. They don’t necessarily have the training or the experience to recognize what makes a paper strong (or weak) on its own merits.

Best practices in peer assessment would use an anonymized and guided system to help students offer each other feedback, and might give them credit for doing so. In order for peer-assessment to front-load the cognitive burden of marking, clear assessment criteria – perhaps a rubric, or perhaps a peer assessment form – should be prepared in advance. In “The Role of Self and Peer Assessment in Higher Education”, Pérez et al. demonstrate that well-guided peer assessments can concord with lecturer assessments of the same work, using the same guided criteria. But even peer assessments that only provide content feedback on a draft can give the student useful and valuable information prior to a final submission, and would ease the cognitive load of the instructor marking the subsequent submission.

Peer assessment can be designed to use rubrics, or can be designed to provide feedback at a mid-point in a scaffolded assignment. In a recent course where I was a student, we were given the opportunity to submit an early draft for peer review (anonymized through Canvas), and only if we completed the peer review process were we eligible for detailed feedback on the final submission from the professor. Whether peer assessment generates a mark, a completion mark, or merely feedback, it can help provide students with feedback in a format they can use, and thereby reduce the feedback burden of marking.

Guided self-assessments as part of a scaffolded assignment can likewise help students generate feedback and recognize where they have strayed from the stated criteria.

Scaling Up

A note of caution about self- and peer-assessment: they do not scale up as easily as multiple-choice and rubrics. On a large scale, self- and peer-assessment can still be used to help students provide feedback to each other, can be part of a scaffolding process, and can allow students the opportunity to learn from each other. But, when used to generate marks, self- and peer-assessment can be more easily manipulated in ways that might violate academic integrity. Accordingly, my suggestion would be to use peer- and self-assessment to facilitate students providing each other anonymous feedback, and to reward it with a small completion mark or credit worthy of the effort.

Multiple-choice and rubrics, on the other hand, scale up very well. They make the assessment more fair and less subjective on an individual assessment level, and they ease the cognitive load at the final stage of marking.
