Choose your deadline: A simple gradebook hack for flexible assignments

For the last several years, I have implemented a flexible assignment system: students choose on which occasions to submit low-stakes assignments. These assignments are designed to pace students through the course material while giving me a little glimpse into what they are understanding – or misunderstanding – as we go. I want to incentivize submitting the assignments without nudging the students towards perfectionism (or generative AI, for that matter!). Catching misunderstandings as they arise helps guide my teaching. Timely feedback helps the students correct their misunderstandings before they move on to higher stakes assessments.

Pedagogically, it is more important that students submit these assignments honestly and regularly than that they “get it right”. That means that I want them to submit work that represents their understanding without being penalized for it.

Drop the lowest score

Dropping the lowest score – including dropping any zeros – is a great way to do this, and it’s built into most Learning Management Systems (LMS), as well as any decent spreadsheet program. If students underperform on a low-stakes assignment, they can complete an extra assignment in that category to make up for it. If a student is unable to complete the assignment in any given week, the zero will be dropped provided that they complete enough assignments throughout the term. The details vary from LMS to LMS, but the structure of what you have to do is the same.

The first step in most Learning Management Systems is to create an ‘Assignment Group’ (Canvas) or ‘Grade Category’ (Blackboard, D2L/Brightspace, and Moodle).

The next step is to calculate how many scores will need to be dropped: the number of submission opportunities minus the number of required assignments. If there are 10 opportunities to submit 5 assignments, you will set up the Assignment Group/Grade Category to drop the lowest 5 scores. If there are 12 opportunities to submit 10 assignments, you will set up the Assignment Group/Grade Category to drop the lowest 2 scores.

The final step is to link the Assignments to the Gradebook, making sure to designate the assignment and grade within the defined group or category.
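To see the drop-the-lowest arithmetic in one place, here is a minimal sketch in plain Python, outside of any particular LMS; the function name and the sample scores are purely illustrative, and your LMS performs something like this calculation for you once the drop count is set.

```python
# A minimal sketch (plain Python, not any particular LMS) of a
# drop-the-lowest category grade. Assumes every assignment is scored
# out of the same maximum and that skipped assignments count as 0.

def category_grade(scores, opportunities, required):
    """Average the best `required` scores out of `opportunities` chances."""
    padded = list(scores) + [0] * (opportunities - len(scores))  # missing work = 0
    best = sorted(padded, reverse=True)[:required]               # keep only the top scores
    return sum(best) / required

# 10 opportunities, 5 required, so the lowest 5 scores (including zeros) are dropped.
weekly_scores = [8, 9, 0, 7, 10, 6]   # a student who submitted 6 of the 10 checks
print(category_grade(weekly_scores, opportunities=10, required=5))  # -> 8.0
```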

Manage grades and troubleshoot

Gradebooks might need to change as the term progresses. Don’t panic if you have skipped a step, or if you find that you have to change a previous step. Most LMSs allow limited editing of assignments, quizzes, and gradebooks even after they have been published to students. Using the ‘create grade’ tool, you can create new categories of grades or create new grades and add them to existing categories. Using the ‘manage grades’ or ‘edit’ tool, you can add existing grades to an existing category, or change the total number of grades to drop.

If you are looking for more personalized help with setting up your own gradebook, feel free to reach out.

Effective, Fair, and Efficient: Using Multiple Choice in the Humanities

Photo by Pixabay on Pexels.com. Image description: A black and white photo of a room with seven identical white doors. Large black and white patterned wallpaper covers the wall around the doors.

Presented as a workshop at the American Association of Philosophy Teachers (AAPT) “How We Teach” conference, July 14, 2021

Jennifer Szende

In July 2021, I led a virtual workshop session on multiple choice testing in philosophy as part of the AAPT series on “How We Teach”. This is an adapted version of the handout I distributed with the presentation. Multiple choice testing is sometimes dismissed as too easy for students, too open to dishonesty, or too difficult to design for instructors. Here, I give an argument in favour of multiple choice testing, I respond to some concerns, and I offer some tips and best practices resources for effective multiple choice testing in philosophy. Much of what I say here will be relevant to other academic disciplines and other testing scenarios in addition to academic philosophy.

Why use multiple choice?

There are many good reasons to include multiple choice within a balanced assessment portfolio. I focus on effectiveness, fairness, and efficiency.

Effectiveness: Multiple Choice questions can be an effective way to assess the learner’s ability to recall, understand, apply, analyze, and evaluate. Standardized tests typically use case studies and sight passages to assess students’ understanding, application of concepts, analysis, and evaluation of novel information. So, to an extent, many of us are familiar with multiple choice testing that is designed to assess skills beyond information recall. One frequent objection to those types of tests is that they assess ‘test taking ability’ or ‘familiarity with the test format’ rather than assessing analysis or understanding. Keeping this worry in mind, I have tended to design my tests as open book and without a time limit: open book because I am very happy to build formative assessments that force students to look at the course material in a new light, and give students the opportunity to examine what they find in that new light. If they read a passage for the first time, or re-read it for a subsequent time, in order to answer the question, the test has done its job.

Fairness: Sometimes, assessing students can be unavoidably subjective. For essays and presentations, the bias and subjectivity are mostly located at the stage of marking or assessing student work, with some biases and subjectivity located in the design of the assignment. Rubrics can help standardize the subjectivity across students (thereby increasing fairness), but a level of subjectivity remains. Think of cases where TAs and instructors standardize each other’s ‘A’ paper, ‘B’ paper, and ‘C’ paper, or cases where students appeal a grade by comparing marks and assignments with other students in the class. For multiple choice, the subjectivity of assessment is located at the stage of writing questions, rather than at the stage of marking questions. As a result, the subjectivity and bias are more fairly distributed across all test takers (Loftis 2019). I have taken Rob Loftis’s advice seriously, and have taken to offering students a space in which to explain their answer. I don’t read these explanations for correct answers, but find I am often able to give partial or full credit to students who misunderstood the question but demonstrate understanding of the material, and other times these responses help me to recognize and rectify (with full credit) questions that were unintentionally ambiguous.

Efficiency: Multiple-choice tests can cover a large scope of material in a relatively short assessment, and they are easy to mark, even for large and online courses. See David DiBattista’s argument here. In some cases, the Learning Management System (LMS) or scantron system can be used to mark the test automatically, or to mark it pending instructor review and approval. In particular, the cognitive burden of marking is reduced. Reducing the cognitive burden of marking is no small feat, even if much of the cognitive burden shifts to the stage of test design. When I have large (90+ student) classes, these tests allow me to save some of myself for other types of student engagement and assessment. In the case of LMS tests with automatically generated feedback, it is possible to give immediate feedback to learners, so that the student gets an explanation of the correct answer.

So, why would philosophers avoid using MC?

It’s too difficult for the instructor! Constructed answer questions are much easier to produce (DiBattista and Kurzawa 2011). The easiest multiple-choice questions to produce assess information recall, and many teachers aren’t interested in assessing information recall. Genuinely challenging, formative multiple-choice questions, especially those that assess understanding, analysis, application, or evaluation, can be difficult and time-consuming to write and design.

  • Practice writing questions in a variety of styles, for a range of skills.
  • Use some of the question-writing tips offered here or in the further resources linked below.
  • Pace yourself throughout the term. Write 1-3 questions per week, or per lecture. Schedule time to write questions after each lecture, when the material and discussion are fresh in your mind.

Multiple Choice is too easy for my students, or too low on Bloom’s taxonomy (DiBattista and Kurzawa 2011; DiBattista 2008; Loftis 2019). Many instructors worry that students will just use a search function to find the answers. The solution is to design the test/write questions with this worry in mind.

  • First, ask yourself: “What is the purpose of the test?” Are you assessing whether students have attended lecture/read the material? Whether they have understood the material? Whether they can apply a concept to a novel situation? It might turn out that you want to assess information recall in a particular instance. But, if so, it might be an appropriate occasion on which to set a time limit (with appropriate extensions for students who need them), or it might work best for an in-class test. If, however, you want to assess understanding, analysis, or application, remove the time limit and design questions to be open book. Invite students to take the time to look up the answers. You may wish to use paraphrasing to avoid searchable terms. Alternatively, you may actually choose to have your students look it up, perhaps using a search function. If they haven’t reviewed the material very closely yet, maybe the test is a good way to get them to read key passages.
  • MC can be formative, medium to high on Bloom’s taxonomy, and can provide a valid measure of student achievement.
  • Skills that can be tested with MC: recall, understanding, application, analysis, and perhaps even evaluation (Loftis 2019).

Academic dishonesty. Lots of worries arise on teaching forums about students paying someone else to write the test, working together, or copying each other. If that is your worry, design with it in mind. But also, learn a bit more about the triggers of academic dishonesty, and try to design your evaluation to avoid them.

  • Again, consider: ‘What is the purpose of the test?’ Choose an appropriate assessment strategy for the thing being tested. Multiple choice tests can be formative, and the purpose of testing might be to familiarize students with key concepts. The process of looking up the answer and reading through the questions might be exactly what you want to test. Consider explicitly permitting students to work on these questions together, for example with an unmarked fill-in-the-blank: ‘I worked on this test with the following person/people…. ‘.
  • Use low stakes multiple choice testing. Frequent (open book?) tests worth 2-5% with the lowest marks dropped are less likely to lead students to feel under pressure than one-time exams worth 30-40%.
  • Use randomization. Learning Management Systems such as D2L/Brightspace, Blackboard, and Canvas allow multiple forms of randomization in testing (see the sketch after this list). Build question ‘pools’ or ‘banks’ with a larger number of questions on each topic than will appear on the test. The LMS will randomly generate a set of questions, and will randomize the order in which they appear for each student (within parameters set by the instructor or test designer). The LMS can even randomize the order in which the options appear within each multiple choice question, which encourages closer reading of the question.
  • Consider using untimed tests and/or open-book tests. Design a test that will require looking up (some? most?) answers, and give students time and permission to do so. If the test is designed to be open book, looking up the answer will not constitute cheating.
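For readers who like to see the moving parts, here is a minimal sketch of what pool-based randomization amounts to. It is written in plain Python rather than any LMS interface, and the pool structure, function name, and sample questions are all illustrative assumptions, not an LMS API.

```python
# A minimal sketch of pool-based randomization; the LMS does this for you,
# so the data layout and function name here are purely illustrative.
import random

def build_test(pools, per_pool):
    """Draw `per_pool` questions from each topic pool and shuffle everything."""
    test = []
    for topic, questions in pools.items():
        for stem, options, answer in random.sample(questions, per_pool):
            shuffled = random.sample(options, len(options))  # shuffle the options too
            test.append({"topic": topic, "stem": stem,
                         "options": shuffled, "answer": answer})
    random.shuffle(test)  # each student sees the questions in a different order
    return test

# Hypothetical pools: two questions per topic, one drawn per student.
pools = {
    "Mill": [("Which claim best paraphrases Mill's thesis?", ["A", "B", "C", "D"], "A"),
             ("What would Mill say about this case?", ["A", "B", "C", "D"], "C")],
    "Kant": [("Which maxim fails Kant's test?", ["A", "B", "C", "D"], "B"),
             ("How might Kant respond to Mill's example?", ["A", "B", "C", "D"], "D")],
}
print(build_test(pools, per_pool=1))
```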

General best practices for Multiple Choice:

Image of Bloom’s Taxonomy from Vanderbilt University Center for Teaching. Bloom’s taxonomy ranks cognitive skills. From bottom to top, Bloom lists: remember, understand, apply, analyze, evaluate, create.

Some MC question writing strategies:

What follows are a few question-writing strategies that I have used in the past to generate questions. I keep this list handy when I am trying to generate 1-2 questions each week based on the discussion. I review my lecture notes or PowerPoint slides and any examples discussed in class – especially those raised by students – and try to write a question stem and the correct answer before generating distractor responses, also based on lecture, discussion, or written material.

  1. Paraphrase, and use the paraphrase rather than quotations in the stem or the multiple choice options:
    • Paraphrase the thesis of an article.
    • Paraphrase definitions for key terms.
    • Paraphrase key objections.
  2. Use key terms and key concepts in multiple-choice, but try to use them in novel situations/case studies/examples.
  3. Use comparisons/contrasts/lists drawn from course material or discussions.
  4. What does the example show?
    • Example from the reading: What point is Author making when they use X?
    • Example from the news/ film/ popular culture: What would Author say about X?
    • Example from the news/film/popular culture: Which Author would make which of the following claims?
  5. Who (which Author) would agree with [paraphrase]?
  6. How might Author A respond to Author B’s question/quote/example/concern?
  7. Author A and Author B agree about X.
    • True or False?
    • Which reason would each give for X?

Hack your marking and ease the cognitive load

Amongst the tasks involved in teaching, marking can be particularly exhausting and overwhelming. Marking is tiring and draining, and its quality deteriorates as tiredness sets in and attention spans wane. Worst of all, marking necessarily comes at the end – at the end of the unit, the end of the course, the end of the semester. It comes at the point when you are most thinly stretched. Marking typically spills over into the time allotted for the next task, so that it is not unusual to have to start preparing the next lecture, unit, or syllabus while still snowed under with a huge amount of marking from the last one.

There are, however, some ways to do a portion of the cognitively hard work of marking ahead of time and thereby make your marking more fair, less taxing, and easier to manage. This post suggests a couple of ways to ease the cognitive load of marking by doing a significant portion of the decision-making before the first student even submits their assignment.

1. Multiple-Choice

Using multiple-choice questions on tests or quizzes for some part of your assessment portfolio is one way to do the hard work before you get to the marking stage. I have been completely persuaded by Rob Loftis’s excellent paper, “Beyond Information Recall: Sophisticated Multiple-Choice Questions in Philosophy”. Amongst his arguments, Loftis points out that most of the subjective and cognitively difficult tasks involved in assessment get front-loaded in multiple-choice assessments: subjective decision-making occurs at the stage of composing the questions, rather than at the stage of marking each individual answer or response. By the time the students take the test, the cognitively difficult and subjective decision-making part of assessment has been completed. Moreover, exactly the same subjective decision-making is shared equally by every student who answers a particular question. That is, students are not subject to the unfairness of repeated and distinct subjective decision-making processes every time a question is marked.

David DiBattista’s paper “Making the Most of Multiple-Choice Questions: Getting Beyond Remembering” points out that multiple-choice tests make it possible to cover a relatively wide scope of material in a relatively short test. He points out that it is furthermore relatively easy – and quick – to mark the tests, even in the context of large or online courses. Learning Management Systems (LMSs such as Canvas, D2L, Moodle, and Blackboard) or optical readers facilitate the automated marking of multiple-choice tests, which can nonetheless be tailored to focus on higher-order skills such as analysis, understanding, application, and evaluation. I note that I am also persuaded by Loftis to offer students the opportunity to explain their answers. This does add to the cognitive load of marking, but not substantially so. It has helped me catch unnoticed ambiguities in my questions in the past, without penalizing students for them.

The trick, of course, is to make time to compose multiple-choice questions early enough, and at regular intervals throughout the term. But even if you end up composing the test or the set of questions at the last minute, you will still have moved the cognitively difficult, subjective, decision-making task earlier in the assessment process, and prior to the ‘marking’ stage. In fact, you will have moved it prior to the student submission stage.

2. Rubrics

A rubric generally details and specifies the assessment criteria for an assignment. An analytic rubric provides specific criteria, and describes levels of achievement for each criterion. From the instructor’s perspective, rubrics enable timely feedback, and make marking more consistent and fair. Preparing a detailed rubric in advance eases the cognitive load of marking. At the marking stage, the grader can replace a cognitively complex and holistic assessment such as, “What grade does this essay deserve?” with a narrower and cognitively easier set of assessments. Assigning rubric levels according to set descriptions is a narrower and easier task, especially if the task is repeated for dozens or hundreds of assignments. Descriptive feedback is more easily attached to each assignment, and the process of marking is made easier overall for the marker.
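As a concrete illustration of how an analytic rubric front-loads those decisions, here is a minimal sketch of a rubric held as data, with the single holistic question replaced by a few narrow per-criterion choices. The criteria, weights, and level descriptions are illustrative assumptions, not a recommended marking scheme.

```python
# A minimal sketch of an analytic rubric as data; criteria, weights, and
# level descriptions are illustrative, not a recommended scheme.

rubric = {
    "Thesis":    {"weight": 30, "levels": {4: "clear and arguable", 3: "clear",
                                           2: "present but vague", 1: "missing"}},
    "Evidence":  {"weight": 40, "levels": {4: "well chosen and cited", 3: "relevant",
                                           2: "thin", 1: "absent"}},
    "Structure": {"weight": 30, "levels": {4: "signposted throughout", 3: "mostly clear",
                                           2: "hard to follow", 1: "disorganized"}},
}

def rubric_score(levels_awarded):
    """Turn per-criterion levels (1-4) into a percentage; weights sum to 100."""
    return sum(rubric[c]["weight"] * (level / 4) for c, level in levels_awarded.items())

# The marker answers three narrow questions instead of one holistic one.
print(rubric_score({"Thesis": 3, "Evidence": 4, "Structure": 3}))  # -> 85.0
```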

From the student’s perspective, rubrics clarify expectations and marking criteria, all the more so if distributed in advance of the assignment deadline. According to Jönsson and Panadero’s chapter, “The Use and Design of Rubrics to Support Assessment for Learning”, rubrics are transparent, make it easier for students to use feedback, facilitate peer assessment, and may reduce student anxiety or support self-regulated learning. The feedback provided by rubrics can be easier for students to process and respond to. The transparency and clarification of criteria provided by a rubric can help students with peer- and self-assessment of their work, and these in turn can reduce student anxiety surrounding assignment submissions.

3. Self- and Peer-Assessment

Of course, nothing will ease the cognitive burden of assessment quite like getting someone else to do it! But more seriously, students supporting each other’s learning through guided peer-assessment can provide genuinely helpful feedback in a timely manner, which again eases the cognitive burden related to marking. In this case, it eases the cognitive burden of providing specific, timely, and relevant feedback, which is a key part of the assessment process. I am by no means suggesting students give each other heavily weighted final grades or even unguided holistic assessments of each other’s work. They don’t necessarily have the training or the experience to be able to recognize what makes a strong (or weak) paper on its own merits.

Best practices in peer assessment would use an anonymized and guided system to help students offer each other feedback, and might give them credit for doing so. In order for peer-assessment to front-load the cognitive burden of marking, clear assessment criteria – perhaps a rubric, or perhaps a peer assessment form – should be prepared in advance. In “The Role of Self and Peer Assessment in Higher Education”, Pérez et al. demonstrate that well-guided peer assessments can align with lecturer assessments of the same work, using the same guided criteria. But even peer assessments that only provide content feedback on a draft can give the student useful and valuable information prior to a final submission, and would ease the cognitive load of the instructor marking the subsequent submission.

Peer assessment can be designed to use rubrics, or can be designed to provide feedback at a mid-point in a scaffolded assignment. In a recent course where I was a student, we were given the opportunity to submit an early draft for peer review (anonymized through Canvas), and only if we completed the peer review process were we eligible for detailed feedback on the final submission from the professor. Whether the peer assessment generates a mark, or a completion mark, or merely generates feedback, peer-assessment can help provide students with feedback in a format they can use, and thereby reduce the feedback burden of marking.

Guided self-assessments as part of a scaffolded assignment can likewise help students generate feedback and recognize where they have strayed from the stated criteria.

Scaling Up

A note of caution about self- and peer-assessment: they do not scale up as easily as multiple-choice and rubrics. On a large scale, self- and peer-assessment can still be used to help students provide feedback to each other, can be part of a scaffolding process, and can allow students the opportunity to learn from each other. But, when used to generate marks, self- and peer-assessment can be more easily manipulated in ways that might violate academic integrity. Accordingly, my suggestion would be to use peer- and self-assessment to facilitate students providing each other anonymous feedback, and to award a small completion mark worthy of the effort.

Multiple-choice and rubrics, on the other hand, scale up very well. They make the assessment more fair and less subjective on an individual assessment level, and they ease the cognitive load at the final stage of marking.

Caring for your future self by reverse engineering the syllabus

There are only so many weeks in a semester. There is also a certain amount of material that must be covered, assignments that must be set, and marking deadlines that each course must meet. So, there will inevitably be a few heavier weeks from the instructor’s perspective, just like there will inevitably be some heavier parts of the semester for students. Some of these are likely to occur near the end of the course once most of the material is on the table. This is how so many of us end up with an overwhelming end of semester push, or an impossible week of marking and covering new material, or too short a marking turnaround time. 

Sometimes, in calculating how to manage and spread out such weeks, it can be tempting to compartmentalize our teaching responsibilities as though we were not also people with lives outside of the classroom. It can be equally tempting to assume the same of our students: that ours is their only class, that their only responsibilities are as students, or even that their priorities are course-related. 

None of these assumptions is fair, either to ourselves or to our students. 

So, one purpose of this post is to serve as a reminder at the course design stage that we all have obligations, responsibilities, deadlines, and expectations beyond any given course. And part of the purpose of this post is to serve as a reminder that we cannot predict the future, nor know every obstacle and emergency that will arise in a given semester. And if all I convince you to do is acknowledge these facts as part of your instructional design process, then this post will have been useful. 

But if you want some advice on how to take account of these features, then read on to learn a bit more about how to reverse engineer your syllabus to hopefully meet more of your own needs as a complete person, and more of your students’ needs as people with lives outside of your class. Partly, this is an exercise in brainstorming about foreseeable conflicts in an attempt to better prepare for them. 

Start with the most mechanical, most fixed features of the course itself: class meetings and times, university holidays, grade submission deadlines. You may wish to use a syllabus date generator such as this one: http://wcaleb.rice.edu/syllabusmaker/generic/ 

Next, add personal dates and appointments. I put these next because you are the only one who can account for these. Your birthday. Important family events (cousin’s wedding? Spouse’s birthday? Parent’s anniversary?) Concert tickets. Grant deadlines. Medical appointments. Conference dates. Travel dates. Daycare pick up. Vet appointments. Note which of these are absolutely fixed, and which are more flexible. After all, you have a life outside of the classroom, and you are best placed to know what that entails. 

Third, add department, professional, and college obligations and dates. You likely have less control over these, and potentially less prior warning about them, but they are worth trying to account for, even if only roughly. Is there an open house that you’ll be expected to attend? Are there set department meeting times on Tuesday afternoons? Internal scholarship deadlines when students will be asking for reference letters? If you are not starting a new job, you may be in a position to anticipate an approximate pace and time commitment. But even in a new position, some preliminary approximation should be possible, and some fixed expectations should be available. These are reasonable questions to ask.

After these three steps, you might be able to recognize some weeks that are already heavy and stressful, prior to your scheduling any course materials or deadlines. Try to flag these. If nothing else, it is useful to be able to anticipate when an intensive week is coming. But ideally, you might be able to schedule some easier teaching weeks to coincide with these heavier commitment weeks, or at least avoid scheduling your heaviest teaching weeks to coincide with already heavy weeks.

Now, start to draft readings and assignments, but do so by working backwards. When is your marking deadline? How many days or weeks will you need to mark and provide feedback for a given class size or assessment style? How many classes of new material between assignments or tests? How many days or weeks between returning assignments with feedback and starting the next assignment? In each cycle of new material, assessment, and providing feedback, try to schedule some wiggle room in case something comes up. 

On your already flagged heavy weeks, consider scheduling a film viewing or a review session if it’s pedagogically appropriate. Schedule material that you are very comfortable with. Do not schedule your midterm with a 3-day marking turnaround on the week of your parents’ 50th wedding anniversary, nor in the same week as that completely new material. 

At this stage, you may have a preliminary outline of the course and a good preliminary idea of how the pacing will work for you. Highlight the heavy weeks and post reminders in your calendar or other system of organization.

I advise at this stage that you spend a little time thinking through these same issues from a student perspective, which has the added bonus of continuing the project of managing your own pacing and workload. Admittedly, I spend less time on this, but I want to notice ahead of time if I am assigning the two longest and densest readings in the course back to back, or if one week is both a heavy reading week and a heavy assessment week. 

Part of my process is to add a ‘number of pages’ column to the reading list for the course. Obviously, page count is not the only relevant indicator, but it is a tangible metric that students can use to anticipate heavier and (somewhat) lighter weeks. Again, I try to pace the heavy weeks, and sometimes draw out a longer reading over two classes, or replace an excessively long reading with shorter pieces. Making this information available in the syllabus will help students to do the same sort of semester planning.
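A minimal sketch of that page-count check, assuming a simple week-by-reading list and an illustrative threshold of 60 pages per week, might look like this:

```python
# A minimal sketch of the page-count pacing check; the reading list
# entries and the 60-page threshold are illustrative assumptions.

weekly_readings = {
    1: [("Mill, Utilitarianism ch. 2", 25)],
    2: [("Kant, Groundwork sec. 1", 30), ("Korsgaard excerpt", 18)],
    3: [("Rawls, A Theory of Justice excerpt", 45), ("Nozick excerpt", 28)],
}

HEAVY_WEEK = 60  # pages per week; pick a threshold that suits your course

for week, readings in weekly_readings.items():
    total = sum(pages for _, pages in readings)
    flag = "  <-- heavy week: consider splitting or swapping a reading" if total > HEAVY_WEEK else ""
    print(f"Week {week}: {total} pages{flag}")
```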

Inclusive responses to ChatGPT

In response to ChatGPT, there is a lot of temptation to return to ‘old school’ assessments, such as oral exams, in-class essays, or pen and paper exams. These types of assessments would make academic dishonesty such as using ChatGPT very difficult, and so would also present the clearest proof that assessments were completed with academic integrity. Unfortunately, they also heavily penalize any student unable to attend or perform the skill in question at the appointed time. In order to avoid this exclusionary pitfall, I advocate for keeping learner-centred pedagogy and Universal Design for Learning (UDL) at the core of any changes you may make in response to ChatGPT. Ultimately, that will mean maintaining some space for choice and flexibility in your course design.

The learner-centred approach advocates that students should be actively involved in their learning. Time and space allocated for course work should prioritize student learning and student needs, as opposed to lecturer or teacher needs. That means that class time allocated for assessment should prioritize formative over summative assessment, and should be balanced by class time devoted to non-assessed, learner-centred activities. That may include flipped classroom techniques. It may mean a level of choice and flexibility in the assessment designs. Or, it may mean prioritizing activities that help learners to make their own meaning out of the materials of the course – perhaps through discussion, or free-writing, or other creative forms of response. That is, it is unlikely to mean frequent in-class testing.

In Universal Design for Learning (UDL), there is a recognition of the great diversity amongst students: diversity in needs, in capacities, in interests, in perspectives, and in abilities. UDL advocates anticipating as broad a range of learners as possible, so that by the time students sign up for a course, the course will already have been designed to meet their needs. UDL advocates designing a level of flexibility and choice into the course, materials, and assessments from the outset, so that students are empowered to adapt the course to their own needs. For example, in-class writing assignments could be an option and a way to fulfill a particular assessment requirement, but it would be better if one-time, in-class assessments were not the only way to fulfill that requirement.

In advocating for learner-centred course design and UDL, there is a broad category of response to ChatGPT and LLMs that has me particularly worried: high-stakes, in-person, one-time assessments. In a previous post, I highlighted the ways that prior research about academic dishonesty might give us insight into the ‘new’ world brought about by ChatGPT and other large language model AIs. In short, academic dishonesty is most likely to arise where there is time pressure, grade pressure, or peer pressure. High-stakes, one-time assessments increase those pressures rather than alleviating them. If ungrading is not an option (and it is not an option in many contexts), consider a variety of low-stakes and flexible assignments where the time, grade, and peer pressure are alleviated, and the temptation towards academic dishonesty is lower.

Against high-stakes all-or-nothing assessments

Many proposed responses to ChatGPT suggest replacing at-home, untimed assessments (such as essays and take-home tests) with in-class, time-limited assessments. Some proposed responses swap word-processed assignments for handwritten assignments. And some proposed responses replace written tests and assignments with oral presentations or exams. Each of these proposals takes relatively flexible written assignment styles and replaces them with fixed and unimodal assignment styles, and each narrows the range of students who can succeed at the assignment. In the process, this narrowing diverges from the principles of Universal Design for Learning.

Of course, no one intends to exclude students with diagnosed accommodation needs. Indeed, students with diagnosed and documented accommodations are often legally protected, and will have relatively clear (though nonetheless onerous) procedures for accessing accommodations such as extensions, submission procedures, and alternative assignment formats. But these procedures can only be initiated after the course has been set in motion. They require individual exceptions to be carved out, one at a time, and place a burden on the student – not to mention the instructor who responds to them individually.

However, students with diagnosed accommodation needs are not the only learners whose needs are undermined by high-stakes, in person, one-time assessments. In general, UDL does not limit itself to responding to the diagnosed and delineated needs certified and circumscribed by accommodation gatekeepers, but rather anticipates a wide diversity of learning needs – documented, diagnosed, or otherwise – and designs the course to meet as many of them as possible.

Students with undocumented accommodation needs will have no prescribed procedures available to them, yet their learning needs can still be anticipated by adhering to principles of UDL. The student with a child home sick from daycare, or the student caring for a parent in hospital will have to explicitly disclose their circumstances in order to plead their case for accommodation, and they might still be refused. The student whose emergent medical situation is awaiting assessment by a specialist, or the student whose evolving mental health crisis is in flux, may not be in a position to explain or request the accommodation they need. Indeed, they may not know what they need until they try a few things. But in navigating a course with a level of choice or flexibility, they can still find their own path up to a point, and do so without having to rely on disclosures or instructor mercy.

The student with a broken arm might not have much difficulty pleading their case, but accommodating their in-class, handwritten assignment might nonetheless be onerous for both learner and instructor – requiring out of class dictation, testing centre bookings, or other time-heavy individual accommodations. But allowing flexible modes or occasions of submission for low-stakes assignments, and limiting (if not eliminating) the use of high-stakes, one-time, in-person assessments will help diverse students navigate the course according to their own learning needs.

Indeed, high-stakes, one-time, in-person assessment practices may place unbearable burdens on instructors as well, depending on class size, student population, and the availability of university support. Oral exams of 10 minutes per student would require about 7 hours of exam time for a 42-student class, assuming there were no scheduling issues or time overruns. In-class and timed assessments also presume that the instructor will never have an unexpected emergency, that the university will never have a snow day or tornado warning, and that the fire alarm will never go off during class time. Yet, all of those things and worse have been known to happen.

For these and other reasons, implementing high-stakes, in-person, one-time assessment strategies as a response to ChatGPT will exclude many students, place a burden on instructors, and place particular burdens on vulnerable students. To the extent that you can acknowledge ChatGPT and still maintain a flexible, student-centred learning environment, everyone involved in the course will benefit.

A learner-centred and non-punitive approach could acknowledge the existence of ChatGPT, recognize it as a temptation, and also offer learners the tools to help resist that temptation. But, at the very least, do not accept excluding students in the name of maintaining academic integrity.

ChatGPT changes everything… or does it?

Some of the recent ChatGPT panic in higher education is focused around academic integrity. ChatGPT seems to increase the ease of producing coherent writing with very little personal effort, and moreover to do so in a way that cannot be easily detected using pre-existing plagiarism-detection software. On that description, it seems worthy of panic: ‘easier to cheat and harder to detect’ seems a very dramatic shift in the academic integrity landscape.

But looking through the literature on academic integrity, there are at least two explanations of academic dishonesty that have not changed with the launch of ChatGPT.

First, the psychology of academic dishonesty seems likely to remain the same even if cost-benefit analysis might shift in the context of ChatGPT. Many studies over the years have asked students and researchers whether they have ever engaged in academically dishonest behaviour, and if so, why? Many of the reasons given in 2019 or 2021 are still the reasons given in the ChatGPT era, so they are worth delving into.

Prior to the launch of ChatGPT, the received wisdom was that conscious violations of academic integrity policies were most likely when students felt cornered. De Maio and Dixon’s (2022) review finds that students who admit to academic misconduct converge on several reasons or explanations: students tend to cite time pressure, peer pressure, and grade pressure as reasons for their academically dishonest behaviours. Tindall & Curtis (2020) find that “negative emotions” such as stress, anxiety, and depression are correlated with positive attitudes towards plagiarism. In short, students are most likely to engage in academic dishonesty when they feel under pressure, one way or another. Some of these pressures may arise for reasons internal to course design, while others might arise because students are three-dimensional people with lives (and pressures) outside of any particular course they may take. Syllabus design can potentially mitigate both.

Secondly, unknowing or ignorant violations of academic integrity requirements represent a significant portion of academic misconduct cases. Ignorance of the nuance of the requirements of academic integrity is frequent amongst students (and not nonexistent among faculty and researchers), so mistaken or accidental or ignorant violations of academic integrity requirements are to be expected. ChatGPT might add to a pre-existing confusion, but only by a matter of degree.

Proceeding with caution

These two features (stress and confusion) are not new with ChatGPT, and they frame my two-pronged approach to academic integrity in the era of ChatGPT.

First, in recognition of the proportion of students and academics who are confused by what ChatGPT is or does, take an educative rather than a punitive approach to the existence of ChatGPT. Teach what academic integrity requires for your educational context and explicitly mention ChatGPT. De Maio and Dixon (2022) emphasize that clear academic integrity statements and policies are important for maintaining a culture of academic integrity. McGee (2013) describes how cases of academic dishonesty are higher when expectations are unclear, and points to improvements in academic integrity when academic integrity expectations are made explicit at both the course and institutional level. If your institution has any specific resources on academic integrity or Honor Codes, show your students how to access them.

In the ChatGPT era, this might be as simple as including an explicit ChatGPT statement in the syllabus or in each assignment description. “For this assignment, use of ChatGPT and other LLM AI is not permitted, either for generating the assignment or for editing and improving it.” But, if you go this route, be consistent throughout your course and assessment designs. Include an explicit ChatGPT statement in your syllabus and in each assignment description, because an omission in just one place might be noticed – and read as permission.

If your preference or context allows, you may wish to redesign some assignments to explicitly allow ChatGPT. There are some good ideas for reimagining assessment using ChatGPT out there, with some key points being to keep ChatGPT components explicit and optional, and keep the credit proportional to the effort involved (i.e. likely low-stakes). If you are in a course context where assignment redesign is possible (e.g. small enrolment, low prep load, TA support, full control over assignment and syllabus design, etc.) your colleagues who are not so positioned will appreciate it if you explicitly mention that your ChatGPT-friendly policy applies only to your course, and does not extend to other courses.

But while we are on the subject of assessment design, the second component of my approach to ChatGPT is to acknowledge, and work to minimize, the psychological stressors that students cite to explain deliberate academic dishonesty. Low-stakes and flexible assessments help students mitigate grade pressure and time pressure. Offering students choices of prompts, choices of material, choices of occasions, or flexibility with deadlines can all empower students in ways that help them manage their stress. Flexibility in course and assessment design – such as flexible deadlines, flexible forms of engagement, and flexibility in modes of delivery – is part of a Universal Design for Learning approach. Fovet (2020) explains how flexibility and choice serve as components of UDL, and Amrane-Cooper et al. (2021) point to a connection between flexibility in assessment design and the easing of anxiety.

Sotiriadou et al. (2020) suggest that authentic assessment design can reduce incidents of academic dishonesty. Similarly, De Maio and Dixon (2022) point to authentic assessment design and authentic curricula as reducing incidents of academic dishonesty.

Finally, Bretag et al. (2019) point to personalized and reflective assignments as being less associated with academically dishonest behaviour. Bretag et al. (2019) also point to in-class assignments and vivas (oral examinations) as less associated with academically dishonest behaviour, although these assessment styles are more likely to be limited to smaller class sizes.

Over the coming months and years, ChatGPT will get better at doing what it does, and AI detectors may also become more effective. There will be various iterations of large language model AI, and each will have its foibles and its virtues. But if the circumstances surrounding your teaching do not permit a complete course re-design every semester, educating about academic integrity requirements will help students better understand the contours of the academic integrity terrain, while using assessment and course design to acknowledge and support students navigating the pressures and anxieties related to assessment will make it easier for students to continue to choose academic integrity.

In short, not quite everything has changed with the advent of ChatGPT, and not everything can change in response to ChatGPT.

It’s in the Syllabus!

When I was an undergraduate student, a syllabus consisted of only the most basic information for a course, and typically, it fit on a single sheet of paper. A syllabus was a simple list, usually without flourishes:

  • Course name and course code.
  • Professor’s name, contact information, and office hours.
  • Classroom and meeting times for the course.
  • Basic assessments and deadlines (essays, exams, assignments, and how much each was worth).
  • Sometimes, a rough outline of the weekly readings for the semester.

Times have changed. At the institution I taught at most recently, the syllabus template was 6 pages long prior to describing any of the assessments, or listing any readings. My syllabi over the last few years have typically stretched to about 12 pages, and sometimes more.

The modern syllabus is a much more formal document than the single sided photocopy I was handed on my first day of class. It is sometimes treated as a contract between students and instructor, sometimes treated as a pedagogical tool, and nearly always understood to be the fundamental source of information about a course. It tends to specify policies, rules and regulations, texts, and assignments for a given iteration of a course, especially in cases where courses are offered in multiple sections by different instructors. That makes it a very important resource for each student, and an opportunity for each instructor and course designer.

The problem is that as syllabi have grown in length and detail, they have also become increasingly inaccessible and overwhelming. Understandably, students don’t always read these multi-page documents full of legalese policy statements and multiple changes in formatting. Even when students do read the syllabus, they may predictably miss important details. Students miss crucial course information so ubiquitously that an industry of memes has popped up around redirecting students towards the syllabus. Mocking students for struggling to find crucial course information is neither mature nor productive, and I don’t recommend it. But, what should instructors do?

There are very good reasons to include as much course information as possible in the syllabus: it is the document that students will return to most often, and many colleges and universities view it as a policy document or contract, deviation from which is problematic. For this reason, the syllabus really does need to be comprehensive. But the more information it includes, the more likely it is that any particular piece of information will be overlooked, and as a result students will not read every word of it – and that’s okay. I have a few strategies to help you – and your students – get the most out of the syllabus.

1. Reframe

First, recognize that the student is not the problem. The syllabus’s length and comprehensiveness are both the problem and a desirable feature, and they are more or less set in stone by institutional policies. Focus your energy on encouraging students to reference the syllabus regularly, and additional energy on building nets (and networks) to help everyone get the information they need, when they need it. Additionally, recognize that every student is different, and a diversity of student learners is a good thing. Some students will read through the entirety of every syllabus, but it is perfectly reasonable that some (or many) students will not.

2. Redundancy

It is worth building redundancy into the syllabus. Make sure that the most important information (deadlines and avenues of communication) is communicated in more than one way. Perhaps it appears more than once in the syllabus, and perhaps it is additionally available through course webpages. As deadlines approach, highlight them in course communications (lectures, announcements, emails, etc.). Offer multiple pathways to find the same information.

3. Easter Eggs

Incentivize reading through the syllabus. Some people use syllabus quizzes to ensure that their students absorb the most important information in the syllabus. I once taught a course that offered a bonus mark (of 0.5%) for completing an introductory discussion post. It wasn’t hidden in the course materials, but the bonus mark wasn’t highlighted as much as other material. If students found the information about the bonus mark, they were likely to find other, more important information about the course in the process. I’m not sure what I think about hiding money in the syllabus and directing students to it, but it seems designed to prove that students don’t read the syllabus, rather than incentivizing reading the syllabus in the first place.

In recent courses, I have asked students to include a cute animal picture with each email. I view it as my own version of the brown M&M rider. If students email me with a cute picture attached, I know that they have had a good look at the syllabus and haven’t found the information they were looking for. (And they are probably not alone.) I’ll answer the question and add a course announcement on the course webpage, or a post on an FAQ page, or send out a course-wide email. If they email me without the cute picture attached, it’s no big deal, but it’s slightly more likely that asking me is their first port of call. And that is useful information, too. In either case, I will try to answer the question they have asked and also direct them to where they might find more information, but I’m less likely to generalize my response to an email without a cute animal picture.

4. Accessibility

Of course, all syllabi should use accessible and screen-reader-compatible formats. Choose accessible fonts and document formats. Provide alt-text for any images. Check the (minimum) accessibility requirements at your institution, but see if you can aim higher than that.

It should be clear by now that I am not impressed by the ‘It’s in the Syllabus!’ memes. They belittle students for something that is perfectly reasonable and understandable. So, when you are working on your next syllabus, remember to think about it from the student’s perspective, and create a document that will help the whole class to get on the same page throughout the course.

Be there or be square: Attendance and Academic Success

We all know that attendance is linked to academic performance, right? Better attendance typically means higher grades, and higher rates of absenteeism are linked to lower academic performance. This is so well known in academic circles that many university professors include the connection between attendance and academic performance in their introductory lectures or in their syllabi.

Yet, in the Covid era, many have also become increasingly aware of the impossibility of perfect attendance in many cases. Between illness and caregiving responsibilities and work commitments and financial hardship, there are lots of students for whom attendance in class is not the top priority. Universal design for learning suggests that we should design our courses so that students who face barriers to attendance don’t face additional barriers to academic success. After all, mere attendance shouldn’t be a pedagogical aim for most of us.

The temptation to make attendance part of assessment is real. Every instructor wants to encourage attendance in general. We want our students to do well, and we want them to get the most out of our classes. We want to encourage attendance because it is so hard to lecture to an empty room and impossible to run a lab or an engaging activity with too few students. We need to strike a balance between accommodating students who can’t attend, encouraging attendance for those who can, and building pathways to success in the course for both sets of students.

One worry is that attendance is an easily measurable proxy for learning outcomes, and it seems to have a predictive quality to it. We know as the semester progresses whether attendance is waning, and we try to adjust in the hopes of achieving better learning outcomes. Educators at all levels should care about learning outcomes, but should we therefore care about attendance? What if we’re tracking the wrong variable, for the wrong reasons?

The answer, it would seem, is mixed. Yes, there are plenty of studies that show high attendance rates correspond, in general, with high achievement rates, and high rates of absenteeism correspond, in general, with lower academic achievement rates.

But many studies suggest only a weak statistical connection between attendance and learning outcomes or academic results, and there are a lot of hints in the research that suggest that the causal connection is elsewhere.

One reason to think that attendance itself is not doing the work comes from this study showing that students who attend their undergraduate lectures achieve the highest test scores, but the students most likely to attend lectures are also students who had the highest admission scores to university. That is, students who already have a record of doing well academically are more likely to attend class. That strong students attend class and do well isn’t surprising. But what about the students who aren’t attending class, or who don’t have a record of high achievement? What about the students who can’t attend class, now, regardless of what their prior record suggests?

We should (and do!) worry about the reasons why students are present or absent from class. There is a very real worry that we may be systematically disadvantaging students who cannot attend class. In the past, students have made unprompted disclosures to me about missing class when: a war broke out in their home country; they were caring for a family member in hospital; their chronic illness flared up; they were in mental health crisis; they had to be in court; their caregiving responsibilities conflicted with class; they were offered a work shift that they couldn’t afford to turn down; they couldn’t afford bus fare to come to campus for every class. I’ve lost count of the number of hospital selfies and medical charts I’ve received (though, I wish the answer were zero). All of these reasons are legitimate and worthy of accommodation in my books, although very few of them qualify for official accommodation within the university’s policy. And none of these are explanations that students should feel the need to disclose to their professor.

Universal Design for Learning proposes that we consider these students at the syllabus design stage, before the course starts. If the syllabus is designed with the widest possible range of students in mind, the course will be accessible to them without anyone having to disclose their reasons for absence.

This study suggests that mere attendance is only weakly correlated with exam performance, but levels of cognitive engagement (to a certain extent) and behavioural engagement (to a greater extent) are highly correlated with exam performance. Büchele argues that what matters most is the level of engagement when students attend class. Cognitive engagement means getting the students thinking in class. Examples of cognitive engagement include using a ‘flipped classroom’, student activities that make comparisons or connections such as mind mapping, and various types of experiential learning. But in a large class, and on a large scale, those can be hard to achieve. The good news about Büchele’s study is that even (mere) behavioural engagement has a positive impact on exam performance. Behavioural engagement is a stimulus-response type of engagement, including classroom response activities (such as Poll Everywhere or Kahoot).

Finally, this study finds that although attendance positively correlates with student performance, attendance policies do not have a significant effect on attendance. My takeaway is that an attendance policy is not going to do the work of motivating students to attend on its own. After all, there is good reason to think that student absences are experienced with regret, or chosen for good reason. Awarding marks for attendance or penalizing non-attendance with mark deductions might artificially widen the performance gap between those who attend and those who are absent. But it won’t have changed how much the students have learned.

A better strategy is to acknowledge the reality of absenteeism and try to build pathways to engagement for students who cannot attend. Perhaps an asynchronous discussion board or online mind-mapping activity for those who cannot attend with the rest of the class, in parallel with in-person discussions and synchronous mind-mapping activities for those who are able to attend.

A final note about the pandemic’s impact on attendance and burnout. Rates of student absenteeism are much higher than they ever have been. And for some portions of the pandemic, even the correlation between attendance and academic performance has dropped off. Many people reported much lower than usual academic performance, irrespective of attendance rates. Pandemic life is hard on all of us, and it is especially demoralizing to find that engagement strategies that worked in the past are no longer as effective.

This brief survey of the literature on attendance and performance suggests that engagement, rather than attendance, is the thing we should all be tracking. But, of course, engagement is very hard to track – much more difficult than attendance. As higher ed goes through rapid changes brought about by the Covid-19 pandemic, learning outcomes remain the end point to aim for.