Generative AI and Assessment

Our generative AI assessment review model

Generative AI assessment review tool. A three-stage model: (1) review; (2) reduce; (3) develop

We propose a simple model for course teams and departments to review their coursework in light of GenAI and to help them plan for ongoing development. It has been shaped by the work of both academic and professional services colleagues at Bath, who are already engaging with and adapting their approach to assessment design and development. The review tool also builds on the guidance developed by Lodge et al. at the University of Queensland and UCL.

The three steps below are designed to support course teams to review, adapt and set assessments, and to ensure that for each piece of assessment, staff and students have a shared understanding of the extent to which GenAI tools can be used, and where in the assessment process. Once Step 1 is complete, we recommend that staff inform students as soon as possible which category their assessments fall into and reinforce any expectations around the use of GenAI. A key message is that whatever approach is adopted, in all cases the work must remain that of the student – they are in the driving seat and retain responsibility for their work.

Step 1: Review assessments

Course teams should Review their course assessment portfolio to identify where, when and to what extent the use of GenAI is permitted.

  • Consider the impact of GenAI on your course ILOs, and the strengths and weaknesses of different assessment types.
  • Identify which of the three categories listed below current coursework falls into and communicate this to students as soon as possible.
  • Plan which assessments may require longer-term changes (e.g. changing assessment type) and speak to the CLT and/or Registry as needed.
  • Action any immediate short-term opportunities (e.g. adding formative opportunities, introducing scenario-based questions, or designing fictional scenarios so that students cannot enter questions directly into GenAI tools).

If you wish to change an assessment type to reduce risk (e.g. from an open-book to an invigilated exam), note that this will require careful consideration and guidance from Academic Registry and the CLT, especially where the change would materially affect students. It is important to note that there are limits to the changes we will be able to make to assessment in-year, given CMA compliance.

Step 2: Reduce risk

Where course teams identify assessments which do not currently (or will not in future) permit the use of AI (Type A), they should consider and plan how to Reduce and mitigate any immediate risks associated with the use of AI. This may require short- or longer-term changes to their assessments. For Types B and C, pay particular attention to communicating clear expectations and strengthening messaging around academic integrity.

  • Consider opportunities to strengthen messaging within the course around expectations of academic integrity and academic citizenship.
  • Identify how the course team can support students to develop ethical and effective use of GenAI in the context of their discipline.
  • Plan how, as a course team, you will communicate expectations to students around the use of GenAI in coursework.
  • Action any additional layers of safety (e.g. formative vivas) which can both help to reduce risk and develop students’ AI literacy.

The Skills Centre has produced an overview of GenAI tools for students and developed an AI literacy module. The University of Ulster has also produced guidance text, setting out the pros and cons of GenAI tools and reminding students about ethical and responsible practice, which staff may wish to amend and include in programme handbooks and/or assignment briefs. AI for Education has also created a clear graphic which helps students understand how and when to use these tools wisely and responsibly.

  • Certain assessments may require an additional layer of safety to reduce the risk associated with generative AI. This could take the form of a spot-check, formative-style viva, additional reflective commentaries or the submission of an annotated script alongside the assessment itself.
  • This approach is particularly effective when working with a smaller cohort. However, it may be limited in scope and capacity in terms of staff time and resource; any additional form of assessment would need to be designed and delivered in a way that is realistic and sustainable for staff, both now and in the future.
  • This approach will also require careful planning to ensure that additional forms of assessment do not create additional barriers for students who are already at risk of being marginalised.

Step 3: Develop opportunities

Course teams should also consider which assessments, in the longer term, may benefit from further development or enhancement to ensure they remain robust as GenAI capabilities evolve. Again, the three categories above can be used to consider where future assessments may best ‘sit’ to ensure that learning outcomes are met and the ethical use of generative AI is maintained.

  • Consider where GenAI can either be incorporated as an assistive tool or integrated into the assessment process more deeply.
  • Identify the opportunities for embedding GenAI more deeply into assessment design across the course, and the skills and support required to teach and assess the use of GenAI effectively.
  • Plan for how the course team will continue to investigate and share opportunities in GenAI in their disciplinary context.
  • Action any changes to assignment briefs/marking criteria as necessary.

Course teams may find an interactive resource on AI assessment design, developed by JISC and UCL, a useful starting point for considering how AI can be incorporated into different types of assessment and how these assess different learning outcomes.

  • Generative AI will increasingly become an integral part of how future graduates think, learn and work. In a world where information is so readily available, students will need to become experts in challenging, critiquing and investigating; this is central to the culture of Higher Education and will foster further innovation and creativity. Assessments which clearly define the ethical and safe use of generative AI will support students to harness such tools to complement learning rather than replace it.
  • Assessments which actively encourage ‘fact-checking’, investigation of the limitations or weaknesses of knowledge, and the use of tools to enhance critical thinking will better prepare our future graduates.
  • Similarly, assessments which embrace and harness AI to enable ‘cognitive offloading’ will allow our students to focus greater time and energy on problem-solving, synthesising new ways of thinking and approaching problems creatively.
  • Consider developing questions and scenarios which promote critical thinking, application rather than recall of knowledge, problem-solving in a specific context and/or an increased focus on the assessment process itself.
  • A longer-term approach to the broader assessment context, including developing marking criteria, assessment questions and actionable feedback which place greater emphasis or weighting on the application of knowledge and ideas rather than knowledge recall, will help to shape the broader assessment and feedback culture, ensuring that we cultivate a community of learning linked to our assessment practices.
