
Generative AI – Assessment Categorisation

Overview

The following categories are designed to increase transparency and clarity for both staff and students. Use them as a guide: we are aiming for greater clarity at the course level, so consider how the categories work in your subject discipline, taking into account both the needs of your students and the graduate attributes with which you wish to equip them. We recognise, and embrace, that the boundaries between the categories, and how they are interpreted, will likely vary across subject disciplines and the learning outcomes being assessed – there are no “right” or “wrong” answers.

Our categories

Assessments should be categorised in terms of where the use of GenAI:

Type A: Is not permitted.
Type B: Is permitted as an assistive tool for specific, defined processes within the assessment; its use is not mandatory in order to complete the assessment.
Type C: Has an integral role: its use is mandatory, and it serves as a primary tool throughout the assessment process.

(adapted from guidance produced by UCL)

The broad expectation is that Type B will be the “new norm” for the majority of coursework for the foreseeable future, with increasing use of Type C over time, alongside moderate use of Type A (for coursework).

This approach ensures that we align with our Senate-approved high-level GenAI principles in Education, our existing CT principles of Assessment for Learning and Supporting the Needs of All Learners, and our Assessment for Learning Design Principles. It is also pragmatic and future-focused. GenAI tools are increasingly ubiquitous and invisible in the tools that students – and staff – use in their work (e.g. Office365 products and internet browsers). The line between using GenAI tools and not using them is therefore blurred, and it will shortly be impossible to use tools such as MS Word without (intentionally or not) using GenAI.

Furthermore, the evidence so far indicates that there is no simple way of detecting GenAI in student work: so-called AI detector tools are fundamentally flawed in concept, do not work effectively, and are prone to bias against certain groups or individual characteristics. An approach which predominantly bans the use of such tools is therefore not conducive to preparing students for a future in which GenAI will permeate all aspects of work, research and learning.

The categories in detail

Type A: GenAI is not permitted

In certain, specific circumstances, some coursework will require that GenAI is not used. Staff can, of course, simply instruct students not to use GenAI tools, but for open-book assessments such as essays and reports there will be no definitive way to prove wrongdoing. In practice, therefore, this option is likely to be limited to a small number of assessments involving in-person tests, practical labs, vivas or presentation-style activities.

Use this category, therefore, for assessments where demonstrating students’ own knowledge is paramount, particularly where this aligns with learning outcomes or graduate attributes that focus on the recall of knowledge, and/or where these learning outcomes are not already assessed via an exam. In doing so, reflect on what challenges GenAI might pose to the integrity of the assessment above and beyond existing challenges, such as essay mills, copying from the internet or collusion.

Where assessments are identified as Type A, consider whether there are any immediate risks associated with the use of GenAI that can be mitigated this year. For example, an additional formative assessment may be needed to provide an added layer of safety. This will require careful consideration and guidance from Academic Registry and the CLT, especially if any proposed changes will materially affect students or change an assessment’s type or weighting. It is also important to note that, given CMA compliance, there are limits to the changes we can make to assessments in-year.

Note that the deadline for making changes to assessments (e.g. from one type to another) for AY24/25 is the end of January 2024. Changes within an existing assessment type are normally permitted, so long as the assessment brief has not already been shared with students and/or the assessment is not already underway. If in doubt, please flag any proposed changes on our tool and we will get in touch to clarify.

Type B: GenAI as an assistive tool

Type B recognises that certain processes, tasks or attributes will increasingly harness GenAI to enhance student capabilities. This category enables a broad range of uses and applications of GenAI, but limits the extent to which they are allowed throughout the assessment. It also acknowledges that, as a general rule, whilst GenAI is permitted, its use is not mandated: it should be possible to complete the assessment without the tools, if that is the student’s preference.

As an example, GenAI tools may be permitted for specific tasks, or parts of an assignment, where the tools enhance efficiency to help students:

  • debug programming code
  • improve their comprehension, spelling, language and grammar
  • convert content from one medium to another (of particular use to students with disabilities)
  • find answers to coursework questions based on information that could be found on the internet
  • generate ideas about research, or generate material to provide a starting point
  • overcome ‘writer’s block’
  • revise materials or interact with content.

(Adapted from guidance provided by Coventry University)

Whilst some parts of the assessment may permit the use of GenAI, there will likely also be aspects where the use of AI is inappropriate and/or not relevant to the learning outcomes. Type B is, therefore, intended to give academics greater autonomy and control over shaping how and when they believe students should engage with these tools to support and enhance learning. Whilst it is not possible to detect whether a student has used AI, students should be asked to acknowledge its use, and to provide references where necessary. In this way, their use of GenAI becomes an extension of the existing sources of information and tools, such as the internet, available during an open-book assessment.

Type C: GenAI as an integral tool

Students will increasingly need to demonstrate their ability to use AI tools effectively and critically to tackle complex problems, make informed judgements and generate creative solutions (UCL). Type C assessments will, therefore, integrate the use of GenAI as a primary and mandatory tool throughout the assessment process. For example, this might include assessments which ask students:

  • to generate a solution to a problem harnessing GenAI, reflecting on the limitations of the tools used
  • to produce different outputs which can then be compared and contrasted, considering limitations
  • to generate knowledge or content which they can then reflect on and apply to their own context, demonstrating real-world application.

Whilst Type C integrates GenAI into the assessment as a whole, this is with the intention of harnessing GenAI as a tool to facilitate critical thinking, reflection, creative exploration and problem solving. In other words, the assessment facilitates the development of higher order thinking skills, through the use of such tools. The assessment provides an opportunity for students to demonstrate effective and responsible use of the tools (UCL). 

This will likely require a strong understanding of, and experience in using, GenAI tools, as well as AI literacy, for both staff and students; we would not recommend Type C assessments as the ‘starting point’ for summative assessment without appropriate scaffolding and support.

If students are being asked to engage with GenAI as part of the assessment process, please consider issues of equity and access. For general purposes, we would recommend using Bing Chat/Copilot in Creative Mode. This is free and, when logged in with a UoB account, ensures data is protected and grants access to GPT-4 – the most up-to-date model available. It can also retrieve live information from the internet and, whilst not as powerful as the paid-for ChatGPT+, can generate images via DALL-E 3. We are actively exploring other GenAI tools with DDAT – e.g. ChatGPT+, MS Copilot (as integrated into Office365) and GitHub Copilot. If you need these more specialist tools, please let us know. We cannot promise at this stage that these requests will be fulfilled, but we will work closely with DDAT to discuss this on your behalf.
