Published on: 20/09/2023 · Last updated on: 04/12/2025
Evolving our approach to Generative AI in assessment
In 2024/25, like many universities across the sector, we adopted the ‘traffic light’ ABC model to categorise the use of generative AI (GenAI) in assessments. This framework clarified where and to what extent students could use GenAI tools within their coursework.
From 2026/27 onwards, we will move to a simpler, principles-based ‘two-lane’ approach, developed at the University of Sydney and since adopted widely across the sector. This model shifts emphasis from prescribing what students can or cannot do, towards ensuring that assessments authentically test the intended learning outcomes (ILOs) in a GenAI-enabled world, while actively developing students’ AI literacy.
2025/26 will act as a transition year, during which we will continue to support staff in adapting assessment design as we move from policing boundaries to modelling ethical and effective GenAI use. Course documentation for 2025/26 should continue to indicate the relevant category (A, B or C) while staff begin redesigning for the two-lane approach. For coursework-based assessments, students should be given guided opportunities to explore whether, when and how GenAI can be used responsibly and productively, depending on the learning context. Further guidance on selecting one of the three categories, and template text for assessment briefs, can be found here.
The Two-Lane Approach
The two lanes are: (1) ‘closed’, where GenAI must not be used; and (2) ‘open’, where engagement with GenAI is either necessary or optional for completion of the assessment. To hear more about the two-lane approach, which originated at the University of Sydney, click here, or read about it here.
| Closed | Open |
| --- | --- |
| GenAI tools are not permitted. Assessment conditions are designed to prevent any use of GenAI. | Use of GenAI is optional (or in some cases integral). |
| Assessment in this category:<br>- Time-limited<br>- Invigilated<br>- In-person<br>- Could include lab practicals or exams<br>- Often used for assessing foundational knowledge | Assessment in this category:<br>- Coursework of all kinds<br>- Remote open-book exams<br>- Ideal for assessing areas where GenAI would be expected to be used effectively in industry<br>- Could be time-limited |
The Two-Lane Approach builds on, and complements, the existing ABC categorisation. During AY 25/26, it should be viewed as an overarching framework that helps staff identify the most appropriate ways to assess different ILOs in a GenAI-enabled environment.
Practical notes for course teams (AY25/26 transition)
- Keep ABC labelling in handbooks while designing to the lanes.
- Add a short AI-use statement requirement to coursework briefs (Types B and C).
- Provide modelling and exemplars of good and poor GenAI use to develop students’ AI literacy.
- Attend a CLT workshop [link to follow] on getting started with GenAI.
Two-Lane Approach Alignment with the ABC Framework
The table below shows how the two approaches align, highlighting the evolution in emphasis, and indicates the implications for the validity, reliability and effectiveness of assessments:
| | Closed (Type A) | Open (Type B) | Open (Type C) |
| --- | --- | --- | --- |
| GenAI use: | Is not permitted. | Is permitted as an assistive tool for specific, defined processes within the assessment; its use is not mandatory to complete the assessment. | Has an integral role: the use of GenAI is mandatory, and it is used as a primary tool throughout the assessment process. |
| Change of emphasis: | ‘Don’t use’ becomes ‘can’t use’ (as far as possible). We pivot away from Type A coursework assessment. | Move from allow/ban lists to modelling effective and ethical uses of GenAI for defined purposes. Discourage usage that does not develop critical thinking or that obscures who did the intellectual work (e.g. full drafting), unless required by the ILOs. Support student awareness of academic integrity, and provide opportunities to develop GenAI literacy. | Students must still engage with GenAI, though this engagement need not be direct. |
| Design notes for reliability, validity and effectiveness of assessment: | Validity: aligns to ILOs that target recall, fluency, threshold concepts or regulated competence.<br>Reliability: controlled conditions (invigilated/on-campus, device restrictions, question banks/randomisation, version control).<br>Effectiveness: focused on individual capability. Maintain fairness and comparability through standard assessment conditions, clear rubrics, calibration and moderation (OfS B4). | Validity: design tasks that require judgement, synthesis, critique and source use, so the construct being assessed remains the student’s thinking. Consider requiring process artefacts (prompt snippets, drafts, rationale).<br>Reliability: transparent marking descriptors and criteria for process quality, and moderation of reflective components.<br>Effectiveness: scaffold skills; authentic tasks mirroring contemporary professional practice. | |
Our generative AI assessment review model
In adapting existing assessments or developing new ones please engage with our guidance on minimising academic misconduct through assessment design. Please get in touch with Abby Osborne or Ellie Kendall if you would like to discuss any of these issues further.
The three-stage process below is adapted from guidance developed by Lodge et al. at the University of Queensland and by UCL, and is intended to help course teams evolve their approach to GenAI.
| Step | Action |
| --- | --- |
| 1. Review assessments | Course teams should review their assessment portfolio to identify where GenAI puts the validity of an assessment at risk. |
| 2. Reduce risk | Course teams should reduce this risk by testing skills in the appropriate lane (open or closed), taking care that open assessments are designed to encourage effective, ethical use of GenAI that enhances learning and upholds standards. |
| 3. Develop opportunities | Course teams should consider how to further develop their curriculum, pedagogical approaches and assessment types to respond to the context of a GenAI-enabled world. |
These steps are designed to support course teams to review, adapt and set assessments, and to ensure that, for each piece of assessment, staff and students have a shared understanding of whether, where and to what extent GenAI tools should be used in the assessment process. Staff must inform students which category their assessment falls into and reinforce any expectations around the use of GenAI. A key message is that, whatever approach is adopted, the work must remain that of the student: they are in the driving seat and retain responsibility for their work.