The Two-Lane Approach to GenAI Assessment Categorisation

Published on: 20/09/2023 · Last updated on: 04/12/2025

Evolving our approach to Generative AI in assessment

In 2024/25, like many universities across the sector, we adopted the ‘traffic light’ ABC model to categorise the use of generative AI (GenAI) in assessments. This framework clarified where and to what extent students could use GenAI tools within their coursework.

From 2026/27 onwards, we will move to a simpler, principles-based ‘two-lane’ approach, developed by the Association of Pacific Rim Universities (APRU). This model shifts emphasis from prescribing what students can or cannot do, towards ensuring that assessments authentically test the intended learning outcomes (ILOs) in a GenAI-enabled world, while actively developing students’ AI literacy.

2025/26 will act as a transition year, during which we will continue to support staff in adapting assessment design as we move from policing boundaries to modelling ethical and effective GenAI use. Course documentation for 2025/26 should continue to indicate the relevant category (A, B, or C) while staff begin redesigning for the two-lane approach. For coursework-based assessments, students should be given guided opportunities to explore whether, when and how GenAI can be used responsibly and productively, depending on the learning context. Further guidance on selecting one of the three categories, and template text for assessment briefs, can be found here.

The Two-Lane Approach

The two lanes are: (1) ‘closed’, where GenAI must not be used; and (2) ‘open’, where engagement with GenAI is either necessary or optional for completion of the assessment. To hear more about the two-lane approach, which originated at the University of Sydney, click here, or read about it here.

Closed

GenAI tools are not permitted. Assessment conditions are designed to prevent any use of GenAI.

Assessment in this category:

  • Time-limited
  • Invigilated
  • In-person
  • Could include lab practicals or exams
  • Often used for assessing foundational knowledge

Open

Use of GenAI is optional (or in some cases integral).

Assessment in this category:

  • Coursework of all kinds
  • Remote open-book exams
  • Ideal for assessing areas where GenAI would be expected to be used effectively in industry
  • Could be time-limited

The Two-Lane Approach builds on, and complements, the existing ABC categorisation. During 2025/26, it should be viewed as an overarching framework that helps staff identify the most appropriate ways to assess different ILOs in a GenAI-enabled environment.

Practical notes for course teams (AY25/26 transition)

  • Keep ABC labelling in handbooks while designing to the lanes.
  • Add a short AI-use statement requirement to coursework briefs (Types B and C).
  • Provide modelling and exemplars of good and poor GenAI use to build students’ AI literacy.
  • Attend a CLT workshop [link to follow] on getting started with GenAI.

Two-Lane Approach Alignment with the ABC Framework

The summary below shows how the two approaches align, highlighting the evolution in emphasis. It also indicates the implications for assessments’ validity, reliability and effectiveness.

Closed lane: Type A

  • GenAI use: is not permitted.
  • Change of emphasis: ‘don’t use’ becomes ‘can’t use’ (as far as possible). We pivot away from Type A coursework assessment.
  • Design notes: Validity: aligns to ILOs that target recall, fluency, threshold concepts, or regulated competence. Reliability: controlled conditions (invigilated/on-campus, device restrictions, question banks/randomisation, version control). Effectiveness: focused on individual capability. Maintain fairness and comparability through standard assessment conditions, clear rubrics, calibration and moderation (OfS B4).

Open lane: Type B

  • GenAI use: is permitted as an assistive tool for specific defined processes within the assessment; its use is not mandatory in order to complete the assessment.
  • Change of emphasis: move from allow/ban lists to modelling effective and ethical uses of GenAI for defined purposes. Discourage usage which does not develop critical thinking or obscures who did the intellectual work (e.g. full drafting), unless required by the ILOs. Support student awareness of academic integrity, and provide opportunities to develop GenAI literacy.

Open lane: Type C

  • GenAI use: has an integral role; the use of GenAI is mandatory, and it is used as a primary tool throughout the assessment process.
  • Change of emphasis: it remains a necessity that students engage with GenAI, though this need not be directly.

Design notes for the open lane (Types B and C):

  • Validity: design tasks that require judgement, synthesis, critique and source use, so that the construct being assessed remains the student’s thinking. Consider requiring process artefacts (prompt snippets, drafts, rationale).
  • Reliability: transparent marking descriptors and criteria for process quality, and moderation of reflective components.
  • Effectiveness: scaffold skills; authentic tasks mirroring contemporary professional practice.

Our generative AI assessment review model

In adapting existing assessments or developing new ones, please engage with our guidance on minimising academic misconduct through assessment design. Please get in touch with Abby Osborne or Ellie Kendall if you would like to discuss any of these issues further.

The three-stage process below is adapted from guidance developed by Lodge et al. at the University of Queensland and UCL, and is intended to help course teams evolve their approach to GenAI.

Step 1: Review assessments. Course teams should review their assessment portfolio to identify where GenAI puts the validity of an assessment at risk.
Step 2: Reduce risk. They should reduce this risk by testing skills in the appropriate lane (either open or closed), taking care that open assessments are designed to encourage effective, ethical use of GenAI that enhances learning and upholds standards.
Step 3: Develop opportunities. Course teams should consider how to further develop their curriculum, pedagogical approaches and assessment types to respond to the context of a GenAI-enabled world.

These steps are designed to support course teams to review, adapt and set assessments, and to ensure that for each piece of assessment, staff and students have a shared understanding of whether, where and to what extent GenAI tools should be used in the assessment process. Staff must inform students which category their assessment falls into and reinforce any expectations around the use of GenAI. A key message is that, whatever approach is adopted, the work must in all cases remain that of the student: they are in the driving seat and retain responsibility for their work.

Course teams should review their course assessment portfolio to identify where GenAI puts the validity of an assessment at risk:

  • Consider the impact of GenAI on your course ILOs, and the strengths and weaknesses of different assessment types.
  • Identify which of the three categories (A, B or C) current coursework falls into and communicate this to students as soon as possible.
  • Plan which assessments may require longer-term changes (e.g. changing assessment type) and speak to the CLT and/or Registry as needed.
  • Action any immediate short-term improvements to assessments (e.g. adding formative opportunities, introducing scenario-based questions or designing fictional scenarios so that students cannot enter direct questions into GenAI tools).

If you wish to change an assessment type to reduce risk (e.g. from an open-book to an invigilated exam), note that this will require careful consideration and guidance from Academic Registry and the CLT, especially where the change would materially affect students. It is important to note that there are limits to the in-year changes we can make to assessment, given the need for CMA compliance.

Where course teams identify assessments which currently do not, or in future will not, permit the use of AI (Type A/closed lane), teams should consider and plan how they can reduce and mitigate any immediate risks associated with the use of AI. This may require short- or longer-term changes to their assessments. For Types B and C (open lane), pay particular attention to communicating clear expectations and strengthening messaging around good academic integrity.

  • Consider opportunities to strengthen messaging within the course around expectations of academic integrity and academic citizenship.
  • Identify how course teams can support students to better develop their ethical and effective use of GenAI in the context of their discipline.
  • Plan how, as a course team, you will communicate expectations to students around the use of GenAI in coursework.
  • Action any additional layers of safety (e.g. formative vivas) which can both help to reduce risk and develop students’ AI literacy.

The Skills Centre has produced an overview of GenAI tools for students and developed an AI literacy module. The University of Ulster has also produced some guidance text which staff may wish to amend and include in programme handbooks and/or assignment briefs which sets out the pros/cons of GenAI tools and reminds students about ethical and responsible practice. AI for Education has also created a clear graphic which helps students understand how and when to use these tools wisely and responsibly.

  • Certain assessments may require an additional layer of safety to reduce the risk associated with generative AI. This could take the form of a spot-check formative-style viva, additional reflective commentaries or the submission of an annotated script alongside the assessment itself.
  • This approach is particularly effective when working with a smaller cohort. However, it may be limited in scope and capacity in terms of staff time and resource; any additional form of assessment would need to be designed and delivered in a way that is realistic and sustainable for staff both now and in the future.
  • This approach will also require careful planning to ensure that additional forms of assessment do not create additional barriers for students who are already at risk of being marginalised. 

Course teams should also consider whether updates are needed to their curriculum, pedagogical approaches or assessment types to respond to the context of a GenAI-enabled world. Consider where future assessments may best ‘sit’ (open or closed lane; Type A, B or C) to ensure the learning outcomes are met and the ethical use of generative AI is maintained.

  • Consider where GenAI can either be incorporated as an assistive tool or integrated into the assessment process more deeply.
  • Identify opportunities for embedding GenAI more deeply into assessment design across the course, and the skills and support required both to teach and to assess GenAI use effectively.
  • Plan for how the course team will continue to investigate and share opportunities in GenAI in their disciplinary context.
  • Action any changes to assignment briefs/marking criteria as necessary.

Course teams may find an interactive resource on AI assessment design, developed by JISC and UCL, a useful starting point to consider how AI can be incorporated into different types of assessments and how these assess different learning outcomes.

  • Generative AI will increasingly become an integral part of future graduates’ ways of thinking, learning and working. In a world where information is so readily available, students will need to become experts in challenging, critiquing and investigating; this is central to the culture of Higher Education and will foster further innovation and creativity. Assessments which clearly define the ethical and safe use of generative AI will support students to harness such tools to complement learning rather than replace it.
  • Assessments which actively encourage ‘fact checking’, investigate the limitations or weaknesses of knowledge and use tools to enhance critical thinking will better prepare our graduates of the future. 
  • Similarly, assessments which embrace and harness AI to enable ‘cognitive offloading’ will allow our students to focus greater time and energy on problem solving and on synthesising new ways of thinking in creative ways.
  • Consider developing questions and scenarios which promote critical thinking, application rather than recall of knowledge, problem-solving in a specific context and/or an increased focus on the assessment process itself.
  • A longer-term approach to the broader assessment context, including developing marking criteria, assessment questions and actionable feedback which place greater emphasis or weighting on the application of knowledge and ideas rather than on knowledge recall, will help to shape the broader assessment and feedback culture, ensuring that we cultivate a community of learning linked to our assessment practices.
