Colleagues across the CLT, Academic Registry, the Skills Centre and Academic Departments are working closely together to develop a joined-up approach to the use of Generative AI (GenAI) at the University. Maintaining academic integrity is a crucial aspect of this work.
Maintaining academic integrity
- The use of GenAI in coursework (unless otherwise specified by staff) is not academic misconduct in and of itself; it becomes misconduct only when the tools are used unethically, or when they are used to generate complete assignments that students then attempt to pass off as their own work.
- Likewise, ineffective or poor use of GenAI, such as students not fact-checking or critically evaluating the statements it produces, or including long swathes of generated text for which they simply provide a reference, is not misconduct, but it may constitute poor academic practice.
- Instead, we have updated our academic integrity statement to include the following:
You have not presented content created by generative AI tools (such as Large Language Models like ChatGPT) as though it were your own work.
- This minor change does not ban the use of such tools (we need to be careful that we do not inadvertently ban the use of ‘everyday AI’ tools such as those being integrated into Office365); rather, it emphasises that students should not try to pass off work created by these tools as their own.
- Students confirm that they understand and abide by these conditions when submitting assessments and examination scripts. In Moodle, by default, students are required to acknowledge that:
“By submitting this assessment, I confirm that I agree to the University’s Academic Integrity Statement.”
- This wording is also included on all examination cover sheets, whether exams are being submitted via Inspera or in-person.
- Further, QA53 (Examination and Assessment Offences) has been updated and now includes reference to artificial intelligence tools.
- We also suggest staff use this opportunity to reinforce key messaging around good academic practice and point students to the Academic Integrity Test, to underline the importance of academic integrity more generally.
AI detector tools
Please note: from an assessment standpoint, there are no independently validated tools that can reliably and accurately detect GenAI-produced material. Research has shown that AI detector tools are (at least currently) fundamentally flawed in concept; many are not fully developed, and the underlying data models are prone to bias against certain groups or individual characteristics. Further, the established method of comparing past submissions for changes in style and content will only hold in the short term, given that students starting a course now will have had full access to AI from the start.