
Generative AI and Academic Integrity

Published on: 20/09/2023 · Last updated on: 02/09/2024

Joined-up approach

Colleagues across the CLT, Academic Registry, the Skills Centre and Academic Departments are working closely together to develop a joined-up approach to the use of Generative AI (GenAI) at the University. Maintaining academic integrity is a crucial aspect of this work.

There is certainly an academic integrity issue here; however, it's worth considering it in the context of other threats to integrity such as plagiarism, collusion and the use of essay mills. Whilst the mechanism is different, the underlying issue of students deliberately passing off work they didn't produce as their own is not a new one. As with these other forms of misconduct, we have no reason to suspect the issue is widespread, and we shouldn't assume that students will use AI tools to cheat. We have a chance to frame the conversation around AI as a positive opportunity to engage with a powerful tool to enhance learning and productivity.

James Fern, DOT (Department of Health), and Member of the AI Task and Finish Group

Maintaining academic integrity

  • The use of GenAI in coursework (unless otherwise specified by staff) is not academic misconduct in and of itself, unless it is used unethically or to generate complete assignments that students then attempt to pass off as their own work.
  • Likewise, ineffective or poor use of GenAI, such as failing to fact-check or critically evaluate the statements it produces, or submitting long swathes of AI-generated text accompanied only by a reference, is not misconduct, but it may constitute poor academic practice.
  • Instead, we have updated our academic integrity statement to include the following:

You have not presented content created by generative AI tools (such as Large Language Models like ChatGPT) as though it were your own work.

  • This minor change does not ban use of such tools (we need to be careful that we do not inadvertently ban the use of ‘everyday AI’ tools such as those being integrated into Office365); rather, it emphasises that students should not try to pass off work created by these tools as their own.
  • Students confirm that they understand and abide by these conditions when submitting assessments and examination scripts.  In Moodle, by default, students are required to acknowledge that:

“By submitting this assessment, I confirm that I agree to the University’s Academic Integrity Statement.”

  • This wording is also included on all examination cover sheets, whether exams are being submitted via Inspera or in-person.
  • Further, QA53 (Examination and Assessment Offences) has been updated and includes reference to artificial intelligence tools. 
Potential Academic Offence: Definition and further, non-exhaustive, examples

Plagiarism
Claiming, submitting or presenting the existing work, ideas or concepts of others as if they are one’s own, without citing the original source. This includes individual or machine-generated paraphrasing.

Self-plagiarism (auto-plagiarism)
Duplication of one’s own work (including work at previous institutions): submitting content that has previously been assessed as if for the first time and without acknowledgement.

Collusion (unauthorised collaboration)
Submission of work presented as if it is one’s own that has been done in unauthorised collaboration with someone or something else, such as other people or artificial intelligence and technologies. This does not include permitted collaboration as part of groupwork. It includes:
  1. Sharing of work and/or answers with other persons within or beyond your institution, whether shared privately or via a cheat site.
  2. Acquiring answers or information from artificial intelligence.
  3. Allowing someone else to use your assignment or exam answers for academic credit.

Contract cheating (including impersonation)
Fraudulent activity, notably the submission of work presented as one’s own that has been purchased, commissioned, or downloaded from an essay repository, or prepared by someone or something other than yourself. For example:
  1. Buying an assignment in full or in part from a person, repository, or organisation.
  2. Purchasing or otherwise acquiring a copy of exam questions, tests, assignments, or answers by any unauthorised means (other than example tests/questions provided by your professor for practice). This includes ‘cheat’ sites such as Chegg, StuDocu, and similar.
  3. Impersonating another student, or having someone impersonate you, in a class, at an exam or test, or in any other situation in which you are being evaluated.

Fabrication or falsification
Negligent, false or misleading representation of evidence, results, data or information which forms part of your submitted work, with the intention to deceive the marker.

Breaching examination regulations
Breaching:
  1. The University’s Academic Integrity Statement.
  2. Stipulated rules in an exam instruction sheet.
  3. Rule 2: Conduct in Examinations. For example:
    • Any unauthorised communication during the assessment submission window.
    • Obtaining an examination paper in advance of the examination.
    • Impersonation of an examination candidate.
    • Having or using any unauthorised material or an unauthorised device.

  • We also suggest staff use this opportunity to reinforce key messaging around good academic practice and point students to the Academic Integrity Test, in order to underline the importance of academic integrity more generally.

AI detector tools

Please note: from an assessment standpoint, there are no independently validated tools that can reliably and accurately detect GenAI-produced material. Research has shown that AI detector tools are (at least currently) fundamentally flawed in concept: many are not fully developed, and the underlying data models are prone to bias against certain groups or individual characteristics. Further, established methods of checking past submissions for changes in style and content will only hold in the short term, given that students starting a course now will have had full access to AI from the start.

We wish to remind staff that, however well intentioned, you must not submit student work to such tools, as this may compromise students’ intellectual property and personal data rights, which are enshrined in law. It must also be made clear to students that they must not upload any personal data to AI systems without taking data protection requirements into account.
