Maintaining Academic Integrity
We are working closely with colleagues across the CLT, Academic Registry and the Skills Centre to develop a joined-up approach to the use of Generative AI at the University.
To accomplish this:
- Going forward, we need to focus on assessment design.
- We need to provide clear support and guidance for our students on these tools and their appropriate use.
- We need to consider our approach to AI literacy more broadly, and support staff and students in gaining a deeper understanding of the pros and cons of AI in Higher Education.
Updated Academic Integrity Guidance
The University's current Code of Practice on Academic Integrity, outlined in QA53, Examination and Assessment Offences, together with the procedures for dealing with academic misconduct, has been reviewed to ensure that it encompasses the use of generative AI tools.
| Potential Academic Offence | Definition and further, non-exhaustive, examples |
| --- | --- |
| Plagiarism | Claiming, submitting or presenting the existing work, ideas or concepts of others as if they were one's own, without citing the original source. This includes individual or machine-generated paraphrasing. |
| Self-plagiarism (autoplagiarism) | Duplication of one's own work (including work at previous institutions): submitting content, as if for the first time and without acknowledgement, that has previously been assessed. |
| Collusion (unauthorised collaboration) | Submission of work presented as if it were one's own that has been done in unauthorised collaboration with someone or something else, such as other people or artificial intelligence tools and technologies. This does not include permitted collaboration as part of groupwork. |
| Contract Cheating (including impersonation) | Fraudulent activity, notably the submission of work presented as one's own that has been purchased, commissioned, downloaded from an essay repository, or prepared by someone or something other than yourself. |
| Fabrication or Falsification | Negligent, false or misleading representation of evidence, results, data or information which forms part of your submitted work, with the intention to deceive the marker. |
| Breaching of examination regulations | Breaching the regulations governing University examinations. |
The University's Academic Integrity Statement has also been reviewed, and a bullet point (Point 7) added to reference the use of generative AI tools:
"You have not presented content created by generative AI tools (such as Large Language Models like ChatGPT) as your own work."
This minor change does not ban the use of such tools (we need to be careful not to inadvertently ban the use of 'everyday AI' tools such as those now integrated into Office365); rather, it emphasises that students should not try to pass off work created by these tools as their own.
Students confirm that they understand and abide by these conditions when submitting assessments and examination scripts. In Moodle, by default, students are required to acknowledge that:
"By submitting this assessment, I confirm that I agree to the University's Academic Integrity Statement."
This wording is also included on all examination cover sheets, whether exams are being submitted via Inspera or in-person.
There is no requirement to include any additional information in assignment briefs if you do not wish your students to use any form of generative AI, as this is already covered by the default academic integrity statement. However, you may wish to remind students of your expectations when discussing the assignment and point them to help and support.
Note: there may be circumstances in which you wish your students to engage with one or more generative AI tools to produce their assessment submission.
In these situations, students will need clear guidance.
You will need to specify both the AI tool(s) that students are permitted or required to use (e.g. ChatGPT) and the purposes for which they may be used (e.g. creating a draft, critiquing AI-generated text, using AI as a formative tool to provide feedback). Students must acknowledge the use of AI in their assignment, providing references as required.
You should make this information clear and explicit on the assessment brief. You can use the following default text:
In this assessment, you can only use the following generative Artificial Intelligence (AI) tool(s) – [insert names of AI tools, such as ChatGPT, or types of tools, e.g. image generators]. You may only use the AI tool to [insert the task or activity for which AI is permitted]. You must appropriately acknowledge where you have used the tool(s), and/or provide references where necessary.
Referencing
Guidance on referencing AI-generated content is currently limited, but the format below is often used:
OpenAI, ChatGPT, 24 Jan. 2023, https://chat.openai.com/
The Library is working to produce further resources in this area and has updated the referencing guide: Harvard Bath. In all situations, you will need to ensure that your students understand what constitutes acceptable use of generative AI tools and when their use is not appropriate. As a starting point, the Skills Centre has produced a short video for students exploring the use of AI tools.
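For students who manage references with LaTeX and BibTeX, the format above can be captured in a `@misc` entry. This is an illustrative sketch only: the entry key and field choices are assumptions, not the Library's official Harvard Bath format.

```bibtex
% Illustrative BibTeX entry mirroring the format above
% ("OpenAI, ChatGPT, 24 Jan. 2023, https://chat.openai.com/").
% Entry key and field choices are assumptions, not official guidance.
@misc{openai2023chatgpt,
  author       = {{OpenAI}},
  title        = {ChatGPT},
  year         = {2023},
  month        = jan,
  howpublished = {\url{https://chat.openai.com/}},
  note         = {Generative AI tool; accessed 24 January 2023}
}
```

The double braces around `{{OpenAI}}` keep BibTeX from treating the organisation name as a personal name. Always check the referencing style required by your department before relying on a format like this.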
Key links
QAA briefing paper for HE providers: The rise of artificial intelligence software and potential risks for academic integrity
BBC video: What is ChatGPT?
JISC article: Considerations when wording AI advice and policies
Referencing and Generative AI: Guidance from the University of Queensland
Resources & events
CLT blog posts:
ChatGPT: an introduction
Generative AI and Academic Integrity - Part 1
Generative AI and Academic Integrity - Part 2
Hub resources:
ChatGPT and Artificial Intelligence
Generative AI - case studies from Bath
Updated on: 16/03/2023