This page is a work in progress. It provides high-level information, guidance and resources to support our understanding and use of Artificial Intelligence (AI) tools such as ChatGPT.
Practically:
- As we note below, our focus should be on assessment design.
- We need to double down on academic integrity support for students.
- We need to be *cautious* in our exploration of these tools in the context of teaching and assessment, being mindful of data security implications and ethical issues. For instance, while companies are now developing and releasing AI watermarking and AI "detection" tools, we need to review their efficacy and be ethical in our use of them.
- We need to be clear about when AI can and cannot be used in student work (and by staff to generate content or assessments). For example, publishers are banning ChatGPT as a cited author, but some (major ones) are allowing it as a tool for drafting, as long as the (human) author generates the knowledge and checks that statements are factually correct.
- To that end, we should consider our approach to AI literacy, and supporting staff and students to gain a deeper understanding of the pros and cons of AI in Higher Education.
ChatGPT: A very brief introduction
ChatGPT came to global attention in December 2022, having been launched in November by OpenAI - a research laboratory based in San Francisco (USA). OpenAI has a general mission to build safe Artificial General Intelligence (AGI), by which it means autonomous systems which 'outperform humans at most economically valuable work [and] benefit all of humanity'.
The news is ablaze with stories, ranging from ChatGPT heralding an existential threat to academic integrity, to the future world of work (where white-collar workers will be replaced by our AI overlords), or even the rise of The Terminator. It has even provided relationship advice and, in one example, was used with another tool to create a virtual AI wife.
ChatGPT in Higher Education
ChatGPT is already good enough for undergraduate students to use in their writing process: it can suggest sentences, paragraphs and essay plans, give feedback on drafts, and even produce entire scripts; it can also generate basic code. Indeed, it can be used in many ways in Higher Education - and not just by students; teachers, too, can use it to generate lesson plans, marking criteria - and even mark work! It has, however, a number of known limitations, and its use by students and teachers raises ethical questions, and poses challenges to academic integrity and the way we currently assess.
Below we provide some initial - and developing - examples and guidance for staff to consider in using ChatGPT in their teaching and assessment.
ChatGPT presents opportunities (and challenges) not only for existing assessment practices, but also for learning and teaching practices. Lucinda McKnight, from Deakin University (Australia), for instance, identifies the opportunity to:
- Use different AI writers to produce different versions of text on the same topic, for students to compare and evaluate (one possible set-up is sketched below)
- Use discrimination to work out where and why AI text, human text or hybrid text is appropriate, and give accounts of this thinking.
- With students, research and establish the specific benefits of AI-based content generators for your discipline. For example, how might it be useful to be able to produce text in multiple languages, in seconds? Or create text optimised for search engines?
- Explore different ways AI writers and their input can be acknowledged and attributed ethically and appropriately in your discipline. Discuss how AI could lead to various forms of plagiarism, and how to avoid this.
As she argues, all of this requires skills in search optimisation, evaluation and editing, and the development of a critical awareness of the potential negatives of algorithmic content generation. For example, how can we best teach students to question what 'assumptions' the AI has used in answering a question; what voices - or data - might be missing from its response; whether there are any biases or injustices built into its reply; and how the AI has been trained - what ethical safeguards, if any, have been 'baked in' to its model?
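By way of illustration, here is a minimal sketch of how the first of these activities - generating several versions of a text on the same topic for students to compare - might be scripted against OpenAI's API, using its Python library as it stood at the time of writing. The topic, styles, model name and parameters are illustrative assumptions only, not recommendations; the same effect can be achieved simply by re-prompting in the ChatGPT interface.

```python
# A minimal sketch, assuming the openai Python package (v0.27-era API)
# and an OPENAI_API_KEY environment variable. All prompts, styles and
# parameters below are illustrative, not recommendations.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

TOPIC = "the causes of the 2008 financial crisis"  # hypothetical example topic

def generate_version(style: str) -> str:
    """Request one ~150-word version of a text on TOPIC in the given style."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": f"Write a 150-word summary of {TOPIC} in a {style} style.",
            }
        ],
        temperature=0.9,  # a higher temperature encourages more varied outputs
    )
    return response["choices"][0]["message"]["content"]

# Produce three contrasting versions for students to compare, evaluate
# and critique - e.g. for accuracy, voice, and missing perspectives.
for style in ("formal academic", "journalistic", "confident but vague"):
    print(f"--- {style} ---")
    print(generate_version(style))
```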
Despite the hype, the issues that ChatGPT might be seen to raise for assessment practice are not necessarily new to Higher Education. Responding to such challenges aligns with positive opportunities to transform why and how we carry out assessment, including:
- How we measure learners' critical thinking, problem-solving and reasoning skills rather than essay-writing abilities.
- How we shift from recall of knowledge to real world application.
- How we develop students who are able to reflect on their learning and performance.
As we seek to educate and enable our students to establish themselves in the values of Academic Integrity, ensuring they have clarity about the process of assessment and an explicit understanding of the skills they are developing will reduce dependency on AI-related tools. Indeed, ChatGPT could be used to support:
- Student understanding of, and familiarity with, marking criteria, by generating examples of work at different levels for students (individually or in groups) to mark and discuss with teaching staff.
- The practice of skills such as analysis and evaluation, by producing text on different subjects and in different forms for students to work with.
Sustainable solutions to the challenges to academic integrity posed by tools such as ChatGPT will require ongoing dialogue with students, the promotion of the values of academic integrity, and a focus on effective assessment design. Here are some of the actions that can be taken now, as detailed in the QAA's briefing paper for HE providers (published 31 January 2023):
- Communication with students: engage early with students to provide information about the capabilities and limitations of AI software tools (such as inappropriate forms of citation and referencing and implicit bias) and how indiscriminate use may not only harm the quality of their education, but also undermine confidence in the qualification they are working towards.
- Emphasise student learning: support students to understand they will miss out on developing key skills such as critical thinking, evaluating evidence and academic writing if they rely on the uncritical use of AI tools, and extend existing institutional digital literacy strategies to encompass AI literacy.
- Communicate the value of integrity: discuss with students how the advancement of knowledge has relied on integrity in both research and academic practice, and that progress is undermined by academic misconduct. This will help them understand the values that underpin their discipline, and make clear what constitutes academic misconduct and why it has consequences.
- Identify networks of support: develop internal networks of academic integrity support that involve students, given that the majority of students are strongly opposed to cheating, including the use of AI and essay mills - both for ethical reasons and because they see such malpractice as a threat to the value of their own qualifications.
- Signpost sources of support: provide clear signposting of the individuals and services within the institution who are available to help and support students in understanding good academic practice. This information should also be available to personal tutors.
You can signpost students to details about the University's policies on Academic Integrity and Plagiarism, as well as links to training resources and sources of support, on the University Academic Integrity Training and Test webpage.
In addition to the above, training and support for staff and students is detailed on the CLT Academic Integrity webpage.
There are countless more examples appearing on the internet every day. We will continue to keep abreast of these, but perhaps the most important and powerful response to this topic is you and your students: the lessons you learn in your teaching and assessment practices, and the different ways students come to use these tools. To that end, please do contact us if you need further support, or have a great example (of a success or a failure) that you are willing to share.
Key links
QAA briefing paper for HE providers: The rise of artificial intelligence software and potential risks for academic integrity
BBC video: What is ChatGPT?
JISC article: Does ChatGPT mean the end of the essay?
Kate Lindsay blogpost: ChatGPT and the future of university assessments
Peter Bryant blogpost: ChatGPT: IDGAF (Or: How I Learned to Stop Worrying and Ignore the Bot)
TALMO blogpost: ChatGPT and assessments in the Mathematical Sciences
Resources & events
CLT blog posts:
ChatGPT: an introduction
Generative AI and Academic Integrity - Part 1
Generative AI and Academic Integrity - Part 2
Hub resources:
Generative AI and Academic Integrity
Generative AI - case studies from Bath
Get in touch
In time, we will share examples of how teaching teams at Bath are using ChatGPT and AI, their findings, and their recommendations to colleagues. If you have an example that you would like support to implement, fund, evaluate and/or share, please do let us know and a member of the team will contact you!
Updated on: 16/03/2023