
How to avoid plagiarism: is viva voce the answer?

Here’s the situation: you’ve got multiple assignments due next week, and you’re under the pump after putting them off for too long. In the pre-ChatGPT era, you’d be out of luck, flipping through textbooks at 3am and scrambling to pull something together at the last minute. But now, generative AI is here to save the day.

You type your first essay prompt into ChatGPT, hit enter and wait as the screen loads. Just moments later, an entire essay appears before your eyes. You’re not copying off anyone, so no need to worry about how to avoid plagiarism, right? Done. Too easy. 

But when you submit your work, it comes back with red lettering across the top: ‘see me’. When you go to see your lecturer, they let you know that your work has been flagged for a potential academic integrity breach, due to plagiarism. Oh no. What happened?  

Generative AI, like ChatGPT and Copilot, works by drawing from the vast amounts of text it’s been trained on. The downside is that it’s not always great at fact-checking or giving proper attribution to the sources it pulls from.  

This means AI might recycle language and ideas from other authors without crediting them – an act that falls under plagiarism or, at the very least, improper use of AI.  
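For the technically curious, here’s a minimal sketch of that process. It uses the open-source GPT-2 model via Hugging Face’s transformers library purely for illustration (ChatGPT and Copilot run on far larger proprietary models): the model continues a prompt by predicting likely next words from patterns in its training data, keeping no record of where those patterns came from.

```python
# A minimal sketch of text generation with an open-source model.
# Assumes the Hugging Face `transformers` library is installed;
# GPT-2 is an illustrative stand-in for larger proprietary models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The main causes of the French Revolution were"
output = generator(prompt, max_new_tokens=40)

# The continuation is assembled from statistical patterns in the
# training text. The model keeps no record of which sources those
# patterns came from, so it cannot attribute them.
print(output[0]["generated_text"])
```

This is exactly why attribution is hard: the generated text reflects countless unnamed sources at once, with no citation trail for a student to follow.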

So, how can students and schools navigate this challenge, especially as AI literacy becomes increasingly important?  

One solution being explored is the shift toward viva voce assessments, or oral exams where students discuss course material in a conversational-style format, rather than relying solely on written essays. However, this shift isn’t without its own challenges.  

To explore the rise of AI and its implications on academic integrity, and to assess the pros and cons of viva voce assessments, we’ve gathered insights from Professor Phillip Dawson from the Centre for Research in Assessment and Digital Learning (CRADLE) and Dr Maria Rae, senior lecturer at Deakin’s School of Humanities and Social Sciences.  

The issue of assessments in the age of AI 

We often hear about the many benefits of AI, such as its ability to analyse large data sets, conduct risk assessments and automate repetitive tasks. But as AI becomes more commonplace, new challenges are emerging – especially in the context of university assessments.  

The rise of generative AI raises serious questions about the integrity and validity of traditional assessments. Why is this a concern? At its core, an assessment is meant to determine whether a student has met the learning outcomes of their course.

Universities, in turn, rely on the validity of these assessments to maintain their reputation – ensuring that graduates possess the skills and knowledge the institution claims they have.  

‘If you aren’t watching someone do something, you can’t be sure it’s really them and their own work,’ says Professor Dawson. ‘While AI might not produce the highest quality outputs all the time, it can reliably give you passable work for most traditional assessment.’ 

In fact, in a lecture given a year ago, Professor Dawson referenced a study about how ChatGPT performed on the United States Medical Licensing Examination. The result? ChatGPT performed above the 60% threshold, achieving the equivalent of a passing score for a third-year medical student. Yikes.

As Professor Dawson says, we can wax poetic about whether AI is ‘good’ or ‘bad’, and whether its use in the academic sphere is ‘right’ or ‘wrong’, but at the end of the day, the real problem is the validity of assessments. 

‘A lot of the moralising and handwringing is disappearing from academic integrity talk, and we’re left with a core question: how can we be sure someone can do what we say they can do?’ says Professor Dawson. ‘This should have always been the question.’

Defining plagiarism and self-plagiarism  

Let’s clear the air on plagiarism. It’s a term we throw around a lot, but what does it actually mean in the age of AI? 

According to Professor Dawson, the issue at hand is perhaps better described as ‘inappropriate AI use’. ‘It’s contested as to whether claiming AI work as your own counts as plagiarism,’ he notes.

This is a tricky area, as the very nature of AI complicates our traditional understanding of plagiarism.  

Unlike traditional forms of plagiarism, where a student may copy off the person sitting next to them or purchase essays online to submit as their own, AI-generated content is an outcome of a tool synthesising information from a range of sources.  

In that sense, using generative AI typically isn’t stealing from a specific individual, but it can misrepresent the work of many people as the student’s own work.  

There’s also an important distinction between plagiarism and self-plagiarism.

Plagiarism, which is derived from the Latin word for ‘kidnapper’, is defined in the Oxford Dictionary as ‘the action or practice of taking someone else’s work, idea, etc., and passing it off as one’s own.’  

Self-plagiarism, on the other hand, is defined by the American Psychological Association (APA) as presenting your own previously published work as original. While it may seem odd to think you can get in trouble for reusing your own ideas, the issue here lies in presenting old work as new.  

By doing so, you mislead your audience or examiner into thinking that you have put in fresh effort or new thought when, in fact, you are simply repurposing prior content. 

While categorising all AI use in assessments as plagiarism may not be entirely accurate, the point remains: the integrity of a student’s work is compromised when AI is used inappropriately. This makes it all the more important for universities to set clear guidelines and consider alternative assessment formats, and for students to familiarise themselves with the guidelines and expectations for their tasks.   

How do you prevent AI plagiarism? Viva voce – according to one university 

According to Professor Dawson, ‘it has never been possible to create a cheating-proof assessment task, and it’s not likely ever going to be possible to produce an AI-proof one either.’ 

This challenge can be partly explained by the ‘law of least work’, a behavioural principle which suggests that humans have a tendency to seek shortcuts.

In other words, while there might be a theoretical possibility of creating an AI-proof exam, we can expect that some students will try – and sometimes succeed – in finding a loophole.  

So, what’s the solution for now? Professor Dawson believes an oral exam could be our best option. ‘Viva voce creates a space where we can have more confidence in assessing what students can really do,’ he explains.  

The University of South Australia has been experimenting with this approach across a range of its science degrees since 2022, and in that time the institution hasn’t recorded a single academic integrity breach in its final examinations.

The university isn’t alone in exploring this method; a viva voce examination is already a standard final assessment for many medical students, helping assess their ability to think critically and perform under pressure.  

Similarly, for those undertaking a PhD, the viva voce is often the final hurdle, where candidates defend their thesis in front of at least two examiners. 

What is a viva voce? 

At the start, we gave you a topline definition of viva voce, but let’s dive deeper into its history and explore what a viva voce examination looks like in practice.  

Viva voce – Latin for ‘with the living voice’ – is a type of oral examination where students are asked to verbally defend their work, usually before a panel of examiners.  

As mentioned earlier, it’s a common final assessment for doctoral candidates and medical students, largely because this method tests not only the candidate’s knowledge but also their ability to respond to challenges or questions about their work.

What do viva voce examinations look like? 

A viva voce examination can take several forms, but the structure typically boils down to this: the student is asked to discuss their work or respond to questions posed by a panel of examiners.  

Viva voce examinations are usually conducted in person but can be adapted to an online or hybrid format. The length of the exam can depend on the complexity of the subject, but it typically lasts between 20 minutes and an hour.  

The panel typically consists of two or more examiners. For doctoral candidates, this includes both external experts in the field and internal examiners from the university. 

The examination usually begins with the student presenting their work or sharing their knowledge; examiners then ask questions to clarify specific points, challenge conclusions or probe methods.

The goal is to assess how well the student can defend their reasoning and consider alternative perspectives. 

Viva voce examination questions: an example 

The line of questioning in a viva voce examination tends to draw from a few specific purposes: clarification, critical thinking, understanding the broader context and exploring the hypothetical. 

If you’re preparing for an oral exam, here are some examples of the types of questions a student might face during a viva voce:

  • What are the limitations of your research? 
  • If you were given a different data set, how might your conclusions change? 
  • How does your research fit into the wider context of the field?  
  • Do any emerging trends or new technologies influence this area of study? 
  • Is this work original or have other people done similar work before?  
  • Are there any alternative explanations for what you found in your research? 

Gender bias and viva voce 

While we’ve explored the numerous benefits of viva voce as an assessment method in the age of AI, it’s important to also acknowledge its potential drawbacks.  

‘There are concerns that viva voce might not be inclusive across gender, language, culture, socioeconomic status, neurodiversity and other differences in cohorts,’ says Professor Dawson. ‘Assessment that is not inclusive isn’t just a moral problem, it’s also less good at judging what people can do.’ 

For institutions and examiners, it’s essential to be mindful of their subconscious biases and how factors such as gender can affect the real and perceived outcomes of a student’s performance during a viva voce.

While many intersectional factors are at play, this discussion will focus specifically on gender. Dr Rae is currently conducting a fellowship exploring the challenges women face in interactive oral assessments, with the goal of designing a more equitable assessment method that addresses gender biases.  

Gendered challenges in oral exams 

According to Dr Rae, studies show that oral presentations have been viewed as more ‘male-oriented’, with men tending to favour this type of assessment while women typically prefer written exams, multiple choice tests and practical work.  

Women have also historically reported higher levels of public speaking fear, and during peer assessments, men tend to mark other men higher than women, while women tend to underrate themselves.

Noting that this research does not apply specifically to oral assessments, Dr Rae says: ‘Generally, young adult female voices who have vocal fry may be perceived as less competent, less educated, less trustworthy, less attractive and less employable.

‘One study shows that women who have creaky voices are rated negatively on personality traits and women who have smiling voices are considered more charismatic. If women have a breathy voice, it can influence perceptions of a speaker’s perceived sexuality and sensuality.’ 

Designing inclusive viva voce assessments 

Given these biases, how can institutions and examiners design more inclusive viva voce assessments?

‘I think making examiners aware of the discrimination against women’s voices would help inform them to reflect on their unconscious bias,’ says Dr Rae. ‘And ensuring that examination criteria is inclusive too, especially around the aspect of intrapersonal skills such as tone of voice and body language.’ 

While it’s important for oral communication to be clear, concise and delivered at an understandable pace, Dr Rae emphasises that the most important factor is the student’s knowledge of the content.  

To lay the groundwork for more inclusive viva voce examinations, Dr Rae also stresses the need for examiner training. ‘Academics can be trained to run oral examinations in a friendly, approachable and conversational manner that encourages students to perform at their best. Students should also be given training on how to speak confidently so they can best communicate their knowledge.’ 

Viva voce or something else? How universities will avoid AI plagiarism in Australia 

As we’ve discussed, the rise of generative AI tools like ChatGPT presents a significant challenge for maintaining academic integrity in traditional assessments. The convenience of AI-generated content can easily lead to unintended plagiarism or the misrepresentation of a student’s knowledge and effort.  

So, how can universities ensure that assessments genuinely reflect a student’s capability?  

While viva voce presents an interesting alternative assessment method where student responses can’t easily be replicated by AI, it’s not without its challenges. Ultimately, there is no single solution to preventing the inappropriate use of AI in an academic setting.  

As a result, universities should adopt a multi-faceted approach that includes clear guidelines on AI use, the implementation of alternative assessment formats like viva voce, and adequate training for examiners to ensure biases don’t influence a student’s mark.

 

Deakin University’s academic integrity policy defines plagiarism as the use of someone else’s work, including words, ideas, code, media, research findings or other material, as one’s own without appropriate attribution or referencing. It also addresses contract cheating, where students submit work produced by others, including AI tools, as their own. For more details, you can view the full policy here.

this. featured experts

Phillip Dawson
Professor, Co-Director, Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University

Dr Maria Rae
Senior Lecturer, Faculty of Arts and Education, Deakin University
