Are Oral Exams The Solution To AI Cheating? Education Leaders Weigh In

Are students using AI to sidestep learning altogether? Teachers across Europe think so.

Research commissioned by Epson found that 58% of teachers across Europe believe that when students use AI to complete schoolwork, it has a negative effect on learning. In the UK, 60% agree. Three quarters of teachers say they have noticed pupils using AI tools for homework, and 60% across Europe believe students are sidestepping learning altogether. In the UK that rises to 68%.

More than half of teachers across Europe, 54%, say AI use is leading to poorer exam results because students struggle to perform without it. In the UK, 68% say the same. Nearly three quarters, 73% across Europe and 74% in the UK, think over-reliance on AI is reducing students’ ability to spot fake information and think critically.

Dr Lili Yu from Macquarie University’s School of Psychological Sciences says, “Comprehension drops when we are using a screen to read information-dense text, like a textbook for study.” Fabio Girotto of Epson Europe puts it plainly: “AI is undoubtedly changing the world. To make the most of it later in life, students need to use it carefully at school. There needs to be a focus on getting the basics in place first through traditional teaching methods. In short, to create an AI-ready workforce that can think critically and use AI responsibly, we need a strong focus on pen and paper in the classroom.”


Is AI Cheating Getting Harder To Detect?


Gailene Nelson, Senior Director of Product Management at Turnitin, says the threat is evolving. “As exam season approaches, educators are raising concerns about cheating with the rising use of Generative AI. While much of the conversation has focused on tools like ChatGPT, a more complex threat is emerging: agentic AI. These autonomous systems can complete and submit academic tasks on a student’s behalf, with little to no evidence of their intervention.”

She adds, “This technology makes it extremely difficult for educators to verify that the work genuinely reflects a student’s knowledge. Without visibility into how students generate ideas, refine arguments or make decisions, the assessment becomes detached from the learning journey itself.”

Data from The Education Equality Index 2026 by The Invigilator shows the picture is complicated. More than a third of learners say they perform worse in traditional in person exams because they find them stressful or overwhelming. Nearly one in three say they would have performed better if allowed to complete exams from home. Fewer than four in ten say their institution encourages responsible AI use during assessments, and a quarter feel unprepared for the workplace because they have not been taught how to use AI properly.


Could Oral Exams Solve The Problem?


More and more universities are starting to think speaking to students directly might help. At Cornell University, biomedical engineering professor Chris Schaffer requires students to defend their work aloud. “You won’t be able to AI your way through an oral exam,” he says.

At the University of Pennsylvania, Emily Hammer pairs oral exams with written papers. “It comes across as if we’re trying to prevent cheating,” Hammer says. “That’s not why we’re doing this. We’re doing this because students are actually losing skills, losing cognitive capacity and creativity.”

Not everyone believes oral exams can scale. Nelson from Turnitin says, “While oral exams are being implemented as a solution, they are not a scalable solution. Oral assessments introduce issues related to subjectivity, accessibility and staff workload, and more importantly, they don’t consistently reveal how a student arrived at an answer.”

She proposes much stricter digital controls instead. “High-stakes exams can be delivered offline, cutting off the internet connectivity that agentic AI tools rely on. Institutions should also consider full device lockdown, going beyond secure browsers, and ensure that exam content is protected with end-to-end encryption.”


Do Experts Think Oral Exams Are The Solution To AI Cheating?


More education leaders have shared their views on oral exams. Would they help with the cheating problem? Here is what they think…


Our Experts:


  • Diana Yevsieieva, Strategic Communications & Media Outreach Consultant, Independent
  • Ryoji Morii, Insynergy Inc.
  • Tamsin Deasey-Weinstein, National Digital Transformation Policy Advisor, Cayman Islands
  • Dr Hezekiah Herrera, Ed.D, Independent Consultant
  • David Blobaum, Director of Outreach, National Test Prep Association
  • Dr. Saravanan Thangarajan, Visiting Scientist, Harvard T.H. Chan School of Public Health
  • Syed Asif Ali, Founder, Point Media and Pointika, Digital Identity Architect


Diana Yevsieieva, Strategic Communications & Media Outreach Consultant, Independent


“Oral exams are the Human Firewall of 2026.

“As a Media Strategist focused on Narrative Defense, I believe we are at a tipping point. For years, we’ve optimized for convenience, but AI has turned written assignments into a ‘rental’ market—students simply rent logic from an algorithm. Oral exams return us to Intellectual Sovereignty.

“In a live, spontaneous dialogue, there is no ‘undo’ button and no prompt-engineering buffer. It forces a return to the Socratic Gold Standard: the only way to prove you own a thought is to defend it in real-time. This isn’t a step backward; it’s a strategic pivot toward authentic human verification. Just as I advise brands to protect their core narrative from digital noise, education must now protect the core human ability to think without a machine’s assistance.”


Ryoji Morii, Insynergy Inc.


“Oral exams are all well and good, but on their own they do little to stop people from using AI to cheat. Exams today are written; if you switch to oral exams you are simply evaluating speech instead, not curing the underlying weakness in the current method of student evaluation that AI exploits.

“My work focuses on the governance of AI, specifically exploring what we might call Decision Design – the design of the structure of the judgement, rather than the individual doing the judging. My understanding is that most of the responses to the use of AI in education to date have involved incorporating humans back into the loop in slightly different ways (e.g. moving from a written paper to an oral examination), rather than challenging what constitutes judgement at all.

“Most current assessments are designed with the intent of measuring student learning; however, they look at learning from the output side: they examine only the end product. As AI improves significantly in the coming years, output alone will not be sufficient to determine whether a student has truly learned the important aspects of human reasoning.”

“So, what is missing from the Decision Boundary perspective?
• What portion of the work demonstrates, and results from, the student’s own critical thinking?
• Where is AI assistance acceptable?
• Who is answerable for the answer that is generated?

“While oral exams are a useful way to gain insight into the inner workings of a student’s head, they are an implicit rather than explicit probe: they still ultimately depend on the impression the student makes on the evaluator, and they do not scale well as the number of students increases.

“I’m not saying we need to exclude AI from our learning environments. But we should seriously consider where our students make judgments about the evidence and put knowledge and skills to use.

“To borrow an analogy: rather than simply building more and larger houses to ease a housing shortage, housing policy becomes more productive when the decision-making process behind it is made visible. The same is true of assessment:
• Have students explain how AI was used and describe where their own intervention took place.
• Evaluate the answer and solution against correct results, but also look at the reasoning behind the answer.
• Tell students which decisions they need to make themselves.

“The advent of artificial intelligence has done more than facilitate cheating, however; it has exposed the fact that our current assessment methods have never articulated the underlying rules for where human judgement is needed.

“We still need to establish some guidelines for what format of written and verbal summaries will be acceptable.”

Tamsin Deasey-Weinstein, National Digital Transformation Policy Advisor, Cayman Islands


“Oral exams are a good tool, but they are not the only tool. They force students to think in real time and show their level of knowledge. But for students with speaking anxiety, or anyone who processes better on paper than on the spot, you’re not testing their knowledge anymore. You’re testing whether they can perform under pressure. That’s a different skill entirely. We’re swapping one set of problems for another.

“The institutions seeing results are the ones redesigning assessment entirely. Process-based work, personalised topics AI can’t fake, draft histories, oral defences paired with written submissions. We need to use oral exams as one component, not as the whole answer.

“But my biggest driver is teaching people why they should use AI responsibly. AI should not give us the answers and make us dumber. AI should be used to challenge us, be a thought-partner and make us more intelligent. Rather than changing the assessment, let’s focus on changing the education.”


Dr Hezekiah Herrera, Ed.D, Independent Consultant


Are Oral Assessments Enough to Solve the Issue?
“Oral assessments are a viable approach; however, I would say that making them the panacea for all problems related to AI cheating is akin to using one pill to treat every illness. It may fit the bill perfectly for some patients; however, it could also be detrimental if prescribed universally.”

Oral Assessments
“There is great pedagogical benefit to requiring a learner to defend their thought process in the moment. When an individual is required to explain how they arrived at their conclusions to another person, there is little room to help them arrive at those conclusions via AI. There is a sense of true understanding being rewarded via oral assessment, which is difficult to achieve via submitting an essay in the dead of night.”

The Equity Problem
“Much of this conversation has overlooked the fact that oral assessments are not equitable. For students with anxiety disorders, language-processing challenges, autism spectrum disorder or expressive language delays, high-stakes oral assessments are not level playing fields; they are minefields. We cannot develop policies designed to prevent cheating that unfairly penalize our most vulnerable learners because of how their brains function. This is not academic integrity. This is inequity masquerading as rigor.”

What Is The Real Problem?
“The root of the AI cheating crisis is fundamentally an assessment design crisis. For decades we have asked students to create products (essays, reports, worksheets), when what we really want to assess is process: critical thinking, synthesis, and application. AI has exposed how narrow much of our conventional assessment has always been. Instead of panicking and pivoting to oral assessments, we should address the real problem: building assessments that cannot be gamed in the first place.”

What Works Best Against AI
“The three key features of assessments that are resistant to AI are: they are specific to the learner’s experience and environment; they require iterative review and revision; and they are multi-modal. An example of a learner who is doing work that is resistant to replacement by a chatbot is a learner who: develops an analysis of an issue relevant to their local community; participates in a formal conference with their educator regarding a draft document developed during this process; and makes revisions to their document based upon actual feedback provided by their educator. The process portfolio, Socratic seminar, Project-Based Learning with built-in checkpoints, and some carefully targeted oral assessment activities — these are the tools that comprise a truly forward-thinking classroom.”

The Institutional Challenge
“The harsh reality is that developing valid assessments requires educators to possess significant instructional capacity, planning time, and institutional support: three resources that are consistently in short supply within public education. Because oral assessments do not require any new curriculum development, they seem like a quick fix. However, quick fixes in education inevitably create new inequities faster than they resolve existing ones. We owe our students a more thoughtful response than that.”

Final Thought
“AI has not compromised academic integrity. Rather, it has revealed that we are assessing the wrong processes. The answer is not to fear the tool – it is to redesign the task.”


David Blobaum, Director of Outreach, National Test Prep Association


“Oral exams are the most foolproof method to prevent cheating. Even if a student had a camera and an earpiece (which does happen), the speed at which a student needs to verbally reply to questions makes cheating almost impossible. Yet evaluating students one-by-one with an oral exam is very time-consuming. Unless other oral exam options become available that allow students to record their responses in real time, oral exams are unlikely to ever replace other exam administration methods.

“At scale, the more feasible option is often to have students write essays or take exams on school-issued computers using lockdown browsers that prevent access to other applications. Whatever means are used, it’s imperative that teachers employ tools to safeguard academic integrity.”


Dr. Saravanan Thangarajan, Visiting Scientist, Harvard T.H. Chan School of Public Health


“Oral exams are not the fix for AI cheating. They can help in some settings, but they are hard to run fairly and even harder to scale. What AI has exposed is not just a cheating problem, but a design problem.

“Too many assessments still reward polished answers without showing whether students can reason, adapt, or defend their thinking under pressure. The answer is not to drag everything back to oral testing. It is to design assessment differently, with live reasoning, applied tasks, staged work, and short oral defense that makes thinking visible.”


Syed Asif Ali, Founder, Point Media and Pointika, Digital Identity Architect


“Oral exams can reduce AI-assisted cheating to some extent, but they’re not a complete solution.

“What we’re seeing isn’t just a cheating problem – it’s a structural shift. AI has changed how easily information can be generated, so traditional assessment methods that rely on output alone are becoming less reliable.

“Oral exams work because they test thinking in real time, but they also come with limitations. They’re harder to scale, can introduce bias, and don’t always reflect how people actually work in modern, AI-assisted environments.

“The deeper issue is that education systems are still measuring performance in ways that assume information is scarce, when in reality it’s now abundant and instantly accessible.

“A more effective approach is to redesign assessments around reasoning, decision-making, and context – not just final answers. This could include a mix of oral components, applied tasks, and process-based evaluation.

“In many ways, the question is no longer ‘how do we stop AI use?’ but ‘how do we evaluate people in a world where AI is part of how they think and work?’”