False Positives and Real Anxiety: What Happens When Universities Can’t Define AI Cheating
Unclear AI policies in education can create confusion, anxiety, and unfairness for both students and teachers navigating the use of ChatGPT and other AI tools. This study, based on interviews with 58 students and 12 teachers, uncovered widespread uncertainty about what counts as acceptable AI use: some students were falsely accused of cheating and penalised even though they had written their work entirely themselves. The research also showed how vague institutional guidelines shifted the burden of interpretation onto individual teachers, producing inconsistencies such as Grammarly being banned in one class while AI-generated essays went undetected in another.
Beyond enforcement challenges, the study revealed deeper concerns about educational equity and learning quality. Students who could afford premium AI tools such as ChatGPT Plus gained an advantage over peers using free versions, while heavy reliance on AI for assignments led some students to realise they were losing critical thinking skills and becoming overly dependent on the technology. Notably, several students voluntarily stepped back from AI use after experiencing its limitations firsthand, for instance when advanced math or coding problems exceeded what the tools could handle. The findings suggest that universities need clearer policies and better communication of them, but above all they need to help students and teachers develop healthier relationships with AI that enhance rather than undermine learning. As one student reflected, the real challenge is not preventing AI use but learning when and how to use it without losing the skills that matter most.
Read more here.
Jack Tsao (2025): Trajectories of AI policy in higher education: Interpretations, discourses, and enactments of students and teachers. Computers and Education: Artificial Intelligence. https://doi.org/10.1016/j.caeai.2025.100496
Keywords: Generative artificial intelligence (GenAI); AI ethics; Academic integrity; Large language models (LLM); Higher education policy and governance; Hong Kong