In early 2023, Oxford and Cambridge banned ChatGPT, as did many other British universities. The wording of the policies was clear: using the tool constituted academic misconduct. Looking back, it seems almost charming. A seventeen-year-old applicant could sit in an Oxford library drafting a practice essay or a personal statement, being told by the university she was applying to that the technology was prohibited, with a phone in her pocket that could produce a passable version of either in thirty seconds. The ban was not exactly meaningless, but it was about as effective as forbidding calculators and then wondering why students can’t do long division.
By the end of 2025, both institutions had changed course significantly. A guide to AI tools in education published by Oxford’s Centre for Teaching and Learning offers ten specific suggestions for how AI can enhance student learning. At its December 2025 schools conference, Cambridge International Education released “Guiding Students Towards Conscious Use of AI,” which reframes the technology as a personalized tutor, writing assistant, and creative partner rather than a danger to integrity. In November 2025, Oxford University Press published Teaching the AI-Native Generation, a report based on surveys of 2,000 UK students aged 13 to 18. Its findings describe the situation more candidly than most institutional policy documents, which makes them both unsettling and worthwhile.
| Item | Detail |
| --- | --- |
| Oxford & Cambridge initial response | Banned ChatGPT in early 2023; classed use as academic misconduct |
| Oxford University Press report | “Teaching the AI-Native Generation” (Nov 2025) — 2,000 UK students aged 13–18 surveyed |
| AI use in schoolwork | 80% of surveyed UK students use AI in their schoolwork |
| AI output verification confidence | Only 47% feel confident identifying accurate AI-generated information; 32% admit they can’t tell if content is true |
| Student concerns | 60% worry AI encourages copying over original thinking; 51% fear it reinforces bias; 48% believe peers secretly use AI |
| Student requests | 48% want teachers to explain AI trustworthiness; 51% want clearer rules on when/how to use AI tools |
| Student benefits reported | 9 in 10 say AI helped them develop a skill; 62% also report negative effects, including reduced creative thinking |
| Cambridge International (Dec 2025) | Published “Guiding Students Towards Conscious Use of AI” — frames AI as creative partner, writing assistant, tutor |
| Most radical institutional response | Prague University of Economics (Faculty of Business): cancelled bachelor’s theses entirely; replaced with practical projects |
| Reference | Oxford University Press — Teaching the AI-Native Generation |
Eighty percent of surveyed students reported using AI in their coursework, not sporadically or experimentally but as a regular component of their academic process. Yet only 47% are confident they can recognize accurate information produced by AI. Thirty-two percent acknowledge that they cannot determine whether AI content is true, and a further 21% say they are unsure. The generation most adept at using these tools is therefore the one least able to evaluate their output. After two years of prohibition notices that few students seem to have taken seriously, universities like Oxford and Cambridge are now attempting to address this gap, which is wide, acknowledged, and widening.
There is something distinctive about the student responses in the Oxford University Press report. They are not what one might expect from a generation sometimes described as passive, screen-addicted, and intellectually disengaged. Sixty percent of respondents worry that AI promotes copying over original thought, which is a serious question about the purpose of education, not a technical criticism of the technology. Fifty-one percent worry it might perpetuate prejudice or stereotypes. Forty-eight percent think their peers are using AI covertly to finish tasks, and forty-seven percent think their teachers cannot spot it. The students understand what is going on; they see the problem clearly. What they are asking for is institutional guidance they haven’t been getting: 51% want clearer rules, and 48% want help assessing the reliability of AI. A third of them think their teachers are not comfortable using AI themselves, which is perhaps the most accurate diagnosis of the situation anyone has offered, provided it is delivered without cruelty.
The structural challenge facing universities like Oxford and Cambridge is practical, not primarily philosophical. Oxford’s tutorial system, in which undergraduates meet one-on-one with tutors each week to discuss an essay they have written, was built on the premise that the essay was the student’s original work, evidence of their capacity for critical thought, argument construction, and subject-matter expertise. Even if an AI can generate a credible version of that essay in seconds, the tutorial still works if the dialogue reveals whether the student genuinely understands the arguments the essay makes. But that requires tutors to reframe the tutorial from a discussion of a piece of writing to a deeper examination of comprehension, a change that is pedagogically sound but operationally significant. Cambridge supervisions operate in a similar way. The written work was meant to be evidence of a process, not merely an output, and generative AI breaks the link between process and output in a way that institutions built on it are still learning to handle.
The most drastic institutional decision to date came from the Faculty of Business and Administration at Prague’s University of Economics, which cancelled its bachelor’s theses entirely, replacing them with practical projects considered less vulnerable to AI-generated content. It is a blunt solution, but an honest one: a 10,000-word dissertation can now be scaffolded from first principles to polished draft in an afternoon, and testing someone’s ability to produce a document is no longer a reliable indicator of whether they have learned anything. Whether Oxford and Cambridge will reach the same conclusion about some of their written assessments remains unclear, but experiments are underway: more oral exams, more timed work done in person, and more project-based assessments that demand concrete application rather than argument alone.
In one crucial respect, the students appear to be ahead of the institutions: they want guidance, not prohibition. Nine out of ten of the 2,000 respondents said AI had helped them develop a skill, whether exam preparation, idea generation, or problem solving. Yet 62% also reported negative effects, including a tendency toward over-reliance and reduced creative thinking. Holding acknowledged benefit and acknowledged harm simultaneously is a more sophisticated response than most institutional policy documents manage. Reading this data, one gets the impression that the students are waiting for the adults in the room to join the open conversation they are already having with one another. For Oxford, Cambridge, and every other university trying to make sense of this, the question is whether their governance structures can adapt quickly enough to a reality that a seventeen-year-old with a phone already understands quite well.
