At the Shenzhen College of International Education, a group of teenage students planning to attend universities abroad explain how generative artificial intelligence helps them with their work. Anita says the tool helps her make sense of the ideas she is studying, while Lucienne uses it for research.
Their enthusiasm lays bare a growing challenge for education at all levels: whether the use of AI by students and researchers should be restricted, grudgingly accepted or even actively encouraged.
What is clear is that use of AI is growing quickly. A recent survey by the College Board — a non-profit which organises exams in the US — suggested the share of high school students using generative AI for schoolwork increased from 79 per cent in January this year to 84 per cent in May.
In a study conducted by Oxford University Press in August, four-fifths of students aged 13 to 18 across the UK said they used AI tools in their schoolwork. Research for the UK’s Higher Education Policy Institute showed the proportion of students using generative AI for assessments jumped from 53 per cent last year to 88 per cent in 2025.
But for all the evidence of uptake, it is less clear precisely how generative AI is being used and what that means for learning. Many teachers and students raise concerns about cheating, prompting some institutions to try to ban the technology or to make students promise not to use it.
John King, the chancellor of the State University of New York, for instance, says in some instances his colleagues are reverting to handwritten exams or oral tests.
A study by AI developer Anthropic earlier this year examined the prompts students entered into its tool Claude. Many requests were “transactional”, asking directly for answers to assignments, rather than “conversational” prompts intended to help students explore and understand concepts.
On top of that, a large share of the queries required creativity or critical thinking from Claude, raising concerns that students were outsourcing deeper intellectual tasks to the machine instead of reflecting on the material themselves.
Recognising such concerns, OpenAI, creator of ChatGPT, has this year unveiled a “study mode” feature that helps students work through questions step by step, designed to encourage their critical thinking rather than simply produce answers.
Some see value in generative AI systems guiding students towards better understanding, although Aly Murray, founder of UPchieve, a company that matches volunteer tutors with lower-income students across the US, says: “AI tutoring is not a replacement for a human tutor and may never be . . . Simply knowing that the tutor is a human encourages students to learn more,” she adds. “It’s more motivating.”
Others are even more critical. A paper published this year by authors including Dirk Lindebaum at the University of Bath School of Management warns that the use of large language models turns researchers in education from creators into mere consumers of knowledge, verifying information that is constrained by the technology and defined by big tech companies.
Yet many schools and universities are starting to recognise that exposure to AI will be inevitable once students enter the workplace, and that they should therefore be given access, training and guidelines to prepare them. Failure to do so risks putting some of them at a disadvantage in the job market. Approaches used by institutions include asking students to disclose their prompts explicitly and to critique the machine-generated answers. Many now offer courses focused on AI.
Vallabh Sambamurthy, dean of the Wisconsin School of Business, says he has hired a series of younger specialist AI professors to help with teaching. “At least 15 per cent of our undergraduate business core courses should focus on AI topics,” he says.
Arizona State University, meanwhile, has struck a deal to roll out FYI.AI, a tool backed by pop star will.i.am, to its students. University president Michael Crow stresses the importance of personalisation, so that individual students can control the technology and tailor it to their own needs.
Joseph Aoun, the president of Northeastern University and a pioneer in thinking on AI whose book Robot-Proof was first published in 2017, says: “Higher education was first in denial, and is now integrating it as a technology. The third phase will be to integrate it into our curricula.”
He believes higher education will survive by using AI as a tool while emphasising human skills that computers cannot replace, such as entrepreneurship, creativity and teamwork. Hence his institution requires experiential, project-based learning and work placements, offering qualities that appeal to recruiters and that a computer-written assignment cannot provide.
At Shenzhen College, Arnold, another student, is already putting AI use into practice. He says the technology has become essential for his work: “I use it to understand concepts and translate ideas into English for my A-level exams and university applications.”
Costis Maglaras, dean of Columbia Business School, is relaxed about how this generation of students will cope. “I don’t have concerns over the next decade,” he says. “The real question is how things will be in 20 years for the undergraduates who have not yet been born. That’s a wide open question.”
The FT is seeking ideas on how best to measure the quality and extent of AI integration in business schools. Answer our questionnaire or email us at bschool@ft.com with your insights.