Last Sunday, I read an article in which a professor expressed concern about OpenAI's o3 model. According to his tests, the model could produce a thesis in just 30–40 minutes and even compile the sources needed to show it had done its research.
This is part of a new feature called Deep Research, designed to gather information and help researchers produce results faster. According to the same article, around 30% of researchers' work has already been streamlined by AI.
Of course, this raises an important question: How are we going to maintain the academic integrity of our communities and educational institutions if anyone can produce vast amounts of information in such a short time?
Would this not lead to a situation where we are flooded with fabricated, quasi-realistic research submitted by candidates who know little to nothing?
To be honest, I don’t think so. The reality is that we’ve always had cheaters—people trying to take advantage of the system in immoral ways for personal gain. History is full of scandals in the upper echelons of government, where even ministers have been exposed for using dishonest methods to climb to the top.
Artificial intelligence is simply making this phenomenon more visible and more accessible to bad actors. This is not an explosion of cheating caused by AI; rather, the same cheaters are now trying their luck with these systems more openly than before.
However, it’s quite telling that academia, with all its creative capacity, has not yet realized that sometimes the oldest solutions are the most effective ones. For instance, it’s far simpler to give potential candidates an oral exam, using their own work as a reference to verify that they’ve read and understood the material. To make things even easier—and perhaps add insult to injury—we could use AI to summarize the material we receive from them.
While no one can reasonably expect a person to remember every single detail of months of research, we can at least assume that an expert will have a good grasp of the material they claim to have produced.
The problem today is that we often overcomplicate solutions to relatively simple issues. The demonization of AI models and the fears running rampant in our societies are making it harder to focus on practical solutions.
This fearmongering is shortsighted, and it only serves those who oppose progress by handing them an extra “I told you so” over a problem that is actually quite easy to fix.
It also highlights a more pervasive issue: the actual expertise—or lack thereof—that some academics have over their material. While some take a gatekeeper approach, making it unnecessarily difficult for anyone to pass even the simplest subjects, others have decided they don’t need to learn or update their knowledge to properly guide students.
Both positions are indefensible. They are lazy at best and immoral at worst, as they deny society the service of the teachers and thinkers they are supposed to be cultivating.
Ironically, this is one case where the solution lies in the past. Yet our academic institutions are far more likely to overthink the problem and look too far into the future. Perhaps we should reintroduce the graphite pencil as an innovation—or make conversation fashionable again—so they might consider these viable solutions once more. More than one, though, will try automated detection programs that even OpenAI’s own FAQ for educators dismisses out of hand.

Share what you think!