Some weeks ago, a colleague initiated a fascinating discussion on the nature of AI. The central question was: does using AI to solve a task constitute cheating?
As teachers and instructors, we are living in interesting times. It is now practically impossible to guarantee that our students will refrain from using LLMs, and we certainly cannot ensure they will use them responsibly.
Assuming that students use these tools correctly, we should not fear them. At least in my view, working with an LLM responsibly is not much different from employing a highly advanced grammar checker.
The proper use of AI tools presupposes that users engage with the content, refining it and blending it with their own experience. AI is designed to summarize bits and pieces of information in a more digestible form, but that does not make anyone an expert on a topic. It simply provides echoes of condensed ideas, millions of them, in a way that, at least in theory, offers more context and more possibilities to those who use the technology.
A true expert sees the value in these summaries, identifies the points of interest, and dives deeper into the content to generate new ideas based on the absorbed information. A poor user, on the other hand, may passively accept AI-generated hallucinations at face value, without critical thought. In that exchange the losing side is the user, who not only risks learning nothing but may even come out of the interaction knowing less than before.
From the perspective of knowledge generation, the value AI provides, when used with a bit of sense, is in line with most academic work. Most work done in courses and subjects, particularly by non-experts, falls under knowledge synthesis: summarizing reading material in a way that allows us to reach new conclusions based on the provided information.
This act alone does not make anyone creative. It merely supplies the resources to become creative: the absorption, reinterpretation, and application of information within a specific context. Rarely do we cross into the realm of information or data production, which represents the next level of knowledge generation.
On that note, we know that dealing with firsthand results from experiments and surveys is typically at the core of new knowledge production. This material eventually finds its way into books and other resources. Such work is carried out by individuals who not only synthesize knowledge but also identify new links and confirm them against reality, an ability AI cannot replicate.
The reason is simple: AI has access to static, synthesized knowledge. While a model might identify and highlight connections that we struggle to perceive due to the sheer volume of information, it stops there. AI does not produce new interpretations, draw novel conclusions, or test those conclusions in any meaningful way.
Therefore, using AI—at least when the intent is not to falsify documentation—is not only a valid methodology but also highly desirable. It increases the amount of information accessible to users at any given time, enabling them to create new knowledge.
Attempting to ban AI from schools is counterproductive. True, we might need to rethink how we evaluate our students’ work, but the “old ways” of assessment remain straightforward to apply. If in doubt:
Ask for a report, read it, and then question the presenter about the ideas within the text. No AI can simulate that interaction or comprehension.

Share what you think!