In light of last week’s events concerning DeepSeek, I had a conversation with a good friend over dinner. What struck me as dangerous was not only the precedent being set, but also how seamlessly tools perceived as infallible can subtly alter reality.

This is part of a larger argument worth analyzing more carefully. Suppose our AI models lie only 1 out of 10 times—that’s a small percentage, isn’t it? 

Until you realize that, in practice, we have no way of determining which instance is false. If we’re lucky, the error will be inconsequential, like misreporting the amount of salt in a dish or slightly misquoting a phrase.
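To see why “1 out of 10” is less reassuring than it sounds, here is a back-of-the-envelope sketch in Python. The 10% error rate and the assumption that errors are independent across questions are illustrative, not measurements of any real model:

```python
# Illustrative only: assume a model gives a false answer 1 time in 10,
# and that errors are independent across questions.
error_rate = 0.10

for queries in (1, 5, 20, 100):
    # Probability that at least one answer in the batch is false.
    p_any_false = 1 - (1 - error_rate) ** queries
    print(f"{queries:>3} questions -> {p_any_false:.0%} chance of at least one false answer")
```

After just twenty questions, the odds that at least one answer was false climb to roughly 88%; after a hundred, it is a near certainty.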

That accumulation of inaccuracies is the real problem, much as when Wikipedia began allowing political opinion to shape its content, opening the door for both left and right to redefine what counts as objectivity in any given case.

It is entirely possible to shift, gradually and subtly, what constitutes an acceptable answer until a new status quo is established. That is, without question, an immoral and indefensible use of technology, and a clear tactic for mass social engineering. It is precisely why AI models should, as much as possible, remain independent of major political interests.

The biases we often associate with AI models, shaped by our own sensibilities, are not necessarily a problem, provided we recognize that a model is trained on text in a particular language, and that a language’s literature is itself an echo of the host culture that produced it.

Naturally, this means true objectivity will never exist, but we can and must imbue our tools with the values we deem essential, such as individual freedoms and democracy, even at the risk of hurting sensibilities (for the alternative is arguably worse).

This friend of mine has a son who is barely two years old. It’s easy to imagine a future where, as he grows, he turns to an AI model for answers to his schoolwork. Despite his mother’s attempts to limit his access to technology, it won’t take long before he starts using it—it’s almost inevitable. And then what?

What happens when we introduce an agent, widely perceived as infallible, to those with less experience or limited access to information? This poses a challenge far greater than the explosion of “alternative facts” on social media—one that could make the misinformation crisis we’ve seen so far look like child’s play (pun intended).

No, I don’t believe we should try to limit AI, but we absolutely need more education around it. After all, kids today are already learning complex things: just look at how easily they absorb the lore of their favorite video games, master intricate keybindings in Fortnite, or even dive into modding their games. The issue isn’t technology itself; the real challenge is guiding this generation toward a healthier, more productive engagement with the digital world they already inhabit.

AI is here to stay, no matter what. Who knows what it will look like the first time the little one asks it a question? One thing is certain: it will be our responsibility to face that challenge head-on, not shy away from it.

Share what you think!
