Less than a week ago (at the time of writing), the AI community was abuzz with excitement over the release of the DeepSeek model. Many in the field eagerly adopted it, praising its speed and claiming it outperformed its Western counterparts in several aspects. While I don’t typically make a habit of defending multi-billion-dollar companies like OpenAI or Anthropic, this situation immediately caught my attention.
Reports of lower operational costs are, at best, debatable—though they hold some relevance—and are not the most pressing issue. Neither are the occasional downtimes some users have experienced, nor the surprisingly unchallenged decision of a company to suddenly release its most valuable asset—its model weights—under an open license for anyone to download and use.
A bold move, no doubt, but ultimately a textbook example of aggressive market penetration by a company based in a country where short-term financial losses are often leveraged for long-term strategic advantage. But let’s set that aside for now and focus on a more concerning issue.
Recent reports from various users and media outlets indicate that DeepSeek exhibits troubling behavior when prompted with questions that contradict the established policies of the Chinese Communist Party (CCP). Queries about topics such as Tiananmen Square and Taiwan are, at best, dismissed outright and, at worst, met with hallucinated responses—setting a dangerous precedent for a tool with the potential for widespread adoption.
If we take a moment to reflect, even in the West, we have fringe groups who believe the Earth is flat or that the moon landing was a hoax. Now, imagine the implications when an authoritarian government—one that has never been shy about its control over information—gains influence over the public sphere through a tool that is widely used and trusted.
This presents a serious moral dilemma: we are rapidly approaching a scenario in which, by design, AI has the power to rewrite inconvenient facts to serve political agendas. A reality that anyone familiar with 1984 would recognize—where “every fact rewritten, every story retold for the benefit of the Party.”
Scary—and yet, it is unfolding right before our eyes. Now more than ever, critical journalism, freedom of thought, and education on AI—its pitfalls and benefits—are essential. The real danger of AI may not lie in the economic upheavals it threatens to bring but rather in the subtle control it can exert over historical narratives and political discourse.
While I am even less of a fan of regulatory frameworks than I am of defending corporations, it’s undeniable that, at the very least, our companies are subject to some degree of accountability, and our legislators operate within a framework intended—however imperfectly—to serve the public interest.
The protection of the individual is a principle we should at least try to preserve in society. Something to consider the next time the media picks up a “new model.”

Share what you think!