As the hype around AI fades and companies continue to fail in their attempts to replace humans with chatbots, we’ve surprisingly overlooked the rising wave of “agents” (and the many problems that come along with it).
Not long ago, this feature was prominently introduced as part of the GPT-5 rollout, and for a while it generated quite a buzz in AI circles.
In simple terms, an AI agent is supposedly capable of performing complex tasks on your behalf—essentially taking control of your computer to do so. The main concern here isn’t necessarily the technology itself, but the security implications surrounding it, which significantly weaken the user’s ability to protect their data.
It starts with the protocols behind these new applications, which are hard to fully understand in the early stages of implementation, and the danger grows because an agent may require access to your passwords and all sorts of sensitive data.
This places us in a situation where bad actors are essentially priming societies for different versions of a digital police state, one in which the constant monitoring of data serves the shady interests of unscrupulous corporations or governments.
That last part might sound like the premise of a dystopian video game, and yet it's exactly the new reality we're walking into.
All of this ignores a practical fact: the more steps you hand over to an agent, the more checkpoints you need, each demanding stricter quality assurance to keep the entire process chain running correctly.
The allure is obvious: any company would dream of robotic workers who don’t need vacation, never get sick, and are perfectly content to be paid in pennies instead of dollars.
But the real challenge lies in identifying what quality control is truly necessary to guide these processes, and in recognizing the very real need for human talent to support and verify the system.
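To make the checkpoint argument concrete, here is a minimal sketch in Python. Every name in it (Step, verify, human_approves, run_chain) is hypothetical, not any vendor's API: each agent step passes an automated QA gate, and any step that touches sensitive data additionally waits for a human sign-off before the chain continues.

```python
# Minimal sketch of a gated agent chain (hypothetical names, no real agent API).
# Each step passes an automated QA gate; sensitive steps also need human approval.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[], str]   # the agent's action, stubbed here as a callable
    sensitive: bool = False  # True if it touches credentials, personal data, etc.

def verify(output: str) -> bool:
    """Automated QA gate. A real gate would be far stricter
    (schema checks, source checks, policy checks); here we only
    reject empty output to illustrate the shape of the check."""
    return bool(output.strip())

def human_approves(step: Step, output: str) -> bool:
    """Human checkpoint: a reviewer signs off before the chain continues."""
    answer = input(f"Approve step '{step.name}' (output: {output!r})? [y/N] ")
    return answer.strip().lower() == "y"

def run_chain(steps: list[Step]) -> None:
    """Run steps in order; any failed gate stops the whole chain."""
    for step in steps:
        output = step.run()
        if not verify(output):
            raise RuntimeError(f"QA gate failed at step '{step.name}'")
        if step.sensitive and not human_approves(step, output):
            raise RuntimeError(f"Human reviewer rejected step '{step.name}'")
        print(f"Step '{step.name}' passed.")

if __name__ == "__main__":
    run_chain([
        Step("draft summary", lambda: "Summary of the quarterly report..."),
        Step("log into client portal", lambda: "Session opened", sensitive=True),
    ])
```

The point of the sketch is structural: each step you delegate adds a gate, and the gates on sensitive steps cannot be automated away without reintroducing exactly the risks described above.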
At the time of writing, the internationally renowned consulting firm Deloitte is caught up in a scandal involving a substantial payment that must now be returned to the Australian government—after hallucinated content found its way into the final deliverable.
Arguably, the sum involved isn't particularly sensitive for a firm of this size and stature. That said, the reputational damage, and the rupture in its relationship with such a major client, goes far beyond what can be considered acceptable.
Any sensible stakeholder would now be far more concerned with the long-term economic consequences of policies that saved a few pennies on consulting—while sinking decades of accumulated brand value in the process.
As companies move further down the rabbit hole, it might be time to reassess the true value behind their production. In the case of consultancies, or any firm whose product is explicit expertise and knowledge, the commodification of that asset should, frankly, have reached its limit by now.
AI, agentic or not, should once again be recognized for what it is: a tool—not a magic wand.
The Pocket AI Guide is out!
📙 Amazon US: https://a.co/d/gCHHDax
📗 Amazon Germany: https://amzn.eu/d/3cmlIqa
(Also available in other European Amazon stores)
Check the free resources on this website!

Share what you think!