As the hype around GPTs begins to fade—and we continue witnessing failed attempts by companies to replace humans with chatbots—we now find ourselves at the intersection of a new hype wave and the early signs of a possible new AI winter.
Though both phenomena share a common root and mutually reinforce each other, they belong to two very different categories and raise different problems. Let's start by unpacking the new hype currently gaining traction and save the potential winter for a later entry.
If you use ChatGPT as an app on your computer, you may have already been introduced to “agents.” This new type of AI-powered assistant—supposedly capable of performing complex tasks on your behalf—promises all sorts of simplifications in your life.
Let's not focus on whether they can live up to their promises, but on the fact that the moment you use one, you are effectively handing over the keys to the house (depending on which service you're using).
To enable its operation, you are essentially granting access to everything you have on screen, potentially opening yourself up to all sorts of security liabilities, since your information may be shared with a third party. The interaction is no longer just between you and a service provider; it is now filtered through an unrelated service.
The convenience on offer is negligible at best, and the flagship examples are risible: buying a plane ticket on your behalf, for instance. The question shouldn't be whether an agent can do it, but how often you actually perform that task.
After all, you've just conveniently exposed your credit card number and handed control of it to yet another program.
Not to mention your purchase patterns. And if you have centralized your password management, you have also granted access to other services that already hold plenty of information on you, such as Microsoft, Google, or Apple wallets, many of them tied to sensitive information and personal ID documents.
At some point, you might want to reconsider those 15 minutes you saved this year by booking the family vacation. And that’s just the personal risk you’re exposed to in transit.
Unsurprisingly, this follows a familiar pattern, much like what we now regret about the sudden rise of social media.
No one was particularly concerned about Facebook collecting massive amounts of data—until, years later, we found ourselves being algorithmically fed addictive content one click at a time.
The problem isn’t marketing. It’s the predatory systems that have emerged—systems built by technologies capable of uncovering and exploiting both our consumption patterns and thought patterns.
At the very bottom of it all, though, perhaps the problem isn’t really the service itself, but rather the convenient dismissal of our right to privacy. Not everything in our lives should be measured by the convenience of our next potential purchase—especially when we’re engaging with unpatched tech that could expose users to complete and utter financial ruin.
Because this is the real risk we’re facing: identity theft with consequences so severe they could lead to generational damage. If you want to avoid that outcome, then maybe—just maybe—don’t use the service at all.
My book: The Pocket AI Guide: A Practical Introduction for Professionals launches in August 2025.
Share what you think!