This week I decided to give Gemini another chance.
As far as language models go, I’ve generally avoided it—I find it often loses context, even when you’ve clearly provided the necessary information upfront. That alone is irritating, but it’s compounded by the model’s tendency to contradict itself within the same interaction.
Still, with a few extra minutes on hand, I decided to steer the conversation into a moral discussion—specifically about the potential for unethical uses of data. Unsurprisingly, just a few prompts in, Gemini began telling me about the company’s “commitment to ethics,” and the various guardrails supposedly in place to prevent misuse of the system.
We quickly landed on what’s become almost a public secret: given enough data points, you can (at least in theory) build an incredibly detailed personal profile—detailed enough to target behaviors you disapprove of.
Yes, the authoritarian’s wet dream: the ability to identify opposition based on supermarket purchases and TV preferences.
And depending on how far down the dystopian rabbit hole you’re willing to go, you don’t even need certainty. Just match high-probability patterns from your profiling system, and that’s enough to act on.
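To make the point concrete, here is a minimal, purely hypothetical sketch of how little logic that kind of flagging actually needs. Every name, field, and threshold below is invented for illustration; it is not anyone’s real system.

```python
# Hypothetical illustration only: a toy "profiler" that flags people whose
# purchases and viewing habits merely *resemble* an arbitrary target profile.
# No real data, no real model. The point is how banal "acting on probability" is.

from dataclasses import dataclass

@dataclass
class Person:
    name: str
    groceries: set[str]   # items bought this month
    tv_genres: set[str]   # genres watched this month

# An arbitrary "opposition" profile, made up for the example.
TARGET_GROCERIES = {"oat milk", "imported coffee"}
TARGET_GENRES = {"political documentaries", "foreign news"}

def match_score(p: Person) -> float:
    """Fraction of the target profile this person overlaps with (0.0 to 1.0)."""
    overlap = len(p.groceries & TARGET_GROCERIES) + len(p.tv_genres & TARGET_GENRES)
    return overlap / (len(TARGET_GROCERIES) + len(TARGET_GENRES))

def flag_high_probability(people: list[Person], threshold: float = 0.75) -> list[str]:
    """No certainty required: anyone at or above the threshold gets flagged."""
    return [p.name for p in people if match_score(p) >= threshold]

if __name__ == "__main__":
    people = [
        Person("A", {"oat milk", "imported coffee", "bread"}, {"political documentaries"}),
        Person("B", {"bread", "eggs"}, {"sports"}),
    ]
    print(flag_high_probability(people))  # prints ['A']: guilt by shopping basket
```

That is the entire “act on probability” step: a threshold comparison, nothing more.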
And here’s where things get really interesting: while the enemies of Western democracies are openly integrating this capability into their governance structures, we’re fully aware of the same risks—but still pretending that faceless corporations are somehow moral entities, operating with stellar ethics and our best interests at heart.
After all, social media addiction—just like asbestos in the ‘60s or microplastics in the ‘80s—is simply the cost of keeping shareholders happy.
It’s become increasingly clear that the biases within AI—and how companies might exploit them—should be treated as matters of national security. Especially because, as Anthropic has disclosed, poisoning a model can take as few as 250 documents. That’s not much effort to skew statistics that are already unreliable.
And yet, massive data-harvesting initiatives like OpenAI’s Atlas are quietly collecting mind-boggling volumes of information. The troubling part? Models themselves wouldn’t even be able to tell you if they’d been compromised—safeguards are designed specifically to filter or block responses, not to diagnose their own behavior.
Which means, once again, these systems could be powering all sorts of dark interests, and we’d be none the wiser.
To borrow the words of Benjamin Franklin: “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”
The Pocket AI Guide is out!
📙 Amazon US: https://a.co/d/gCHHDax
📗 Amazon Germany: https://amzn.eu/d/3cmlIqa
(Also available in other Amazon stores across Europe)
Check out the free resources on this website!
Share what you think!