On Friday, September 5, 2025, several media outlets reported that Anthropic, maker of the popular Claude models, had agreed to settle a class action lawsuit for $1.5 billion (presumably USD, since the matter was settled in a U.S. court). (Reuters)
The otherwise scandalous news vanished as quickly as it surfaced, at least in Norway, where an ongoing election cycle buried the story. Understandable, perhaps; not everyone is equally invested in the same topics.
However, what just transpired isn't something that can be so easily dismissed, especially considering that Anthropic's own data privacy statement claims: "Anthropic believes in transparent data practices." A claim that rings hollow in light of what just happened.
And that’s just from the companies we actually hear about.
A running joke is that the most repeated lie on the internet is: "I have read and understood the terms of service." In truth, almost no one bothers to check what's in there. A good example of just how absurd this is came from GameStation (UK), which added the following clause as an April Fools' joke in 2010:
“By placing an order via this Web site on the first day of the fourth month of the year 2010 Anno Domini, you agree to grant Us a non-transferable option to claim, for now and forever more, your immortal soul. … Should We wish to exercise this option, you agree to surrender your immortal soul … within 5 (five) working days of receiving written notification … We reserve the right to serve such notice in 6 (six) foot high letters of fire … If you a) do not believe you have an immortal soul, b) have already given it to another party, or c) do not wish to grant Us such a license, please click the link below to nullify this sub-clause and proceed with your transaction.” (Lexology)
On that day alone, 7,500 users unknowingly agreed to those terms. Which raises the question: how much more can services with far larger user bases hide in their contracts, simply banking on the fact that we're all too busy to read them?
The more we open ourselves to surveillance capitalism, and to services that trade minimal convenience for total oversight, the more we risk our data becoming the fuel for tomorrow's abuses. Because if even Anthropic, a company that markets itself as ethical, was caught in questionable practices, what should we expect from all the ones we never hear about?
As agents encroach on your AI services, what exactly did you agree to?
The Pocket AI Guide is out!
📙 Amazon US: https://a.co/d/gCHHDax
📗 Amazon Germany: https://amzn.eu/d/3cmlIqa
(Also available in other Amazon stores across Europe.)
Check out the free resources on this website!

Share what you think!