Jon von Tetzchner, CEO of Vivaldi, delivered one of the most compelling talks at the event. In his presentation, he outlined the reasoning behind Vivaldi’s decision to give users the choice to keep their data private rather than feed it into models that may later be used against them in predatory ways.

His argument pointed toward the need for stronger regulation—framed not as a barrier to innovation, but as a way to reduce the incentive for unethical actors to exploit user data. He expressed this memorably by suggesting we might need “restraining orders for companies,” noting that many have already proven themselves incapable of maintaining healthy boundaries.

It’s undeniable that, in many cases, the hefty fines issued by administrative bodies such as the European Commission are treated by corporations merely as operational costs—absorbed into their bottom lines rather than serving as meaningful deterrents.

While the prospect of further regulation might raise concerns about reduced competitiveness, I’d argue—echoing von Tetzchner’s point—that this is often a weak argument used to deflect meaningful oversight. There is certainly room to debate how regulation should be targeted, but we shouldn’t pretend regulation is what’s holding European companies back from market relevance—especially when many of those companies are barely present in the field to begin with.

The real issue lies more in the ideological domain. At the moment, the only actors benefiting from the lack of regulation appear to be those pursuing competitive advantage—whether for data exploitation or, more often, political leverage. It may be time to make ethical choices and embrace clear, if sometimes unpopular, decisions when it comes to the application of technology.

Every AI model, for example, is inevitably biased. That’s not speculation—it’s a structural truth. Worldviews and cultural frameworks shape the data, the developers, and the outcomes. Since bias is unavoidable, the question becomes: which biases do we want embedded in our models?

Democracy, freedom of speech, gender equality, human rights—these are not neutral principles. They are normative commitments, and they form the backbone of the liberal values embedded in our societies. These are, frankly, the kinds of “biases” we should want our technologies to reflect and reinforce.

We cannot, in good faith, pretend to uphold value-neutrality while allowing AI models to ignore—or worse, deny—flagrant violations of human rights. Just look at DeepSeek’s now-infamous refusal to acknowledge the Tiananmen Square massacre. That wasn’t a technical limitation. It was a political statement, delivered through code. And a clear example of social engineering in real time.

It’s a repeat of the problem we’ve already seen with social media—platforms riddled with unethical behavior, enabled by a lack of regulation. If TikTok is such a stellar product, then why isn’t it even available in its country of origin? The joke writes itself.

Meanwhile, we in the West have no truly autonomous digital ecosystem that supports and reflects our values—yet we hesitate to regulate, out of fear it might hurt the market dominance of current players.

But at some point, we have to stop hiding behind fear and be courageous enough to defend the values that make the world more just and livable. As in so much else in life, failing to do so doesn’t just risk financial loss—it risks eroding the very foundations of the good life we claim to protect.

Share what you think!
