On September 11th, 2025, Albania made headlines with the appointment of “Diella,” an AI-powered minister tasked with making public procurement “corruption-free” (BBC).

As usual, proponents used the kind of flowery language we’ve all grown accustomed to by now. And while it’s true that the idea of an “objective being” (real or not) sounds appealing, implying that results will be bias-free, they were also quick to slip in the caveat: “if correctly programmed.”

A daring insinuation, really—one that quietly reaffirms what we already know: that AI models and bots are full of bias, and can just as easily be misused.

Let’s assume for a minute that, somehow, a real and almost flawless AI has emerged—in Albania of all places—and that it’s genuinely capable of carrying out government tasks with efficiency and transparency.

If we turn a blind eye to citizen concerns around privacy, we’re then faced with the next set of logical questions, namely:

  1. How is this marvelous new technology actually going to guarantee the implementation of transparency protocols?
  2. And more importantly—what sort of government agency holds enough authority to act on the results of unbiased analyses produced by computational systems?

After all, corruption stems from the physical actions of public officials, supported by equally compromised structures of power. The declaration of transparency is as much a moral stance as it is a practical one—particularly in highly restrictive governments, which often have far fewer issues with the silent critiques of an AI system than with the scrutiny of a free press.

In the end, what we seem to be witnessing is authoritarian regimes expanding their reach by automating approvals through flawed technology, where public hype serves to validate even the most absurd outcomes. After all, AI doesn’t make mistakes… or does it?

When all is said and done, the real question remains: who exactly is guarding the systems we’re handing so much power to? 

I have yet to envision a future—happy or otherwise—where not only my e-stores, but even my government, redirects me to a chatbot reminding me how “important my call is,” while offering no real solution at all.

At the very least, this circus is now being staged under the premise of “transparency for all.” That’s insulting.


The Pocket AI Guide is out!

📙 Amazon US: https://a.co/d/gCHHDax
📗 Amazon Germany: https://amzn.eu/d/3cmlIqa
(Also available in other Amazon stores across Europe)

Check out the free resources on this website!

Share what you think!
