A few days ago (26.01.2026), Schroeder et al. published a very interesting article in Science titled How malicious AI swarms can threaten democracy*. Short and concise, the article clearly expresses the dangers posed by automated LLMs.

Warnings of this sort are now evident to anyone operating in the field, but they have evolved considerably since Shanahan echoed them years ago in The Technological Singularity. One argument in particular stands out: the models take advantage of our natural need and desire to take part in a community structure. Or rather, not the models themselves, but those behind them, in an attempt to undermine the values of Western democracies.

Pattern recognition and identification of the social structures that make up an online community are clearly a present danger, one that might already be operational. It is not unlike computer labs in enemy countries filled with paid commenters, except more refined: a more accurate understanding of social intricacies, working 24/7 with the goal of causing as much harm as possible.

Arguably, such technologies will be deployed in place of paid human trolls for as long as the economic benefits outweigh the investment. In a complex world of proxy wars and political gains, though, this is hard to measure.

A big issue is the one pointed out by Thornhill in the Financial Times: LLMs are “not concerned with truth because they have no conception of it”.

This is, of course, a classic example of unethical AI use: we can ask a model to replicate non-existent feelings, for example by rewriting a text with more compassion or appreciation for the reader (a typical HR scenario).

Even greater is the risk posed by misinformed professionals making unfounded claims that laughably simple quality assurance, such as a quick check of a cited reference, would have caught. This issue has plagued even big consultancies such as Deloitte, as reported by Fortune just a few months ago*.

However, I would ask the question that I normally pose to my students whenever AI comes up: who is really at fault?

The democratic values that constitute the Western canon are by no means monolithic or a given; we see an erosion of those values across our countries, but this is more a case of moral fatigue and disinterest than of unstoppable outside forces.

An AI agent, swarm-based or otherwise, can scream and type to exhaustion that the sky is purple, and yet it takes collective cognitive dissonance to accept this as a fact and integrate it into what we take as reality. In the never-ending search for inclusion and acceptance, we have strung together elements that, unsurprisingly, weaken our perception of reality under the guise of those same values.

The solution, therefore, lies not necessarily in regulatory frameworks but in the individual’s capacity to reflect and the intellectual agency provided by our societies. What needs restoring is our belief in what is true and desirable, as regulation ultimately mirrors the will of the collective.

Laws and regulations are, ideally, not tyrannical impositions of an enlightened elite, but rather the common wisdom and desire of a human group imprinting what they consider to be acceptable social behavior. Where no one believes in them, they might as well not exist.

As an architect, I know too well that ultimately what brings down a house is not the elements, but the lack of maintenance required for the building to stand the test of time.

Notes:

  1. How malicious AI swarms can threaten democracy | Science
  2. Generative AI models are skilled in the art of bullshit | Financial Times
  3. Deloitte was caught using AI in $290,000 report to help the Australian government crack down on welfare after a researcher flagged hallucinations | Fortune

The Pocket AI Guide is out!

📙 Amazon US: https://a.co/d/gCHHDax
📗 Amazon Germany: https://amzn.eu/d/3cmlIqa
(Also available in other Amazon stores across Europe)

Check the free resources on this website!

Share what you think!
