One of the most fascinating—and deeply unsettling—aspects of AI is how easily it can be used to fabricate images or falsify data. That’s the usual framing, at least. But there are far more insidious applications of the technology that don’t get enough attention.
Back in the mid-1960s, Joseph Weizenbaum at MIT introduced the now-famous chatbot known as ELIZA, an early rule-based conversational program designed to simulate a psychotherapist's dialogue. The system operated with a limited set of interactions: when it recognized certain keywords, it would trigger deeper responses built around them. If no keywords were detected, it would default to generic prompting questions, looping as many times as necessary.
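The loop described above, a keyword triggering a canned response and a generic prompt as fallback, can be sketched in a few lines of Python. The rules below are hypothetical stand-ins for illustration, not Weizenbaum's original script:

```python
import random

# Hypothetical keyword-to-response rules (not ELIZA's actual script).
RULES = {
    "mother": "Tell me more about your mother.",
    "sad": "Why do you feel sad?",
    "always": "Can you think of a specific example?",
}

# Generic prompts used when no keyword matches.
FALLBACKS = [
    "Please go on.",
    "How does that make you feel?",
    "Can you elaborate on that?",
]

def respond(message: str) -> str:
    """Return a canned reply if a keyword matches, else a generic prompt."""
    lowered = message.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:
            return reply
    return random.choice(FALLBACKS)
```

Even this toy version hints at why the illusion works: the fallback questions keep the conversation going indefinitely, and the user fills in the meaning themselves.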
To the surprise of many, people who interacted with ELIZA often reported the feeling of conversing with a real human being. This psychological phenomenon became known as the “Eliza Effect”—a term we still use to describe the moment when individuals perceive machine interactions as genuinely human. This is, in a sense, a predecessor to many of the problems we risk facing today with AI.
Anyone with experience using language models as writing support will often sense when something was AI-generated. But someone with less familiarity can quickly fall prey to more sinister uses. LLMs, after all, are capable of fabricating virtually any type of response. And with that comes the ability to simulate emotions and project feelings that aren’t real.
This isn’t just a dream scenario for a manipulative personality—it’s a textbook setup for psychopath-level behavior at scale. One of the clearest examples is in HR or recruitment scenarios, where an unscrupulous AI could simulate sympathy, build rapport, and gain trust—all while exploiting the user’s vulnerability.
It’s not just a technical concern. It’s an ethical red line—and we’re already blurring it. Or rather, criminal networks might already be. As AI systems become more sophisticated in areas like customer service, it’s not hard to imagine the flood of highly convincing scams that could follow.
This time, language is no longer a barrier. What once limited these groups—the challenge of communicating across dialects and cultural nuances—has effectively disappeared. Modern models switch seamlessly between languages, and real-time translation is improving by the day.
Naturally, the victims are—as always—those with limited understanding of the technology and heightened emotional vulnerability. That often means aging populations in Western societies, many of whom are already isolated by disintegrating family structures and communities.
In short: the lonelier our societies become, the more exposed we are to an onslaught of emotionally hollow but ruthlessly optimized machines, ready to exploit any human who's willing to listen. All of it a clear misuse of one of our most developed capacities, the very one that lays the groundwork for our life as a society.
It’s impossible to predict every way a technology this powerful might be twisted—but hiding from it is not the answer. Just as AI can manipulate the vulnerable, it can also quietly distort the perceptions of otherwise healthy minds—fueling paranoia, reinforcing cognitive loops, and turning belief into delusion through sheer repetition.
The antidote isn’t more algorithms—it’s more connection. Human relationships, social trust, and the rebuilding of meaningful communities may be among the only real defenses we have. Not just to protect the most vulnerable—but to prevent the rest of us from seeking comfort in the digital echo chambers that reflect only what we already believe.
Share what you think!