The current iteration of what we call AI is, as we’ve discussed, based on LLMs. A highly simplified way to understand how the system works is to see it as a statistical model: it learns patterns from a vast pool of text and, when you write to it, in effect compares your words against the almost unthinkably large set of voices and viewpoints it absorbed during training.
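To make that picture a bit more concrete, here’s a minimal sketch in Python: a toy “bigram” model that learns which word tends to follow which in a tiny corpus, then generates text by sampling from those counts. Real LLMs are neural networks trained on trillions of tokens, not lookup tables, and the corpus here is made up purely for illustration; but the core statistical move, learn the patterns and then continue them, is the same.

```python
# A toy illustration of the "statistical pattern" idea: a bigram word model.
# Real LLMs are vastly more sophisticated, but the core move is similar:
# learn which continuations tend to follow a given context, then sample.
import random
from collections import defaultdict, Counter

# A made-up miniature corpus, standing in for the vast pool of training text.
corpus = (
    "writing reveals thought and thought shapes writing "
    "honest writing leaves an imprint and an imprint reveals the writer"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Continue a text by repeatedly sampling a statistically likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no known continuation: stop
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("writing"))
```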
This leads to all sorts of interesting results, often rooted in ideas we don’t commonly reflect on. For instance, every piece of text carries “traces” of the author’s ideas and viewpoints. Write long enough, and you leave an imprint. Write honestly, and you begin to craft a psychological pattern.
Writing, after all, lets you formulate your thoughts and offers others a window into your psyche. Some psychological tools, expressive-writing exercises such as journaling, are in fact nothing more than writing in detail about what bothers you in order to reach a clearer awareness of those issues.
Whenever we interact with AI in a “conversational” manner, we’re tapping into a vast mass of meanings and ideas that have been collectively assembled. From this perspective, the claim that AI models are biased is true, but not necessarily in the negative light it’s often presented in. It’s bias in the way that books, experiences, and cultural influences shape an individual’s personality to a surprising degree.
Now then, AI is not sentient—so your interactions are essentially reflections of your own thoughts coming back at you. Much like a mirror, the model isn’t responsible for anything other than reflecting what’s presented to it. And what it reflects is, ultimately, you.
By extension, the more you read and integrate knowledge, the more patterns and points of contact become available to you. Over time, you “sync” better with the model, and your ability to use it effectively increases, perhaps even to the point where you begin to “see yourself” more clearly, and progressively so. Critically challenging the narrative the model presents works by the same mechanism as critical thinking pushing back against a newspaper’s editorial. And this brings us to the core problem we wanted to address in this piece.
Remember the story of Narcissus in Greek mythology?
Narcissus was a man so handsome, the story goes, that he fell in love with his own reflection in a lake—eventually falling in and drowning.
This is the same danger we face with AI models: a self-reinforcing feedback loop in which we become so enamored, so certain of our own thoughts, that we condition the system to reflect them back to us, until we drown in the very lie we’ve created.
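You can even watch a miniature version of that drowning in code. The sketch below reuses the toy bigram model from earlier (with another made-up corpus) and retrains it, generation after generation, on its own output, printing how its vocabulary shrinks along the way. It’s a tiny, illustrative cousin of what researchers call “model collapse,” not a measurement of any real system.

```python
# A toy simulation of the Narcissus loop: a model retrained on its own echo.
# Purely illustrative; the corpus and numbers are invented for this sketch.
import random
from collections import defaultdict, Counter

def train(words):
    """The same toy bigram model as before: count which word follows which."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def sample(model, start, length=40):
    """Generate text by repeatedly sampling a statistically likely next word."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            out.append(start)  # dead end: restart from the seed word
            continue
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return out

corpus = ("the reflection flatters the writer and the writer trusts "
          "the reflection while doubt slowly fades from every honest page").split()

seed = corpus[0]
for generation in range(8):
    print(f"gen {generation}: vocabulary = {len(set(corpus))} distinct words")
    # Each new generation learns only from the previous generation's output,
    # so rare words get lost and the "thoughts" grow ever more self-similar.
    corpus = sample(train(corpus), seed)
```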
The biggest danger of AI models being so readily available might not be the spread of fake news to others, but rather the quiet, internal delusions they can foster: sweet lies we consume and call truth.

Share what you think!