One of the most common misconceptions about AI is the belief that it understands, feels, or thinks. None of this is true. AI tools are impressive in how they manipulate signifiers and apparent meanings, but they are ultimately just cleverly crafted algorithms.

This misunderstanding is a recurring ethical concern in AI, as it can give the impression that these tools interact much as humans do. The important thing to remember is that, unlike with humans, there is no ultimate purpose or broader intention behind an AI’s responses.

For example, if you request “a poem,” the program has no real way of recognizing or assigning value to the text. For a large language model (LLM), there is no inherent difference between a literary masterpiece and a fortune cookie message. In fact, some of the key processes behind an LLM’s output are:

  1. Pattern recognition: This allows the algorithm to learn the regularities in how we normally use language. Simply put, what do people mean when they use an expression like “cold as ice” or “hard as a rock”?
  2. Contextual interpretation: The model relies on a myriad of associations that depend on the particular interaction, none of which carry any real value or meaning for the AI. Basically, it responds based on what we have expressed, not based on any opinion of its own or any desire to express anything.
  3. Probabilistic language modeling: This involves the advanced use of probability: the system “interprets” likely answers by scoring possible continuations against the millions of texts it has processed (see the sketch after this list). Language is structured, and human conversations tend to follow similar patterns and expressions. In a broad sense, we’re not all that different, and our populations have well-defined idiosyncrasies and behaviors.
  4. Coherent response generation: The model completes the operation by ensuring that the answer it gives us “makes sense” relative to the conversational patterns it has learned. However, the fact that these responses make sense does not imply real understanding.
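To make the probabilistic idea concrete, here is a minimal sketch in Python. It is emphatically not how a production LLM is built: it uses a toy bigram model (word-pair counts) over a few invented sentences, whereas real systems use neural networks over subword tokens. But the core move is the same one described in point 3 above: the next word is chosen by probability, with no notion of what any of it means.

```python
import random
from collections import defaultdict

# A few made-up sentences standing in for training data.
corpus = (
    "the night was cold as ice . "
    "his heart was hard as a rock . "
    "the night was dark and cold ."
).split()

# Count bigram frequencies: counts[w1][w2] = how often w2 followed w1.
counts = defaultdict(lambda: defaultdict(int))
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    words = list(followers)
    weights = list(followers.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation. The model "writes" with no idea of
# what the words mean -- only of what tends to follow what.
word, output = "the", ["the"]
for _ in range(7):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Run it a few times and the output changes, because each word is sampled, not understood. Scale the same idea up by billions of parameters and you get fluent text produced with exactly the same absence of intent.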

That said, understanding how the tool works doesn’t necessarily help us overcome the bias that makes us “think” we are talking to another human being. A terrifying scenario is the one Murray Shanahan proposes in The Technological Singularity, where AI human resource agents could perfectly mimic human compassion, essentially exploiting our social instincts.

It’s not unlike interacting with a psychopath who takes advantage of someone who’s none the wiser.

Ultimately, we might be heading towards what a friend mentioned to me a couple of days ago: “Eventually, we will forget we’re not talking to another person.” True as that may be, understanding the tool a bit better might be the best defense we have… for now.

Share what you think!