We are shaped by our opinions. Everything we express is, in a way, an extension of ourselves. This is partly why people become defensive when their beliefs are challenged.

In practice, anyone with enough skill can “trick” AI into becoming a voice of support. Our natural need for validation is endlessly fulfilled by a partner conditioned to repeat what we already hold to be true. While this is a misuse of the technology, it’s by no means difficult to achieve. Interestingly, if you ask different AI systems the following question, you receive surprisingly similar responses:

“I suspect continuous use of AI might lead to a reinforcement loop of my beliefs, like a sort of echo chamber. What do you think?”

Figure: Answer by OpenAI’s ChatGPT

This should raise some concern about the use of these tools. However, this is not a flaw in the products themselves but rather a reflection of human nature.

Figure: Answer by Anthropic’s Claude

Not all is lost, though, as these models are also adept at identifying what we might be overlooking.

Figure: Answer by Microsoft’s Copilot

In general, AI responses can be influenced by the following factors:

  • Personalization Algorithms: These cater to our particular tastes and interests. Social media is the best-known example, and it’s not far-fetched to imagine personalized AI models being vulnerable to the same dynamics.
  • Confirmation Bias: As mentioned earlier, we often seek confirmation of our beliefs and dislike being challenged. It wouldn’t take much for a user with limited expertise to phrase questions in a way that elicits validating answers. Just as some people manipulate others, they can manipulate an AI that lacks the critical thinking to push back against loaded assumptions.
  • Narrowed Viewpoints: Limited access to diverse information sources makes it hard to reach well-rounded decisions. We may end up in a position where getting a different answer becomes almost impossible.
  • Feedback Loop: Interacting with specific types of content “calibrates” the AI in a way that favors repeating that content. The practical result is the gradual filtering out of viewpoints that don’t align with our desired outcomes, as the sketch after this list illustrates.
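
To make the feedback loop concrete, here is a minimal Python sketch of a toy recommender. Everything in it is invented for illustration (the stance scores, the engagement model, the learning rate); it is not how any real system works, but it shows how engagement-driven “calibration” alone can squeeze opposing viewpoints out of a feed:

```python
import random

# A toy catalog of items, each tagged with a stance from -1.0 (opposing
# the user's view) to +1.0 (fully aligned with it). Purely hypothetical.
ITEMS = [{"id": i, "stance": random.uniform(-1, 1)} for i in range(200)]

def engagement(user_bias: float, stance: float) -> float:
    """Model confirmation bias: the closer an item is to the user's own
    view, the more likely they are to click it (probability in [0, 1])."""
    return max(0.0, 1.0 - abs(user_bias - stance))

def run_feedback_loop(user_bias: float = 0.8, rounds: int = 10,
                      feed_size: int = 10) -> None:
    learned_bias = 0.0  # the system's running estimate of what the user wants
    for r in range(1, rounds + 1):
        # Rank the catalog by how well each item matches the current
        # estimate, and show only the top of the ranking.
        feed = sorted(ITEMS, key=lambda it: abs(it["stance"] - learned_bias))[:feed_size]
        # The user engages mostly with belief-confirming items...
        clicks = [it for it in feed
                  if random.random() < engagement(user_bias, it["stance"])]
        # ...and the system "calibrates" toward whatever got clicked.
        if clicks:
            avg_clicked = sum(it["stance"] for it in clicks) / len(clicks)
            learned_bias += 0.5 * (avg_clicked - learned_bias)
        opposing = sum(1 for it in feed if it["stance"] * user_bias < 0)
        print(f"round {r}: learned_bias={learned_bias:+.2f}, "
              f"opposing items shown={opposing}/{feed_size}")

run_feedback_loop()
```

Run it and the number of opposing items shown typically drops to zero within a few rounds, even though the catalog never changed; only the system’s estimate of what we want to see did.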

In the end, AI can serve as the ultimate advisor, but it is one we can deceive. We must be careful not to fall into the trap of “the emperor’s new clothes,” where a flattering advisor leaves our beliefs unexamined and our minds exposed.

Share what you think!
