Writing, like music, painting, or any other form of human expression, is deeply tied to emotion. It shouldn’t come as a surprise, then, that given enough data, we can interpret whether a text is loaded with happiness or steeped in melancholy.
The idea shouldn’t be hard to grasp—anyone who’s spent time with a book has experienced a wide range of emotions. And it’s not only because the author is skilled at evoking them; it’s simply part and parcel of human interaction to read subtext or to “read between the lines.”
True, literary theorists like Wolfgang Iser have already pointed out that a text exists in connection with its reader. In essence, it’s an interaction not unlike the one you’d have in a regular conversation. But let’s walk a little further into that territory, at the risk of falling prey to wild speculation.
We must acknowledge that communication through a written medium is certainly not the same as speaking face to face. Still, both forms of interaction carry their own versions of micro-expressions, gestures, and inferences; they are simply expressed differently. The issue is that most people don’t write nearly enough for those patterns to become obvious.
That doesn’t mean we can’t draw a soft comparison using simpler variables. For instance, think of a friend or loved one you frequently communicate with. They definitely have patterns—phrases, even emojis—they tend to use, and whether you realize it or not, you’re reacting to them. We even form emotional associations: your overconfident friend probably ends every message with a wink face, just as moms often wrap up conversations with hearts and emojis that radiate “affection.”
This same type of interpretation now extends into our digital corporate lives, where tools like Teams or Outlook let you “react” to messages. You instinctively know that in a corporate setting, a thumbs-up is a neutral sign of acknowledgment. Still, you’ll have that one colleague who uses oddly specific reactions, and that other one who breaks the norm by showing affection in chats. More often than not, these digital quirks correlate with real-world behavior, and reading them has already become an ordinary facet of our lives.
All of this, of course, is permeated and shaped by cultural patterns and even generational trends. After all, memes are a sort of “meta-language,” intelligible only to the groups that were around during their peak popularity. It takes no small amount of talent to understand how other groups use them, or even to grasp the references they’re making. It’s an accelerated form of pop-culture language, one that has been part of human interaction ever since we started communicating through images.
Therefore, given enough text to draw inferences from, AI can offer insight into what an author might be feeling. With nuance, of course. For instance, someone who clearly enjoys writing might be overflowing with joy while creating a piece, even if the tone itself leaks sadness. The reading gets murkier still if parts of the text have been rewritten by a model to refine grammar or clarify meaning.
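To make that concrete, here is a minimal sketch of how such an inference is often run in practice: a pre-trained emotion classifier scoring short passages of text. It uses the Hugging Face transformers library; the model name and the sample sentences are assumptions chosen purely for illustration, not anything drawn from this essay.

```python
# A minimal sketch: scoring the emotional tone of short texts with an
# off-the-shelf emotion classifier. The model name below is an illustrative
# assumption; any emotion-classification checkpoint would serve the same role.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

samples = [
    "I finally finished the draft, and I can't stop smiling.",
    "The house feels quieter than it used to.",
]

# The pipeline returns one {'label', 'score'} dict per input text.
for text, result in zip(samples, classifier(samples)):
    print(f"{text!r} -> {result['label']} (confidence {result['score']:.2f})")
```

A classifier like this only labels the surface tone of the words, which is exactly the nuance mentioned above: it can tell you that a passage reads as sad without knowing whether the author was enjoying the act of writing it.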
But even that alone gives us valuable meta-information about an individual’s speech patterns and behavior. It revitalizes writing and reading—taking them beyond what we typically credit them for—and opens up new windows into the human psyche and how it interacts with the world.
Try it yourself. Ask AI, “Is it possible for you to infer my emotional state from the paragraph you just received?” and have fun with, or be amazed by, what you might uncover.
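If you would rather run that experiment from a script than a chat window, the same question can be sent through an API. The sketch below is only an illustration: it assumes the OpenAI Python SDK (v1+), an API key already set in your environment, and an arbitrarily chosen model name; any chat-capable provider would work the same way.

```python
# A minimal sketch of asking a chat model to infer emotional state from a
# paragraph. Assumes the `openai` package (v1+) is installed and that
# OPENAI_API_KEY is set; the model name and paragraph are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

paragraph = (
    "The house feels quieter than it used to, but I keep finding reasons "
    "to stay up late and write anyway."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whichever model you have access to
    messages=[
        {
            "role": "user",
            "content": (
                "Is it possible for you to infer my emotional state from the "
                "paragraph you just received?\n\n" + paragraph
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

The point is not the particular provider: feed the model a paragraph you actually wrote and see how it reads you.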
Share what you think!