A couple of weeks ago, while attending a summit (BIM Summit ’25, Oslo), I came across an idea from speaker Elin Hauge that I found particularly interesting.

It was the notion of viewing LLMs as “deterministic word calculators.” I’d actually heard this phrasing before from a very good friend (Kristoffer Thomsen, the most fantastic individual in Cloud foundations at our school), but it wasn’t until that day that the full implications of the concept really clicked for me.

If we begin with the assumption that an LLM and a calculator are essentially the same in principle, one dealing with words and the other with numbers, then we’re also accepting that, in some way, we expect a determinate result. The analogy, as complicated as it may sound, is actually quite easy to grasp.

Deterministic, in this context, can be understood as “expected,” “known,” or even “unchanging”: the same input always produces the same output.

That’s not a problem when we’re working with numbers, because “2 + 2 = 4” holds true for everyone, everywhere. In fact, we’d be extremely suspicious of a calculator that didn’t follow that logic. A machine that behaved so erratically would be mysterious, defective, or perhaps entirely useless for our purposes.
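To make that contrast concrete, here is the calculator’s side of the bargain expressed as code, a trivial Python sketch for illustration only:

```python
def add(a: int, b: int) -> int:
    """A calculator's contract: the same inputs give the same output, every time."""
    return a + b

# Run this as many times as you like, on any machine; the answer never drifts.
assert all(add(2, 2) == 4 for _ in range(1_000))
```

That assertion can never fail, and that reliability is precisely why we trust the tool.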

But here’s where the problem starts: words aren’t deterministic. From a grammatical standpoint, sure, there are patterns and rules that govern how certain words fit together at particular points. That’s the structure. But there’s more to words than grammar: there’s meaning, and meaning is never fully fixed. We cannot guarantee that the way words land on people is predetermined, or even fully predictable. In fact, we can barely guarantee that any two individuals interpret the same concept the same way.

Language, as a concept, is built entirely on agreements. We form shared notions of what something means within a group, leaning on the common ground we hold around an idea. Which is to say: no two people understand exactly the same thing at any given moment—but some part of our understanding overlaps.

When enough of these agreements exist between members of a community, we give rise to a language. And when that community is broad or institutionalized enough, we eventually call it an official language—like we see in nation-states.

The rise of LLMs arguably threatens to alter this entire dynamic, and the reason is simple: we’re applying a deterministic tool to an indeterminate medium. In other words, we’re pretending that the answer is the same for everyone. But as we’ve already established, that’s simply not possible, because that’s not how language works.
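To see why, it helps to peek at the mechanics for a moment. A language model doesn’t compute one answer the way a calculator does; it assigns probabilities to possible next words and then draws one. The sketch below is a toy version of that sampling step, with a made-up distribution and a hypothetical `sample_next_word` helper standing in for what real models do:

```python
import random

# A made-up next-word distribution for the prompt "The meeting was ..."
# Real models produce distributions like this over their entire vocabulary.
next_word_probs = {
    "productive": 0.40,
    "long": 0.30,
    "tense": 0.20,
    "cancelled": 0.10,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Draw one word according to its probability (standard sampling)."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt can complete differently on every run.
for _ in range(5):
    print("The meeting was", sample_next_word(next_word_probs))
```

Even before a human reader enters the picture, there is no single “the answer” here, only a spread of plausible ones, and each of those words will mean something slightly different to whoever receives it.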

The way this expectation is seeping into culture is also quite subtle, almost insidious. We project humanity onto the models. It’s irrational, and yet completely natural: because the exchange feels like a typical social interaction, we instinctively assume the model will give us the “right” answer.

Ironically, it’s in this very domain that the model understands the least. There’s simply no way to guarantee that its interpretation of such an interaction is accurate, no matter how sophisticated the pattern matching becomes. There’s always a margin of error that can’t be erased.

Our tendency to humanize tools isn’t going away. If anything, it’s bound to increase as these systems become further embedded in our daily lives. The point, then, isn’t to attack the tech—or even resist it—but to recognize a flaw in ourselves, in the hope that we continue to make good decisions and account for the inevitable mistakes in our calculations.

Share what you think!
