Stochastic Parrots and Thinking Machines
Why metaphors matter when talking about AI
Time is money. You spend it, save it, invest it, and waste it. It’s quantifiable and scarce, so you’d better use it well.
Time is a river. It flows through history and into the future. You navigate it as it carries you through life, but you can’t tame it.
Metaphors matter.
Metaphors are much more than language devices - they are structures that help us see, think, and reason. They let us take a concept we already understand and use it to make sense of concepts we know much less about.
Good metaphors enhance our understanding of the world. Bad ones limit it.
Our metaphors about AI suck. Many of them don’t just limit our understanding of AI but actually lead us astray.
We often talk of machines thinking when it’s clear they can’t think in any recognisable way. And while thinking might just be shorthand for processing, this projection of human capacity onto a machine is so much stronger with LLMs simply because they can produce natural language outputs that are indistinguishable from a human’s.
But then we talk about LLMs hallucinating, which is even worse. Not because they can’t think, but because LLMs lack any explicit grounding in truth. To hallucinate implies you have somehow diverged from truth or reality, but LLMs never had any connection to it in the first place.
LLMs generate content. Sometimes that content aligns with reality and sometimes it doesn’t. When an AI tells you to put glue on your pizza, it isn’t being nefarious or hallucinating. It isn’t even a mistake - it’s just content.
In fact I’m not even sure it makes sense to say an LLM can make mistakes. It would seem really odd to claim that the dice made a mistake at the craps table, or that a coin made a mistake by coming up heads rather than tails. LLMs aren’t fundamentally that different.
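To make the dice comparison concrete, here’s a minimal toy sketch in Python. The tokens and scores are invented and this is nothing like how a real model is implemented, but the core idea holds: each word an LLM emits is a weighted random draw from a distribution over possible next tokens, so the same prompt can produce different outputs, and no single draw is a mistake any more than a die coming up six is.

```python
import math
import random

# Toy next-token scores - invented for illustration, not from any real model.
next_token_scores = {"cheese": 2.1, "pineapple": 1.3, "glue": 0.4}

def sample_next_token(scores, temperature=1.0):
    """Turn raw scores into probabilities (softmax) and draw one token."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    tokens = list(exps)
    weights = [exps[tok] / total for tok in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "prompt" gives different answers on different runs - a draw,
# not a mistake, in the same sense that a die showing six isn't a mistake.
print([sample_next_token(next_token_scores) for _ in range(5)])
```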
I don’t have any good answers here. Thinking about LLMs as auto-complete-on-steroids is useful for curbing our tendency to anthropomorphise, but it undersells just how amazing LLMs are. Black box captures our lack of understanding of how the internals work but doesn’t help us use them. Stochastic Parrots is better as it incorporates the non-deterministic nature of LLM outputs, and it’s kind of cute too. But it still sells AI short.
Sometimes I think about LLMs as pattern guns - tools that shoot out words based on their massive library of internal patterns. I like this metaphor because it highlights how powerful LLMs are, as well as how dangerous they can be if used without care. But this too only captures a small part of the nature of LLMs.
As entrepreneurs and engineers working with AI, finding better metaphors isn’t just academic - it shapes what we build. Framing LLMs as stochastic parrots or pattern guns rather than oracles helps us see both the absurdity of using them where truth matters and the genuine opportunities they present for generative applications where truth is beside the point.