Not Even Wrong

How restricted context windows help us build better agentic systems

The worst insult a physicist can throw your way is that you’re “not even wrong,” a put-down attributed to Wolfgang Pauli.

Being wrong requires intellectual discipline: clear definitions, determinate truth conditions, and falsifiable claims that reality can contradict.

Being wrong is actually quite respectable. It means you have engaged with evidence and made claims precise enough to fail.

AI is not even wrong because it fails to meet that bar. LLMs produce fluent assertions without stable semantics, grounded reference, or commitment to a hypothesis. They pattern-match their training data without understanding it. Their outputs are optimized for plausibility, not truth, and they shift with context and prompting.
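
To make the plausibility point concrete, here is a minimal sketch of next-token sampling, the mechanism underneath every LLM response. The vocabulary, prompt, and logit values are invented for illustration; a real model scores tens of thousands of tokens. What the sketch shows is structural: a softmax turns raw scores into a probability distribution, the output is drawn from that distribution, and no step checks the sampled claim against the world.

```python
import math
import random

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Turn raw token scores into a probability distribution."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the next token after "The capital of Australia is".
# The numbers are made up, but the shape is realistic: "Sydney" co-occurs
# with "Australia" in training text more often than "Canberra" does, so it
# can outscore the factually correct answer.
vocab = ["Sydney", "Canberra", "Melbourne"]
logits = [2.1, 1.9, 0.4]

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"{token}: {p:.2f}")

# The "answer" is a draw from the distribution. It varies run to run and
# with temperature; no step ever consults a truth predicate.
print(random.choices(vocab, weights=probs, k=5))
```

Changing the temperature or the prompt reshapes the distribution, which is why the same question can yield different answers on different days.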

In short, they are fluent in bullshit.

But this doesn’t mean LLMs are useless. Far from it: LLMs are a generation-defining technology that will radically transform the world.

Just don’t rely on them for the truth.