Coherence and Cogency

LLMs are coherence machines, not truth-checkers

In philosophy, there’s an important distinction between coherence and cogency - one that cuts to the heart of what makes an argument convincing versus what makes it (probably) true.

Coherence is about internal consistency - how all the pieces of a narrative fit together without contradiction. A coherent story hangs together; it feels complete. Cogency goes further - it requires both a strong logical structure and true premises. A coherent argument can still be wrong, but a cogent argument provides good reasons to believe its conclusion.

Psychology also reveals a similar split operating in human cognition, most famously described in Daniel Kahneman’s book Thinking, Fast and Slow, which frames human thinking as two competing systems.

System 1 works through pattern matching and narrative consistency - it asks whether a story “feels right,” whether the pieces cohere into a satisfying whole. It’s fast, easy, and comfortable. System 2 engages in deliberate logical evaluation, checking whether an argument actually follows and whether its premises are supported by evidence. It’s slow, expensive, and hard.

System 1’s coherence can create compelling but fundamentally wrong conclusions. Because it is easy, because it is fast, because it always has an answer to any question we ask, it takes a lot of effort to rein it in and use deliberate reasoning instead. This is a large part of why we’re susceptible to cognitive biases like the availability heuristic, confirmation bias, and conspiracy theories - all of which feel perfectly coherent while being utterly non-cogent. If we aren’t conscious of these two systems - if we go too fast and fail to reflect - we will inevitably make bad decisions.

It’s now very clear that this distinction carries over to LLMs, with one important caveat - LLMs are coherent, not cogent.

LLMs are essentially coherence machines - System 1 engines without the capacity for System 2’s reflection and truth-checking. Trained on next-token prediction, they have mastered internal consistency and narrative flow. They excel at maintaining topic coherence and structuring arguments that feel complete. But this fluency is superficial - it is linguistic coherence, not logical cogency.

LLM architecture fundamentally precludes cogency. Cogency requires strong logical structure and true premises. LLMs have no access to external reality to verify premise truth, and no built-in mechanism for checking logical validity. They can recognize when something contradicts their training patterns - when a sentence “doesn’t sound right” - but cannot distinguish between valid and invalid inferences. They simply complete patterns.
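To make this concrete, here is a deliberately tiny, invented sketch - a toy bigram “model” whose only question is “what usually comes next?”. Every token and probability in it is made up for illustration and bears no resemblance to a real transformer, but the shape of the objective is the same: the loop maximises plausible continuation, and at no point does it check a claim against the world.

```python
import random

# Invented continuation statistics: last token -> (next token, weight).
# Note what is missing: there is no column for "is this true?".
BIGRAMS = {
    "the":       [("capital", 1.0)],
    "capital":   [("of", 1.0)],
    "of":        [("australia", 1.0)],
    "australia": [("is", 1.0)],
    "is":        [("sydney", 0.7), ("canberra", 0.3)],  # fluent either way; only one is true
    "sydney":    [("<end>", 1.0)],
    "canberra":  [("<end>", 1.0)],
}

def next_token(token: str) -> str:
    """Sample the next token purely from continuation frequency."""
    candidates = BIGRAMS.get(token, [("<end>", 1.0)])
    tokens, weights = zip(*candidates)
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(prompt: str, max_tokens: int = 12) -> str:
    """Keep appending whatever 'usually comes next' until the pattern ends."""
    tokens = prompt.lower().split()
    while len(tokens) < max_tokens:
        nxt = next_token(tokens[-1])
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

if __name__ == "__main__":
    # Reads as a perfectly coherent sentence, yet the statistically
    # favoured completion ("sydney") is simply false.
    print(generate("the capital of"))
```

Run it a few times and it will happily assert that the capital of Australia is Sydney - perfectly fluent, confidently wrong.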

So when we see “hallucinations”, we need to realise that these are the expected output of a coherence engine without truth constraints. LLMs can generate beautifully structured, factually wrong content because they’ve seen many examples of coherent but non-cogent human writing.

This can be a real danger if we are not careful. Because LLM outputs perfectly satisfy our System 1’s hunger for coherence, it becomes very easy to skip deliberate reflection on them. An AI that cannot be cogent, paired with humans who often cannot be bothered to be, is a particularly insidious combination. We can now outsource our easy thinking, leaving ourselves only the hard stuff - which can be exhausting.

This isn’t an indictment of LLMs - they’re a genuinely paradigm-shifting technology. The solution is to use them deliberately - let them provide coherence, but never forget that only we can deliver cogency.