The Truth Problem

LLMs can't lie because they don't know the truth

Forty years ago, in his essay On Bullshit, the philosopher Harry Frankfurt drew a beautiful distinction between lies and bullshit that can help us frame our thinking about AI today.

Lying requires an understanding of the truth so that the truth can be subverted. Bullshitting has no such requirement: the bullshitter is indifferent to whether what they say is true, so long as it serves their purpose.

Generative AI can’t lie to you because it doesn’t know the truth. It can’t deceive you and it certainly can’t be honest with you.

LLMs aren’t thinking machines, at least not in any human or animal sense of thinking. They are vast networks of statistical associations, trained and fine-tuned to produce human-sounding outputs.

Sometimes these outputs are useful, sometimes extremely useful. Other times they are not.

This is where I see the engineering of agentic systems diverging from other kinds of software: we have to work out where AI can produce consistently useful bullshit, and where it can’t. In practice, that often means pairing the model’s output with a deterministic check we can compute ourselves, as in the sketch below.
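Here is a minimal sketch of that idea in Python. Everything in it is illustrative: generate() is a hypothetical stand-in for whatever LLM client you use, and the invoice task is a made-up example. The point is the shape of the pattern: the model’s answer counts as unverified bullshit until an independent check passes.

```python
import json

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call. A real version would
    # send `prompt` to a model API and return its text response; here
    # we return a canned reply so the sketch runs on its own.
    return '{"total": 157.50}'

def extract_invoice_total(invoice_text: str, line_items: list[float]) -> float | None:
    """Ask the model to read a total from messy text, but only accept
    the answer if it survives a check we compute ourselves."""
    raw = generate(
        'Return only a JSON object like {"total": 123.45} giving the '
        "invoice total in this text:\n" + invoice_text
    )
    try:
        total = float(json.loads(raw)["total"])
    except (ValueError, KeyError, TypeError):
        return None  # the model bullshitted the format; reject, don't repair
    # Deterministic verification: the claimed total must match the sum
    # of line items we parsed independently of the model.
    if abs(total - sum(line_items)) > 0.01:
        return None  # plausible-sounding but wrong; discard it
    return total

if __name__ == "__main__":
    items = [100.00, 37.50, 20.00]
    print(extract_invoice_total("Invoice...\nTotal due: $157.50", items))
    # -> 157.5, because the model's claim matches sum(items)
```

Where no such independent check exists, the output has to stay in the unverified pile, and mapping that boundary is exactly the engineering work.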