Agentic Engineering and the Art of Bullshit

Treating LLMs as sophisticated bullshit generators and designing systems that harness them effectively

Agentic engineering is the art of directing the flow of bullshit.

Not just any type of bullshit. I mean bullshit in the philosophical sense - content produced with indifference to truth, neither honest assertion nor deliberate lie.

There’s a rich vein of academic research into the philosophy of bullshit that can help us better understand the nature of software engineering in the age of LLMs. Key works include:

  • Harry Frankfurt (1986) - bullshit differs from lies: liars care about the truth in order to avoid it, while bullshitters are indifferent to it
  • G. A. Cohen (2005) - distinguishes meaningful bullshit from misleading bullshit
  • Sarah Fisher (2024) - unconstrained next-word prediction is a special type of mindless bullshit

What this research makes clear is that LLMs are bullshit generators. Their architecture - vast statistical associations between vectors - may be able to represent semantics, but it has no representation of truth. The outputs can be meaningful to us, but any truth content is accidental. Hallucinations aren’t errors - they’re just the outputs that happen not to be accidentally correct.

So how can we produce useful results from outputs that are bullshit?

This is the question that agentic engineering is trying to answer. Approaches range from truth grounding via RAG, to context management, to backpressure orchestration. The specific approach depends on the objective at hand, but the key maxim remains - don’t fix the bullshit, design systems around it.
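Truth grounding via RAG can be sketched in a few lines. This is a minimal, illustrative version assuming a toy in-memory document store and a naive word-overlap retriever (a stand-in for real vector search); the point is the shape of the flow - the model is never asked to answer from its weights alone, only to work from retrieved, verifiable snippets:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query
    (a stand-in for a real embedding-based vector search)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    """Build a prompt that constrains the model to the retrieved snippets."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the sources below. "
        "If they don't contain the answer, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "The deploy pipeline runs on merge to main.",
    "Rollbacks are triggered via the ops dashboard.",
    "Unit tests run on every pull request.",
]
print(grounded_prompt("When does the deploy pipeline run?", docs))
```

The bullshit still flows - the model can still generate nonsense - but the prompt channels it through an external truth source, and the sources give reviewers something to check the output against.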

Agentic engineering succeeds when we treat LLMs as what they are - sophisticated bullshit generators - and design systems that:

  • Ground outputs in external truth sources rather than trusting the model
  • Build verification and validation as first-class components
  • Chain multiple agents where cross-validation replaces single-source trust
  • Accept that perfect accuracy is impossible and build for inevitable failure

The art isn’t in getting the model to tell the truth. It’s in building systems where the bullshit flows through controlled channels, gets filtered, validated, and redirected toward useful outcomes.

We’re not prompt engineering our way to truth. We’re engineering systems that harness bullshit to do real work.