The Monkeys, the Librarian, and the Magician: Why LLMs aren't an absolute path to AGI

Brian Fending

The hockey-stick curves toward AGI that populate pitch decks assume that scaling LLMs further will yield qualitatively different capabilities. But the impressive scaling charts behind them use logarithmic axes that make diminishing returns look like steady progress. DeepMind's Chinchilla study found a power-law relationship between loss, model size, and training data: as compute grows, the loss curve flattens, and each additional order of magnitude of compute buys less improvement.
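To make the diminishing returns concrete, here is a minimal sketch of the Chinchilla loss law, L(N, D) = E + A/N^alpha + B/D^beta, using the fitted constants reported in Hoffmann et al. (2022). The 20-tokens-per-parameter ratio is the paper's rough compute-optimal rule of thumb; treat the exact numbers as illustrative, not a prediction:

```python
# A minimal sketch of the Chinchilla scaling law (Hoffmann et al., 2022).
# The constants below are the paper's published fits; they are
# illustrative estimates, not exact values.

def chinchilla_loss(params: float, tokens: float) -> float:
    """Predicted training loss for a model with `params` parameters
    trained on `tokens` tokens, per the fitted power law
    L(N, D) = E + A / N**alpha + B / D**beta."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / params**alpha + B / tokens**beta

# Each 10x increase in scale buys a shrinking reduction in loss.
for n_params in [1e9, 1e10, 1e11, 1e12]:
    # Compute-optimal training uses roughly 20 tokens per parameter.
    loss = chinchilla_loss(n_params, 20 * n_params)
    print(f"N = {n_params:.0e} params: predicted loss ~ {loss:.3f}")
```

Run the loop and each tenfold jump in model size roughly halves the loss reduction of the previous step, which is exactly the flattening that log-scaled charts disguise as a straight line.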

The counterargument that invokes agentic systems, multi-agent architectures, and world models concedes the core claim: you're no longer arguing that scaling LLMs is the path to AGI, but that LLMs might be one component of a much more complex system. That's a different bet, with different timelines and capital requirements.

Executives drawing hockey-stick curves to AGI through LLM scaling alone are betting on a trajectory that bends in ways the underlying technology can't support. If AGI emerges, it won't be because we scaled token prediction harder, but because we built fundamentally different coordination, reasoning, and embodiment layers on top of it.
