Demis Hassabis: World Models and AGI's Missing Pieces
DeepMind CEO explains jagged intelligence, why world models are essential for AGI, and how SIMA agents in Genie-generated worlds could unlock robotics.
Why Current AI Has Jagged Intelligence
This is Demis Hassabis on his home turf - the Google DeepMind podcast with Hannah Fry - talking about where AI actually is versus where it needs to be for AGI. The conversation is refreshingly technical and honest about current limitations.
The "jagged intelligence" framing is key to understanding why we don't have AGI yet. Current models can win gold medals at the International Math Olympiad while failing basic logic puzzles. They can analyze complex philosophy but struggle with consistent chess play. Hassabis doesn't treat this as a minor bug - it's a fundamental architectural gap. "You would expect from an AGI system that it would be consistent across the board."
World models are Hassabis's longest-standing passion, and this interview explains why he thinks they're essential. Language models understand more about the world than linguists expected - "language is richer than we thought" - but spatial dynamics, intuitive physics, and sensorimotor experience can't be captured in text. For robotics and truly universal assistants, you need systems that understand cause and effect in physical space.
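To make the term concrete: in the learned-dynamics sense, a world model is a function that predicts what happens next given the current state and an action. Below is a minimal sketch in PyTorch - purely illustrative, not DeepMind's architecture; the state and action dimensions, the MLP, and the random training batch are all placeholder assumptions.

```python
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    """Minimal learned-dynamics model: predict the next state from (state, action).

    Illustrative only - real world models like Genie operate on video frames
    with far richer architectures.
    """
    def __init__(self, state_dim=8, action_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),  # predicted next state
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

# One training step on placeholder (state, action, next_state) transitions.
model = TinyWorldModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
state, action, next_state = torch.randn(32, 8), torch.randn(32, 2), torch.randn(32, 8)
loss = nn.functional.mse_loss(model(state, action), next_state)
opt.zero_grad(); loss.backward(); opt.step()
```

Hassabis's point is that the mapping such a network has to learn - cause and effect in physical space - can't be distilled from text; it has to come from interaction or video.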
The Genie + SIMA loop is the real headline here. They're dropping AI agents (SIMA) into AI-generated worlds (Genie) and letting them interact. "The two AIs are kind of interacting in the minds of each other." This creates potentially infinite training environments, with Genie generating whatever scenarios SIMA needs in order to learn. It's an elegant solution to the training-data problem for embodied AI.
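DeepMind hasn't published code for this loop, so the sketch below is only a structural illustration: every class and method name here (`GenerativeWorld`, `Agent`, `step`, `learn`) is hypothetical. What it shows is the shape of the idea - the world model manufactures environments on demand, and the agent trains inside them.

```python
import random

class GenerativeWorld:
    """Stand-in for a Genie-style world model that generates environments on demand."""
    def reset(self, prompt):
        # The real system would generate interactive video frames from the prompt.
        return {"frame": 0, "prompt": prompt}

    def step(self, state, action):
        next_state = {"frame": state["frame"] + 1, "prompt": state["prompt"]}
        reward = random.random()            # placeholder task signal
        done = next_state["frame"] >= 100   # placeholder episode length
        return next_state, reward, done

class Agent:
    """Stand-in for a SIMA-style agent that acts from observations."""
    def act(self, state):
        return random.choice(["left", "right", "forward", "interact"])

    def learn(self, trajectory):
        pass  # a policy update from collected experience would go here

# The loop: the world model generates whatever scenario the agent needs next,
# so training data is effectively unlimited.
world, agent = GenerativeWorld(), Agent()
for episode in range(3):
    state, trajectory, done = world.reset(prompt="kitchen with movable objects"), [], False
    while not done:
        action = agent.act(state)
        state, reward, done = world.step(state, action)
        trajectory.append((state, action, reward))
    agent.learn(trajectory)
```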
On hallucinations, Hassabis makes a subtle but important point: the problem isn't that models are uncertain, it's that they don't know they're uncertain. AlphaFold outputs confidence scores; LLMs often don't. Better models "know more about what they know" and can introspect on their uncertainty. The fix requires using thinking steps to double-check outputs - systems that "stop, pause, and go over what they were about to say."
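Hassabis doesn't name a specific mechanism, but one simple and widely used proxy for "knowing what you know" is self-consistency: sample the model several times and treat agreement as a confidence score, abstaining below a threshold. In the sketch below, `ask_model` is a stub standing in for a real LLM call.

```python
import random
from collections import Counter

def ask_model(question: str) -> str:
    """Stub for an LLM call; replace with a real API client."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def answer_with_confidence(question: str, k: int = 10, threshold: float = 0.7):
    """Sample k answers; the agreement rate serves as a crude confidence score,
    loosely analogous to AlphaFold reporting confidence alongside predictions."""
    samples = [ask_model(question) for _ in range(k)]
    best, count = Counter(samples).most_common(1)[0]
    confidence = count / k
    if confidence < threshold:
        return None, confidence  # abstain rather than risk a hallucination
    return best, confidence

answer, conf = answer_with_confidence("What is the capital of France?")
print(answer, f"confidence={conf:.0%}")
```

Sampling agreement is a crude stand-in for genuine introspection, but it captures the direction Hassabis describes: the system pauses and checks itself before committing to an answer.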
The scaling debate gets nuanced treatment. DeepMind hasn't hit a wall - Gemini 3 shows significant improvements - but returns aren't exponential anymore. "There's a lot of room between exponential and asymptotic." His formula: 50% scaling, 50% innovation. Both are required for AGI.
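One way to read "room between exponential and asymptotic" is with toy growth curves: gains per 10x of compute can shrink without ever flatlining. The numbers below are illustrative only, not fitted scaling laws.

```python
import numpy as np

oom = np.arange(7)                            # orders of magnitude of extra compute
exponential = 2.0 ** oom                      # compounding gains: double per 10x compute
logarithmic = 10.0 * oom                      # fixed gain per 10x: diminishing but unbounded
asymptotic  = 100.0 * (1 - np.exp(-oom / 2))  # gains stall against a hard ceiling

for row in zip(oom, exponential, logarithmic, asymptotic):
    print("10^%d x compute  exp=%6.1f  log=%5.1f  asymptote=%5.1f" % row)
```

The middle curve is the regime Hassabis seems to be describing: each order of magnitude of scale still buys real improvement, just not a compounding one - which is why the other 50% has to come from innovation.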
8 Insights From DeepMind's CEO on AGI Development
- Jagged intelligence is the core AGI barrier - Models excel at PhD-level tasks while failing high school logic; consistency across domains is missing
- World models are essential for embodied AI - Spatial dynamics, intuitive physics, and sensorimotor experience can't be learned from text alone
- Genie + SIMA creates infinite training loops - AI agents in AI-generated worlds could solve the data problem for robotics
- Hallucinations stem from meta-ignorance - Models don't know what they don't know; they need AlphaFold-style confidence scores
- Scaling isn't dead, just not exponential - DeepMind operates on 50% scaling, 50% innovation; both needed for AGI
- Fusion is a root node problem - Partnership with Commonwealth Fusion Systems to accelerate clean energy via AI-assisted plasma containment
- Online learning is still missing - Current models don't continue learning after deployment; this is a critical gap
- Physics benchmarks needed for world models - Generated videos look realistic but aren't physics-accurate enough for robotics (a toy consistency check is sketched after this list)
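As a toy example of what a physics benchmark for generated video might check, the sketch below fits a quadratic to a tracked object's height over time and compares the recovered acceleration to gravity. The tracker input, units, and tolerance are all assumptions; a real benchmark would test far more than free fall.

```python
import numpy as np

def gravity_consistency(y_positions, fps=30.0, g=9.81, tol=0.15):
    """Fit y(t) = y0 + v0*t + 0.5*a*t^2 to an object's per-frame heights
    (in metres, assumed to come from a tracker) and check the recovered
    acceleration against gravity. Returns (passes, fitted_acceleration)."""
    t = np.arange(len(y_positions)) / fps
    coeffs = np.polyfit(t, y_positions, deg=2)  # coefficient of t^2 is 0.5 * a
    a_fit = 2.0 * coeffs[0]
    passes = abs(a_fit + g) / g < tol           # expect a_fit ~ -g for free fall
    return passes, a_fit

# Synthetic "generated video" trajectory: a ball dropped from 10 m.
t = np.arange(30) / 30.0
y = 10.0 - 0.5 * 9.81 * t**2
print(gravity_consistency(y))  # (True, ~ -9.81)
```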
What World Models Mean for the Path to AGI
Current AI is "jagged" - brilliant at narrow tasks, unreliable across domains. The path to AGI likely requires world models that understand physics and causality, not just language patterns. DeepMind's bet: train agents in AI-generated worlds until they develop intuition about how reality works.