Axios · December 5, 2025

Google DeepMind's Demis Hassabis with Axios' Mike Allen

Nobel laureate Demis Hassabis shares his 5-10 year AGI timeline, explains why 1-2 transformer-level breakthroughs are still needed, and discusses AI safety.


How Demis Hassabis Sees the Path to AGI

This is Demis Hassabis at his most candid: 400 days after his Nobel, reflecting on what that platform enables and where AI is actually headed. No hype, just a clinical assessment from someone who has been building toward AGI for decades.

The scientific method as a competitive advantage is the thread running through everything. Hassabis distinguishes DeepMind not by resources (though Google has those) but by rigor: "We blend world-class research with world-class engineering with world-class infrastructure. You need all three to be at the frontier."

This isn't empty positioning. When asked about the 2017-2018 pivot toward LLMs, he's refreshingly honest: he was agnostic about the approach. DeepMind had Chinchilla, Sparrow, AlphaZero-style RL systems, and neuroscience-inspired architectures all running in parallel. When scaling started showing results, they shifted resources pragmatically. "If you're a true scientist, you can't get too dogmatic about some idea you have."

The AGI timeline is specific: 5-10 years, but with a high bar. His definition requires "all the cognitive capabilities we have" including invention and creativity. Current models are "jagged intelligences" - PhD-level in some areas, flawed in others. True AGI means cross-the-board consistency plus capabilities that don't exist yet: continual learning, online learning, long-term planning.

He estimates "one or two more transformer-level breakthroughs" are still required beyond scaling. Not incremental innovation - fundamental advances.

The multimodal point is underrated. Hassabis says the most astonishing capability getting too little attention is video understanding. He tested Gemini on Fight Club, asking about the significance of a character removing his ring before a fight - and got "a very interesting philosophical point" about symbolically leaving everyday life behind. That's conceptual understanding of visual narrative, not pattern matching.

On catastrophic risk, he's measured: P(doom) is "non-zero" but he dismisses precise percentages as "nonsense because no one knows." What he does commit to: if it's non-zero, "you must put significant resources and attention on that."

The capitalism angle is interesting - he argues enterprise customers demanding guarantees will naturally reward responsible providers. A model that goes off the rails loses business. Whether that incentive structure holds for AGI-level systems is an open question he doesn't fully address.

On the China race: the lead is "months, not years." Chinese labs haven't shown algorithmic innovation beyond the frontier, but they're excellent at fast-following. The gap is narrowing.

8 Insights From Hassabis on AGI Timelines and Safety

  • AGI: 5-10 years - Requires all human cognitive capabilities including invention; current models are "jagged intelligences"
  • 1-2 breakthroughs still needed - Scaling alone probably won't get there; expects transformer-level advances
  • Multimodal is underrated - Video understanding shows conceptual grasp of narrative, not just pattern recognition
  • Scientific method is the edge - Pragmatic approach let DeepMind pivot when scaling showed results
  • P(doom) is non-zero - Refuses to quantify but commits significant resources to safety
  • China lead: months, not years - They fast-follow well but haven't shown frontier algorithmic innovation
  • Nobel platform matters - Opens doors for advocacy on responsible AI development
  • Agents coming but still unreliable - Within a year, models should be close to handling fully delegated tasks, a step toward his universal-assistant vision

What This Means for AI Development and Research

A Nobel laureate estimates AGI in 5-10 years but says 1-2 transformer-level breakthroughs are still required. Current models are "jagged intelligences" - PhD-level in some areas, flawed in others. The path requires consistency across domains plus capabilities that don't exist yet: continual learning, online learning, long-term planning.
