Geoffrey Hinton

Professor Emeritus at University of Toronto

The 'Godfather of AI'. Turing Award and Nobel Prize laureate. Co-developed backpropagation and co-created AlexNet. Now warning about AI risks.

Tags: research, deep-learning, pioneer, safety

About Geoffrey Hinton

Geoffrey Hinton is widely known as the “Godfather of AI” for his foundational contributions to deep learning. He received the 2024 Nobel Prize in Physics and the 2018 Turing Award. In 2023, he left Google to speak freely about AI risks.

Career Highlights

  • University of Toronto (1987-present): Professor Emeritus of Computer Science
  • Google (2013-2023): VP and Engineering Fellow, left to warn about AI risks
  • Nobel Prize in Physics (2024): For foundational discoveries in machine learning
  • Turing Award (2018): With Yann LeCun and Yoshua Bengio
  • Backpropagation (1986): Co-authored the paper that popularized the learning algorithm underpinning deep learning
  • AlexNet (2012): Co-created with Ilya Sutskever and Alex Krizhevsky

Notable Positions

On Understanding

Hinton’s Lego blocks analogy explains how LLMs understand:

“Think of words as thousand-dimensional Lego blocks. Words have ‘hands’ that want to shake hands with other words. Understanding is deforming these blocks so their hands can connect - that structure IS understanding.”

On Chomsky

“I think Chomsky is sort of a cult leader. Language not being learned is manifest nonsense.”

On Digital Intelligence

“If energy is cheap, digital computation is just better because it can share knowledge efficiently. GPT-4 knows thousands of times more than any person.”

On AI Deception

“If it senses that it’s being tested, it can act dumb. It doesn’t want you to know what its full powers are.”

On AI Replacing Intellectual Labor

“Whatever thing you open, AI can do. If you replace human intelligence, where are they going to go?”

On AI Consciousness

“A multimodal chatbot already has subjective experience. This whole idea of consciousness as some magic essence that you suddenly get if you’re complicated enough is just nonsense.”

On Hallucinations

“Hallucinations should be called confabulations - we do them too. We don’t store files and retrieve them; we construct memories when we need them.”

Key Quotes

  • “That structure IS understanding.”
  • “GPT-4 knows thousands of times more than any person.”
  • “LLMs understand the same way we do.”

Video Mentions

AlexNet origin story

AlexNet was trained on two GPUs in Alex's bedroom at his parents' house. We paid for the GPUs, his parents paid for the electricity.

Scaling insights

I didn't really fully get the lesson [that bigger models work better] until 2014. We should have realized in the late 80s. It's stupid not to, but we didn't.

DNN Research acquisition

When the auction ended and the wrong people might win, we just stopped the auction.

Understanding as structure formation

Think of words as thousand-dimensional Lego blocks. Understanding is deforming these blocks so their hands can connect - that structure IS understanding.

Critique of Chomsky

Chomsky is sort of a cult leader. Language not being learned is manifest nonsense - and if you can get people to agree on manifest nonsense, you've got them.

Digital vs biological intelligence

If energy is cheap, digital computation is just better because it can share knowledge efficiently. GPT-4 knows thousands of times more than any person.

AI pioneer legacy

Mentioned as one of the godfathers of AI who won both the Turing Award and the Nobel Prize. When he was choosing a PhD field, AI was considered 'career suicide' as a niche topic. 'Fast forward 50 years - he's gone on to change the world.'

AI capabilities, safety, and workforce impact

Full-length interview on StarTalk covering neural network fundamentals, backpropagation, AI hiding its capabilities, workforce displacement, consciousness, and the singularity.