Ilya Sutskever

Co-founder at Safe Superintelligence Inc.

Co-founder of Safe Superintelligence Inc. Former OpenAI Chief Scientist and co-inventor of AlexNet. One of deep learning's founding figures.

research openai safety deep-learning

About Ilya Sutskever

Ilya Sutskever is a co-founder of Safe Superintelligence Inc. (SSI), a company focused on building safe superintelligent AI. He was previously co-founder and Chief Scientist of OpenAI, where he led research from 2015 to 2024. He is also one of the co-inventors of AlexNet, the 2012 breakthrough that launched the deep learning revolution.

Career Highlights

  • Safe Superintelligence Inc. (2024-present): Co-founder
  • OpenAI (2015-2024): Co-founder and Chief Scientist
  • Google Brain (2013-2015): Research Scientist
  • AlexNet (2012): Co-inventor with Geoffrey Hinton and Alex Krizhevsky
  • University of Toronto: PhD under Geoffrey Hinton

Notable Positions

On the Scaling Era Ending

Sutskever believes pure scaling has run its course:

“Is the belief really that if you just 100x the scale everything would be transformed? I don’t think that’s true.”

On Eval Performance vs Reality

Drawing on an analogy to two kinds of students, one who drills competition problems and one with broader ability, he argues models resemble the former:

“Models are much more like the first student - technically brilliant but lacking the ‘it factor’ that makes for actual capability.”

On the Research Cycle

He frames AI history as oscillating eras:

“2012-2020 was research, 2020-2025 was scaling, and now we’re returning to research.”

Key Quotes

  • “The real reward hacking is human researchers too focused on evals.”
  • “Models generalize dramatically worse than people - it’s super obvious.”
  • “Value functions might short-circuit the wait-until-completion problem.”

Video Mentions

Eval vs real capability gap

Models are like hyper-specialized competition students: “they practiced 10,000 hours for competitive programming but lack the ‘it factor’ that makes for actual capability.”

End of scaling era

“Is the belief really that if you just 100x the scale everything would be transformed? I don’t think that’s true. We’re back in the age of research.”

RL training limitations

“The real reward hacking is human researchers who are too focused on evals.”