Ilya Sutskever
Tags: research, openai, safety, deep-learning
About Ilya Sutskever
Ilya Sutskever is a co-founder of Safe Superintelligence Inc. (SSI), a company focused on building safe superintelligent AI. He was previously a co-founder and Chief Scientist of OpenAI, where he led research from 2015 to 2024. He is also a co-inventor of AlexNet, the 2012 breakthrough that launched the deep learning revolution.
Career Highlights
- Safe Superintelligence Inc. (2024-present): Co-founder
- OpenAI (2015-2024): Co-founder and Chief Scientist
- Google Brain (2013-2015): Research Scientist
- AlexNet (2012): Co-inventor with Geoffrey Hinton and Alex Krizhevsky
- University of Toronto: PhD under Geoffrey Hinton
Notable Positions
On the Scaling Era Ending
Sutskever believes pure scaling has run its course:
"Is the belief really that if you just 100x the scale everything would be transformed? I don't think that's true."
On Eval Performance vs Reality
Comparing models to a student who excels through relentless drilling rather than innate aptitude, Sutskever argues that benchmark scores overstate real capability:
"Models are much more like the first student - technically brilliant but lacking the 'it factor' that makes for actual capability."
On the Research Cycle
He frames AI history as oscillating eras:
"2012-2020 was research, 2020-2025 was scaling, and now we're returning to research."
Key Quotes
- "The real reward hacking is human researchers too focused on evals."
- "Models generalize dramatically worse than people - it's super obvious."
- "Value functions might short-circuit the wait-until-completion problem."
Related Reading
- Generalization Gap - The fundamental limitation Sutskever identifies
- Scaling Laws - The paradigm he says is ending