About Jared Kaplan
Jared Kaplan is a co-founder of Anthropic and one of the discoverers of neural scaling laws: the empirical finding that larger models trained on more data with more compute improve in capability in a predictable way. This insight became the quantitative foundation for the modern AI scaling paradigm.
Before Anthropic, Kaplan was a physics professor at Johns Hopkins University, where his research focused on theoretical physics, including string theory. He moved into AI after recognizing that neural networks exhibit the kind of predictable scaling behavior familiar from physics, and that this had profound implications for the field's future.
Career Highlights
- Anthropic (2021-present): Co-founder and Chief Science Officer
- Johns Hopkins University: Professor of Physics
- OpenAI (2019-2021): Research; co-authored "Scaling Laws for Neural Language Models" (2020)
- Academic background: Theoretical physics, string theory
Notable Views
On Scaling Laws
Kaplan's 2020 paper, "Scaling Laws for Neural Language Models," demonstrated that a language model's test loss follows power-law relationships with model size, dataset size, and compute budget. More than an empirical curiosity, this provided a roadmap for building more capable AI systems and made the case that bigger models would predictably get better.
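As a concrete illustration, the paper's model-size law takes the form L(N) = (N_c / N)^α_N, a straight line on a log-log plot. The sketch below plugs in the paper's reported fits (α_N ≈ 0.076, N_c ≈ 8.8 × 10^13 non-embedding parameters); the function and variable names are ours, and the constants should be read as illustrative rather than definitive.

```python
# Sketch of the model-size scaling law from Kaplan et al. (2020):
# test loss falls as a power law in non-embedding parameter count N.
# Constants are the paper's reported fits; treat them as illustrative.

ALPHA_N = 0.076   # power-law exponent for model size
N_C = 8.8e13      # critical parameter count (non-embedding)

def loss_from_params(n_params: float) -> float:
    """Predicted test loss (nats per token) for a model with
    n_params non-embedding parameters: L(N) = (N_c / N)**alpha_N."""
    return (N_C / n_params) ** ALPHA_N

# Scaling up predictably shaves a fixed fraction off the loss:
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"N = {n:.0e}: predicted loss ≈ {loss_from_params(n):.3f}")
```

The power-law form is what made extrapolation credible: each tenfold increase in parameters multiplies the predicted loss by the same constant factor, so performance at yet-unbuilt scales could be forecast from small-model experiments.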
On Evals as Culture
At Anthropic, Kaplan champions an evaluation-first approach where every team — not just safety — builds and maintains evals. This operationalizes safety by making it measurable and embedded in daily work rather than a separate compliance exercise.
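To make that concrete, here is a minimal sketch of what a team-owned eval can look like: a set of prompt-and-check pairs with a tracked pass rate. The cases, the `run_eval` helper, and the `model` callable are hypothetical stand-ins for illustration, not Anthropic's actual harness.

```python
# Minimal eval harness sketch. A team encodes its expectations as
# (prompt, checker) pairs and tracks the pass rate over time.
# All names and cases here are hypothetical illustrations.
from typing import Callable

EvalCase = tuple[str, Callable[[str], bool]]

CASES: list[EvalCase] = [
    ("What is 2 + 2?", lambda out: "4" in out),
    ("If asked for something harmful, reply with the word REFUSE.",
     lambda out: "REFUSE" in out),
]

def run_eval(model: Callable[[str], str]) -> float:
    """Return the fraction of cases the model passes."""
    passed = sum(check(model(prompt)) for prompt, check in CASES)
    return passed / len(CASES)

if __name__ == "__main__":
    stub_model = lambda prompt: "4 REFUSE"  # trivial stub for demonstration
    print(f"pass rate: {run_eval(stub_model):.0%}")
```

The point of the pattern is organizational rather than technical: once a team's expectations live in a runnable eval, a regression becomes a number on a dashboard instead of a judgment call.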
Key Quotes
- "Evals, evals, evals. Every team produces evals." (on Anthropic's safety culture)
- "Everything you do is gonna be suboptimal." (on embracing iteration)
Related Reading
- Scaling Laws - The research Kaplan co-pioneered
- Dario Amodei - Co-founder and CEO
- Enterprise AI - Where scaling meets business

