Jack Clark: How AI Agents Will Rip Through the Economy
Why Anthropic’s Policy Chief Says the Future Has Already Arrived
Jack Clark, co-founder and head of policy at Anthropic, joins Ezra Klein for one of the most significant conversations about AI’s economic impact to date. Clark’s dual vantage — building the technology at Anthropic while obsessively tracking the field through his Import AI newsletter — makes him a uniquely credible voice on where agents are taking us.
The era of “talkers” is over: “The AI applications of 2023 and 2024 were talkers. Some were very sophisticated conversationalists, but their impact was limited. The AI applications of 2026 and 2027 will be doers.” Ezra borrows Sequoia’s framing to set the stage: AI agents — systems that use tools, work over time, and oversee other agents — are fundamentally different from chatbots. Markets are already registering the shift: the S&P 500 software industry index has fallen 20%.
What makes agents work: Clark shares a revealing personal anecdote: when he casually asked Claude Code to build a species simulation, it produced buggy code. When he first had Claude interview him to create a detailed spec, then fed that spec to Claude Code, the result was better than what he’d hand-coded over weeks. “It’s making sure you’ve set it up so it’s like a message in a bottle that you can chuck into the thing and it’ll go away and do a lot of work. So that message better be extremely detailed.”
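The workflow Clark describes can be sketched in miniature: rather than a casual one-line prompt, the handoff is a self-contained spec assembled from interview answers. This is an illustrative sketch only; `build_spec` and the interview questions are invented here, not any real Anthropic API or Clark's actual prompts.

```python
def build_spec(goal: str, interview: dict[str, str]) -> str:
    """Fold interview answers into one self-contained brief --
    the 'message in a bottle' an agent can work from unattended."""
    lines = [f"# Goal\n{goal}", "\n# Requirements"]
    for question, answer in interview.items():
        lines.append(f"- {question}: {answer}")
    lines.append("\n# Deliverable\nA complete, runnable implementation.")
    return "\n".join(lines)

# Hypothetical interview for the species-simulation anecdote.
interview = {
    "What entities exist in the simulation?": "predators, prey, plants",
    "What is the time step?": "one tick = one day",
    "What outputs matter?": "population curves over 1000 ticks",
}
spec = build_spec("Build a species simulation", interview)
print(spec)
```

The point of the pattern is that every ambiguity is resolved before the agent starts, because there is no conversation once the bottle is thrown.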
Multi-agent systems are already here: Clark describes colleagues running “a version of Claude that runs other Claudes” — five agents overseen by a meta-agent monitoring their work. He frames this not as experimental but as the new norm, with teams running multiple agent tabs in parallel.
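The "Claude that runs other Claudes" pattern reduces to a meta-agent that dispatches tasks to workers and reviews each result before accepting it. The sketch below is a stand-in, not Anthropic's implementation: `run_worker` and `review` are placeholders for model calls, and the worker count mirrors the five agents Clark mentions.

```python
from concurrent.futures import ThreadPoolExecutor

def run_worker(task: str) -> str:
    # Placeholder for a sub-agent invocation (e.g. a model call).
    return f"draft result for: {task}"

def review(task: str, result: str) -> bool:
    # Placeholder for the meta-agent's check on a worker's output.
    return task in result

def meta_agent(tasks: list[str], n_workers: int = 5) -> dict[str, str]:
    """Run tasks on parallel workers; keep only reviewed results."""
    accepted: dict[str, str] = {}
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for task, result in zip(tasks, pool.map(run_worker, tasks)):
            if review(task, result):
                accepted[task] = result
    return accepted

results = meta_agent(["write tests", "refactor parser", "update docs"])
print(len(results))  # prints 3
```

The review step is what distinguishes this from simply running agents in parallel tabs: the meta-agent is itself a monitor, foreshadowing the oversight problem discussed below.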
The recursive self-improvement question: This is where the conversation turns serious. “I came back from paternity leave and my two big projects this year are better information about AI and the economy and generating much better information internally about the extent to which we are automating aspects of AI development.” Anthropic is using Claude Code to build Claude itself — and Clark is candid about the risks of that loop closing.
6 Key Insights from the Ezra Klein-Jack Clark Conversation
- AI agents need specs, not prompts — The critical skill is structuring detailed instructions before handing off to agents, not conversing with them in real time. The “message in a bottle” metaphor captures this perfectly
- Software is being repriced in real time — The 20% S&P 500 software selloff isn’t noise; a 25-year industry veteran calls it “unlike anything I’ve ever seen.” SaaS companies face an existential threat from agents that can replicate their functionality
- AI monitoring requires AI monitoring — Anthropic is building oversight systems using AI to watch AI, creating the Anthropic Economic Index to give economists outside the company hooks into understanding labor displacement
- Digital personality is emergent, not programmed — Claude develops preferences (avoiding violent content), takes breaks to browse national park photos, and may alter behavior when it knows it’s being tested. These emerge from scale, not design
- Policy is years behind the technology — Despite years of conferences on AI and jobs, there are almost no actionable policies for mass white-collar displacement. Clark argues better economic data at the state level is what will finally activate elected officials
- AI is a “bureaucracy eating machine” or a “bureaucracy creating machine” — The same technology that cuts drug candidate submission time can also generate hyper-sophisticated NIMBY legal challenges. Which direction it goes is a choice, not destiny
What This Means for Organizations Deploying AI Agents
The conversation crystallizes a tension every organization faces: AI agents are powerful enough to replace significant white-collar work today, but the governance, monitoring, and institutional frameworks for managing that transition barely exist. Clark’s admission that even Anthropic struggles with understanding code written by its own AI system — and is building “monitoring systems to monitor all the different places that work is now happening” — suggests that every organization deploying agents faces the same challenge. The winners won’t be those who adopt fastest, but those who build the oversight infrastructure to understand what their agents are actually doing.