Sam Altman: Intelligence Will Be a Utility Like Water
Why Sam Altman’s “Flood the World” Strategy Changes Everything
Sam Altman sat down with Larry Fink (BlackRock CEO and OpenAI board member) at the AI Infrastructure Forum for a conversation that went well beyond the usual AI hype cycle. Two claims stood out: a 1000x cost reduction in AI reasoning in roughly 16 months, and a prediction that more of the world’s cognitive capacity will sit inside data centers than outside them by late 2028.
On crossing the threshold into economic utility: “At some point in the last few months, we really have crossed a threshold into major economic utility of these models. My job shifted from doing direct technical work to managing a team of agents doing this work.” Altman describes the current capability trajectory: AI can now handle multi-hour tasks. Soon it’ll be multi-day, then multi-week. After that, AI systems will be “connected to your life, to your company, proactively thinking, working all the time,” trusted the way you would trust a senior employee.
On startups not wanting employees: The mental shift is concrete. Startups no longer talk about how many employees they need — they ask how much compute they can reserve. “Can I do a cloud deal? Can I get this many tokens?” Bigger companies are following: engineering orgs are doubling or tripling what they plan to ship this year. That has never happened before.
On AGI losing its meaning: Altman says the word “has ceased to have much meaning.” He offers two more useful thresholds instead: (1) when more cognitive capacity exists inside data centers than outside them — “maybe late 2028” — and (2) when CEOs, presidents, and Nobel laureates can’t do their jobs without heavy AI use. The first is a physical reality shift. The second is a workflow reality shift.
On 1000x cost reduction: “From our first reasoning model o1 to GPT-5.4, to get the same answer to a hard problem has been a reduction in cost of about 1000x.” In roughly 16 months. This isn’t just model improvement — kernel engineers, power engineers, and data center designers all contributed efficiency gains simultaneously.
On intelligence as utility: OpenAI’s top guiding principle is flooding the world with intelligence — “too cheap to meter,” borrowing the phrase from nuclear energy’s unfulfilled promise. “We see a future where intelligence is a utility like electricity or water and people buy it from us on a meter and use it for whatever they want.” The alternative — capacity constraints driving high prices — means AI “goes to rich people” or governments make central planning decisions that “almost always go badly.”
On the $110B round and custom chips: The funding round is 4x larger than Aramco’s record IPO. OpenAI is building a specialized inference-only chip — not the fastest, but the cheapest per watt. The bet: in an energy-constrained world with massive agent demand, efficiency per watt matters more than raw speed.
On India’s zero-person startups: Codex usage in India 10x’d in months. Indian founders told Altman they’re building “zero-person startups” — a prompt that writes software, handles customer support, and does legal work while the founder goes on vacation. Indian companies aggressively locked in compute capacity, refusing to let Altman leave the room without signing deals.
On managing abundance vs. scarcity: “For centuries, maybe millennia, we have learned a lot about how to structure society to manage scarcity. Almost none of that helps us as we have to quickly learn towards managing abundance.” Altman acknowledges a possible paradox: quality of life goes up while GDP goes down in a deflationary world where cognitive capacity lives in data centers.
6 Key Takeaways from Altman at AI1F
- 1000x cost reduction from o1 to GPT-5.4 in ~16 months — and still early in the efficiency curve
- More cognitive capacity in data centers than outside by late 2028 — Altman’s most concrete AGI-adjacent prediction
- Startups want compute, not employees — the mental shift is already complete for new companies, big companies following
- Intelligence as utility — OpenAI’s guiding principle is making intelligence too cheap to meter, like electricity or water
- Custom inference chip by end of 2026 — optimized for cheapest-per-watt, not fastest, targeting agent workloads
- Democratic AI — technology decisions of this magnitude belong to society through democratic processes, not companies
What This Means for Organizations Building with AI
The 1000x cost reduction in 16 months is the number that should reshape every AI budget conversation. If that trajectory continues — and Altman says they’re “still so early” — then the economics of deploying AI agents at scale become fundamentally different every quarter. Organizations that wait for “the right time” to adopt AI agents are optimizing against a moving target that’s accelerating away from them. The question isn’t whether AI will be cheap enough to deploy everywhere — it’s whether your organization will have the workflows, data, and processes ready when it is.
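To make “fundamentally different every quarter” concrete, here is a small back-of-the-envelope sketch. It takes the 1000x figure Altman quoted and the roughly 16-month window at face value, and assumes a smooth exponential decline in cost purely for illustration (real pricing moves in discrete steps, so treat the outputs as rough implied rates, not a forecast):

```python
import math

# Illustrative arithmetic only: the inputs come from Altman's quoted claim
# (~1000x cost reduction over roughly 16 months), not from published pricing data.
total_reduction = 1000.0   # cost of the same answer at the start vs. the end of the window
months = 16.0

# Assume a smooth exponential decline in cost (a simplifying assumption).
monthly_factor = total_reduction ** (1.0 / months)            # implied per-month cost reduction
quarterly_factor = total_reduction ** (3.0 / months)           # implied per-quarter cost reduction
halving_time_months = math.log(2) / math.log(monthly_factor)   # implied time for cost to halve

print(f"Implied monthly cost reduction:   {monthly_factor:.2f}x")   # ~1.54x
print(f"Implied quarterly cost reduction: {quarterly_factor:.2f}x")  # ~3.6x
print(f"Implied cost-halving time:        {halving_time_months:.1f} months")  # ~1.6 months
```

On those assumptions, the same workload costs roughly a third as much every quarter, which is why a fixed annual AI budget buys dramatically more capacity at the end of the year than at the start — and why waiting for “the right time” keeps moving the target.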