Agentic Coding

/eɪˈdʒentɪk ˈkoʊdɪŋ/

Also known as: agentic engineering, agent-assisted development, AI pair programming


What is Agentic Coding?

Agentic coding is a software development approach where AI agents (like Claude Code, Codex, or Cursor) autonomously write, test, and iterate on code while a human developer focuses on architecture, system design, and taste. Unlike simple autocomplete or chat-based AI coding, agentic coding often involves running multiple AI agents in parallel, each working on a different part of a project.

The term was popularized by developers like Peter Steinberger who distinguish it from "vibe coding"—a more casual approach where developers prompt AI without rigorous verification.

Key Characteristics

Developer as Architect

In agentic coding, the human focuses on:

  • System design and architecture
  • Defining verification loops (tests, linting)
  • Taste-checking outputs
  • Directing agent attention

"I'm the architect. Codex does the line-by-line understanding." — Peter Steinberger

Parallel Agent Execution

Advanced practitioners run 5-10 agents simultaneously:

"I constantly jump around. One main project has my focus, and satellite projects also need attention—maybe I spend 5 minutes, it does something for half an hour, and I try it."

Closed Feedback Loops

The critical difference from vibe coding is that agentic coding requires verification:

"You have to close the loop. The agent needs to be able to debug and test itself."

Agentic Coding vs Vibe Coding

| Aspect | Agentic Coding | Vibe Coding |
| --- | --- | --- |
| Verification | Automated tests, linting | Manual checking |
| Developer role | Architect | Prompter |
| Agent count | Multiple, in parallel | Usually one |
| Code review | Architecture-focused | Line-by-line |
| Hours worked | Potentially higher | Variable |

Workflow Example

  1. Design phase: Discuss feature with agent, explore options
  2. Architecture: Decide on approach, file structure, interfaces
  3. Delegation: "Build this feature, run full gate when done"
  4. Parallel work: Move to another agent/feature while first cooks
  5. Verification: Agent runs tests, reports results
  6. Integration: Merge into codebase if tests pass
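The "full gate" in step 3 can be as simple as a script the agent must pass before handing work back. A minimal sketch, assuming your checks are ordinary shell commands (the commented-out lint and test steps are placeholders for whatever your project actually uses):

```python
# Sketch of a "full gate": run each check command in order and stop at
# the first failure. The specific commands are illustrative placeholders.
import subprocess

GATE = [
    ["python", "-m", "compileall", "-q", "."],  # does everything parse?
    # ["ruff", "check", "."],                   # lint, if installed
    # ["pytest", "-q"],                         # tests, if installed
]

def run_gate(steps) -> bool:
    for cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"gate failed at: {' '.join(cmd)}")
            print(result.stdout + result.stderr)
            return False
    return True

print("gate passed" if run_gate([["python", "--version"]]) else "gate failed")
```

Because the gate is a single pass/fail signal, the agent can run it itself (step 5) and the human only reviews work that has already cleared it.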

Tools for Agentic Coding

  • Claude Code: Anthropic's terminal-based agent
  • Codex: OpenAI's agent, praised for thorough context reading
  • Cursor: IDE-integrated agent with fast iteration
  • Windsurf: Alternative agent IDE

Why Harnesses Matter More Than Models

A key insight from production agent engineering is that the infrastructure around an agent — acceptance baselines, execution boundaries, feedback signals, and fallback mechanisms — determines system stability more than raw model capability. As Tw93 documents in his deep-dive on agent architecture: "Using a more expensive model doesn't always yield the massive improvements you'd expect. Instead, the quality of your harness and validation tests has a far greater impact on success rates."

This aligns with Karpathy's observation that agent failures are usually "skill issues" — poor instructions, inadequate memory tools, or suboptimal coordination — not capability gaps. The engineering discipline of agentic coding is precisely about building these harnesses: context layering to prevent signal dilution, ACI-principled tool design, structured memory systems, and evaluation frameworks that catch regressions before deployment.
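One way to picture a harness is as a wrapper that enforces an acceptance baseline and falls back (here, by escalating to a stronger model) when an output fails it. This is a hypothetical sketch of the idea, not Tw93's or any tool's actual implementation; `cheap_model` and `strong_model` are stand-ins.

```python
# Sketch of a harness: an acceptance baseline plus a fallback chain.
# Both "models" are hypothetical stubs standing in for real LLM calls.

def acceptance(result: str) -> bool:
    """Baseline every output must clear, regardless of which model made it."""
    return result.strip().startswith("SUMMARY:")

def cheap_model(task: str) -> str:
    return "quick draft"          # stub output that fails the baseline

def strong_model(task: str) -> str:
    return "SUMMARY: " + task     # stub output that clears the baseline

def harnessed(task: str) -> str:
    for model in (cheap_model, strong_model):  # escalation as fallback
        result = model(task)
        if acceptance(result):
            return result
    raise RuntimeError("no model cleared the acceptance baseline")

print(harnessed("refactor the parser"))
```

The harness, not the model, decides what counts as done, which is why tightening `acceptance` tends to move success rates more than swapping in a pricier model.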

Key Quotes

"Surprise: using agentic coding makes you a better coder because you have to think harder about your architecture so that it's easier to verify."

"Now that all the mundane stuff of writing code is automated away, I can move so much faster. But it's mentally even more taxing because I'm managing 5-10 agents."
