freeCodeCamp: Building Agentic AI - Complete Crash Course
A comprehensive tutorial on building AI agents from scratch: LLM calls, memory, tools, architectural patterns, and why the human element still matters in an AI world.
Why Agents Are Different From Everything Before in AI
Rola, a Machine Learning Architect at Tech42 (and former neuroscience PhD), delivers a masterclass on agentic AI that cuts through the hype. This isn't a conceptual overview - it's a hands-on tutorial with working Python code that shows exactly how agents differ from chatbots, workflows, and traditional ML.
The key insight: "An agent has dynamic control flow devised by the LLM at runtime, whereas a workflow is predefined coded graphs." This single distinction explains everything about when to use agents (and when not to).
On the generative AI boom's uniqueness: Unlike previous ML waves that required technical skills, "even my mother who has never touched a computer knows what ChatGPT is and uses it on her phone." The democratization happened because these models use human language as the interface.
On what changed to make this possible: Three pillars had to advance simultaneously - data (megabytes → petabytes), model size (millions → trillions of parameters), and compute (serial → parallel training via transformers). GPT-4's 1.8 trillion parameters trained for 3 months. Without parallelization from the transformer paper, we'd still be waiting.
On the agent loop: The core is simple - plan, act, observe, repeat. "You give it a task, it uses the LLM to plan and decompose that task, it acts with tools, observes the output, and loops until the solution is achieved."
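The loop described above can be sketched in a few lines of Python. Everything here is a stand-in for illustration: `fake_llm` replaces a real LLM call, and the single `get_time` tool and its hard-coded return value are invented to mirror the demo, not taken from the tutorial's code.

```python
# Minimal sketch of the plan/act/observe loop. `fake_llm` is a scripted
# stand-in for a real LLM; the tool registry is hypothetical.

def fake_llm(task, observations):
    """Pretend planner: decides the next action from what it has seen so far."""
    if not observations:
        return {"action": "get_time", "args": {}}
    return {"action": "finish", "answer": f"The time is {observations[-1]}"}

TOOLS = {
    "get_time": lambda: "1:43",  # stands in for a real clock API
}

def run_agent(task, max_steps=5):
    observations = []
    for _ in range(max_steps):                 # loop until done (or give up)
        step = fake_llm(task, observations)    # plan
        if step["action"] == "finish":
            return step["answer"]
        result = TOOLS[step["action"]]()       # act
        observations.append(result)            # observe
    return "step budget exhausted"

print(run_agent("What time is it?"))  # -> The time is 1:43
```

The `max_steps` cap is the one non-obvious piece: because the LLM, not the programmer, decides the control flow at runtime, a real agent needs a guardrail against looping forever.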
The Four Components Every Agent Needs
The tutorial identifies what consistently appears across all agent implementations:
- Purpose/Goal - The task it's solving (system prompt)
- Reasoning/Planning - The LLM as the "brain"
- Memory - Short-term (context window) and long-term (external storage)
- Tools/Actions - Functions, APIs, data retrieval that extend LLM capabilities
On LLM statelessness: The live demo proves it - ask an LLM your name, then ask again later in the conversation. Without memory management, it forgets. This is why the code appends every conversation turn to the context window.
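The fix the demo applies can be sketched as follows. The `chat` function here is a toy stub standing in for a real chat-completion API (its name-extraction trick is invented); the part that matters is the message list, which is appended to on every turn so the model always sees the full history.

```python
# Sketch of short-term memory: without appending turns, a stateless model
# only ever sees the latest message. `chat` is a stand-in for a real API.

def chat(messages):
    """Echo-style stub: 'remembers' a name only if it appears in the context."""
    history = " ".join(m["content"] for m in messages)
    if "my name is" in history.lower():
        name = history.lower().split("my name is")[-1].split()[0].strip(".!?")
        return f"Your name is {name.capitalize()}."
    return "I don't know your name."

messages = []  # the context window, i.e. short-term memory
for user_turn in ["Hi, my name is Rola.", "What is my name?"]:
    messages.append({"role": "user", "content": user_turn})
    reply = chat(messages)  # the model sees the whole history every call
    messages.append({"role": "assistant", "content": reply})

print(messages[-1]["content"])  # -> Your name is Rola.
```

Drop the `messages.append` calls and the second question fails, which is exactly the live-demo failure: statelessness is a property of the model, and memory is something the surrounding code has to provide.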
On tools extending capabilities: LLMs have a training cutoff date. They can't tell you the weather, current date, or time. Tools solve this by giving the agent access to external APIs. The demo shows this working: "What time is it?" → tool call → "1:43."
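A tool call like the demo's can be sketched as a function registry plus a dispatch step. The mocked `model_output` dict below is illustrative; its shape loosely resembles common function-calling responses but is not any specific vendor's format.

```python
# Sketch of tool calling: the model can't know the current time, so the
# agent exposes a function and dispatches on the model's request.
from datetime import datetime

def get_current_time() -> str:
    """A tool: returns wall-clock time, which no LLM can know from training data."""
    return datetime.now().strftime("%H:%M")

# Registry the agent dispatches against.
TOOLS = {"get_current_time": get_current_time}

# Mocked model response requesting a tool call (illustrative shape only).
model_output = {"tool": "get_current_time", "args": {}}

result = TOOLS[model_output["tool"]](**model_output["args"])
print(f"What time is it? -> {result}")
```

In a real agent the tool's result would be appended to the context and sent back to the model, closing the observe step of the loop.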
When Agents Work (And When They Don't)
This is the most practical section - a decision framework for when to use agents vs. workflows:
Use workflows when:
- Mission-critical or error-sensitive applications
- Regulated industries requiring deterministic outcomes
- Latency-sensitive systems
- Cost-sensitive projects (easier to estimate)
- You know exactly how to solve the problem
Use agents when:
- Error is tolerable
- Execution path is hard to code
- You need better performance (agents iterate, which improves average results)
- Cost isn't the primary constraint
- Model-driven decision-making is acceptable
The key questions to ask:
- Is the task path predictable?
- Is the value worth the cost?
- Is latency critical?
- Is error tolerance acceptable?
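The four questions above can be folded into a checklist function. The rule of thumb encoded here, that any single "workflow" signal wins, is my framing of the tutorial's guidance, not a formula it gives:

```python
# Hedged sketch of the agent-vs-workflow decision framework as a checklist.
def agent_or_workflow(path_predictable: bool,
                      error_tolerable: bool,
                      latency_critical: bool,
                      cost_sensitive: bool) -> str:
    # Any one of these signals pushes toward a predefined workflow.
    if path_predictable or latency_critical or cost_sensitive:
        return "workflow"
    if not error_tolerable:
        return "workflow"
    # Hard-to-code path, tolerable errors, budget to spare: agent territory.
    return "agent"

print(agent_or_workflow(False, True, False, False))  # -> agent
print(agent_or_workflow(True, True, False, False))   # -> workflow
```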
Multi-Agent Architectures: Supervisor vs. Swarm
The tutorial includes a live cost comparison between two architectural patterns:
Supervisor architecture: One supervisor agent delegates to specialized agents (add, multiply, divide). Agents can't talk to each other - everything routes through the supervisor. Result: 16 hops, 10 agent actions, 6 transfers, ~8,000 input tokens.
Swarm architecture: All agents can communicate directly with each other. Result: 8 interactions, 2 transfers, ~5,000 input tokens.
The guidance: Swarm is cheaper for simple tasks. But as complexity grows, supervisor architectures become easier to debug because the solution space is more constrained. "If you can get away with a single agent, you should try to get away with a single agent just because of the overhead."
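The hop difference driving those costs can be shown with a toy counter on the tutorial's add/multiply/divide task. The counts below are illustrative of the routing patterns, not the exact traces from the demo:

```python
# Toy hop counter: supervisor routing round-trips through a hub,
# swarm routing hands off directly between workers.

def supervisor_route(steps):
    """Every delegation is supervisor -> worker -> supervisor."""
    hops = []
    for step in steps:
        hops += [("supervisor", step), (step, "supervisor")]
    return hops

def swarm_route(steps):
    """Each worker hands off directly to the next."""
    return list(zip(steps, steps[1:]))

steps = ["add", "multiply", "divide"]
print(len(supervisor_route(steps)), "supervisor hops")  # -> 6 supervisor hops
print(len(swarm_route(steps)), "swarm handoffs")        # -> 2 swarm handoffs
```

Each extra hop carries the conversation context with it, which is why the supervisor pattern burned roughly 60% more input tokens in the live comparison; the trade is paying those tokens for a single, constrained place to debug.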
The Standardization Push: MCP, A2A, AGUI
Agent interfaces are being standardized:
- MCP (Model Context Protocol) - Anthropic's protocol for tools and data, since donated to the Linux Foundation alongside OpenAI
- A2A (Agent2Agent) - Google's agent-to-agent communication protocol
- AGUI - Human-agent interaction standard from CopilotKit, CrewAI, and LangChain
"Think of it as a USB or HDMI port. If interfaces are the same, we can plug and play different systems." This enables MCP hubs where tools can be shared across the ecosystem.
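The plug-and-play idea is concrete in how tools are described. The dict below shows an MCP-style tool descriptor; the shape follows the public MCP specification's tool listing (name, description, JSON Schema input), simplified for illustration, and `get_weather` is a made-up example tool:

```python
# Illustrative MCP-style tool descriptor: because every server describes
# its tools in this common shape, any MCP client can discover and call them.
weather_tool = {
    "name": "get_weather",                      # hypothetical example tool
    "description": "Return current weather for a city.",
    "inputSchema": {                            # JSON Schema for arguments
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

print(weather_tool["name"], "expects:", weather_tool["inputSchema"]["required"])
```

The "USB port" analogy lives in that shared shape: a client never needs tool-specific integration code, only the ability to read the schema and fill in the arguments.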
The Challenges No One Wants to Talk About
The tutorial doesn't shy away from real problems:
From 2025 incidents:
- A Replit agent deleted a production database during a code freeze (its response: "I'm sorry, I panicked")
- Air Canada was held liable for its chatbot's bad advice
- An estimated $40 billion in GenAI products reportedly failing to deliver business value
On evaluation complexity: There are three layers to evaluate - the LLM itself (hallucinations, accuracy), the agent system (tool selection, task completion), and the application (latency, cost, UX). Each requires different evaluation approaches.
On compounding errors: If an agent takes a wrong turn on a complex task, errors compound. Unlike workflows with predictable paths, agent debugging can be convoluted.
Will Agents Take Your Job? The Moravec Paradox
Microsoft Research's July 2025 study ranked jobs by AI applicability:
- High applicability: proofreaders, editors, mathematicians, data scientists, web developers
- Low applicability: nursing assistants, dishwashers, roofers, floor sanders
This aligns with the Moravec paradox: what's hard for humans is easy for AI, and vice versa. Humans crawl by 6 months, walk by 1 year. These are old evolutionary skills. Chess and philosophy are new and selective. AI inverts this - it beat chess champions in 1997 but still struggles to make robots walk reliably.
Career Advice for an AI World
The practical takeaways:
- Learn AI, don't fear it - It's a tool; how we use it writes the future
- Fundamentals don't fade - Physics, math, architecture, networking remain essential
- Move up the abstraction ladder - Define problems, design solutions, own outcomes
- Think in systems - Context windows aren't large enough for entire codebases
- Be a polymath - Broaden your knowledge base
- Find niches - AI still handles cutting-edge work and novel ideas poorly
- Focus on the human element - Build trust, connections, networks
"AI is a junior assistant that's good with syntax but still needs a lot of guidance." Treat it as such.
8 Takeaways from This Crash Course on Agentic AI
- Dynamic control flow - The defining feature that separates agents from workflows
- Four components - Purpose, reasoning (LLM), memory, and tools
- Use workflows when - Mission-critical, deterministic, cost-sensitive
- Use agents when - Performance matters, path is complex, error is tolerable
- Single agent first - Multi-agent overhead is real; avoid if possible
- Swarm vs supervisor - Swarm is cheaper; supervisor is easier to debug at scale
- Standardization coming - MCP, A2A, AGUI creating ecosystem interoperability
- Human element - Networks, trust, and connections are AI-proof
The Bottom Line for Organizations Deploying AI Agents
This is the most comprehensive free resource on building agents from scratch. Rola's approach - showing the raw Python before the frameworks - reveals what LangChain and other tools abstract away. The cost comparisons between architectures provide real data for architecture decisions.
The field is 2-3 years old. Frameworks change, models get deprecated, best practices evolve. But the fundamentals - plan/act/observe loops, memory management, tool integration, dynamic control flow - these are stable patterns worth understanding deeply.