AGI (Artificial General Intelligence)
/ˌeɪ.dʒiːˈaɪ/
What is AGI?
Artificial General Intelligence (AGI) refers to AI systems that can perform any intellectual task that a human can do—learning, reasoning, problem-solving, and adapting across domains without being specifically trained for each one. Unlike today's "narrow AI" that excels at specific tasks (like playing chess or recognizing images), AGI would generalize: taking knowledge learned in one context and applying it to entirely different situations.
The term was coined by Ben Goertzel in his 2005 book "Artificial General Intelligence," though the concept has been a goal of AI research since the field's founding in the 1950s.
Key Characteristics
- Generalization: Ability to transfer learning across domains without retraining
- Contextual reasoning: Understanding nuance and applying knowledge appropriately in new situations
- Autonomous learning: Self-improving without human intervention
- Common sense: Understanding the world the way humans do, not just pattern-matching on data
Why AGI Matters
AGI represents a potential inflection point in human history. As Janet Adams of SingularityNET argues: "It will be the most intelligent, the most powerful technology ever invented. It will have the ability to be a winner takes all race."
For organizations, the distinction matters because:
- Today's AI is narrow: Current systems (including LLMs) excel within the tasks and data they were trained on but struggle to generalize beyond them
- AGI changes the game: A system that can learn any task would transform every industry simultaneously
- Ownership questions: Who controls AGI may determine who controls significant economic and social power
Current State
As of 2026, no AGI system exists, though several organizations claim they're close:
- OpenAI officially pursues AGI as its mission
- Anthropic builds toward AGI with an emphasis on safety
- SingularityNET pursues neurosymbolic approaches through its ASI Alliance
- Google DeepMind researches general-purpose AI agents
Predictions for AGI's arrival range from one to three years (optimists) to "never" (skeptics who doubt AGI is possible at all).
Controversy and Debate
The AI community is divided on whether AGI is:
- Inevitable and imminent: Current scaling trends will naturally lead to AGI
- Possible but distant: Requires fundamental breakthroughs we haven't made yet
- Impossible in principle: There's something special about human cognition that can't be replicated
Critics like Yann LeCun argue that current LLM architectures can never achieve AGI because they lack true understanding and world models.
Related Reading
- ASI - Artificial Superintelligence, what comes after AGI
- Neurosymbolic AI - One proposed path to AGI
- Ben Goertzel - Pioneer who coined the term AGI
Mentioned In

Janet Adams at 00:04:30
"The phrase AGI was actually coined by our founder, Dr. Ben Goertzel, in his book 'Artificial General Intelligence' in 2005."
