
AGI (Artificial General Intelligence)

Pronunciation

/ˌeɪ.dʒiː.ˈaɪ/

Also known as: Artificial General Intelligence, General AI, Strong AI, Human-Level AI

What is AGI?

Artificial General Intelligence (AGI) refers to AI systems that can perform any intellectual task that a human can do—learning, reasoning, problem-solving, and adapting across domains without being specifically trained for each one. Unlike today's "narrow AI" that excels at specific tasks (like playing chess or recognizing images), AGI would generalize: taking knowledge learned in one context and applying it to entirely different situations.

The term was coined by Ben Goertzel in his 2005 book "Artificial General Intelligence," though the concept has been a goal of AI research since the field's founding in the 1950s.

Key Characteristics

  • Generalization: Ability to transfer learning across domains without retraining
  • Contextual reasoning: Understanding nuance and applying knowledge appropriately in new situations
  • Autonomous learning: Self-improving without human intervention
  • Common sense: Understanding the world the way humans do, not just pattern-matching on data

Why AGI Matters

AGI represents a potential inflection point in human history. As Janet Adams of SingularityNET argues: "It will be the most intelligent, the most powerful technology ever invented. It will have the ability to be a winner takes all race."

For organizations, the distinction matters because:

  1. Today's AI is narrow: Current systems (including LLMs) do one thing well but can't generalize
  2. AGI changes the game: A system that can learn any task would transform every industry simultaneously
  3. Ownership questions: Who controls AGI may determine who controls significant economic and social power

Current State

As of 2026, no AGI system exists, though several organizations claim they're close:

  • OpenAI officially pursues AGI as its mission
  • Anthropic builds toward AGI with emphasis on safety
  • SingularityNET pursues neurosymbolic approaches through the ASI Alliance
  • Google DeepMind researches general-purpose AI agents

Predictions for AGI's arrival range from one to three years (optimists) to "never" (skeptics who doubt AGI is possible at all).

Controversy and Debate

The AI community is divided on whether AGI is:

  • Inevitable and imminent: Current scaling trends will naturally lead to AGI
  • Possible but distant: Requires fundamental breakthroughs we haven't made yet
  • Impossible in principle: There's something special about human cognition that can't be replicated

Critics like Yann LeCun argue that current LLM architectures can never achieve AGI because they lack true understanding and world models.

Mentioned In

Janet Adams at 00:04:30

"The phrase AGI was actually coined by our founder, Dr. Ben Goertzel, in his book 'Artificial General Intelligence' in 2005."

Dario Amodei at 00:03:00

"At Davos 2026: 'We'll have a model that can do everything a human could do at the level of a Nobel laureate across many fields by 2026-27.'"

Demis Hassabis at 00:05:30

"I think there may be one or two missing ingredients. It remains to be seen how the self-improvement loop works without a human in the loop."
