AGI Grid: Collective AGI

AGI has no single, formal definition in academia or in practice. Instead, it is an interpretive category, shaped by the perspectives of researchers, practitioners, and communities. Some define AGI as human-level intelligence in machines, others as the capacity to perform any intellectual task, while still others emphasize adaptability, autonomy, or open-ended learning. None of these perspectives are strictly wrong - they highlight different facets of what "general intelligence" could mean. AGI is less a fixed technical construct and more a conceptual horizon, interpreted differently depending on disciplinary focus, institutional goals, or cultural assumptions.

Rather than accept one rigid definition, we have framed our own goal to chase. Our framing is a moving target. We expect it to evolve as technologies mature, as new architectures are proven, and as collective systems demonstrate broader adaptability.

For now, our working framing is:

🤖 Artificial General Intelligence (AGI)

  • Artificial General Intelligence (AGI) refers to cognitive systems that can learn, reason, plan, and act across domains with generality, adaptability, and degrees of autonomy comparable to human general intelligence. Unlike narrow AI, which excels only at tasks it was pretrained and fine-tuned for, AGI can generalize outside its training distribution: combining broad knowledge with expert-level depth, transferring knowledge and skills to untrained domains, and handling novel, unfamiliar problems.

🔑 Key traits of AGI

  • Generality: Operates across diverse domains, transferring knowledge and skills beyond the contexts it was trained for.
  • Specialization: A true AGI isn't only a "generalist"; it must also be able to develop depth in specific domains.
  • Open-Endedness: AGI is not constrained to a fixed set of tasks or goals but continually evolves new skills, strategies, and modes of thought. It explores unbounded problem spaces, generates novel directions beyond its training, and adapts to challenges in ways that expand its own cognitive frontier.
  • Wisdom: Combines knowledge and insights from comprehensive and diverse domains, perspectives, and experiences into cohesive understanding and discernment beyond raw intelligence.
  • Intuition: Applies gathered knowledge and develops intuitive judgment that holds up in ambiguous, messy situations.
  • Adaptability: Stays resilient, improvises, learns continuously, and updates strategies in dynamic or unfamiliar environments - in alignment with diverse goals and constraints - without full retraining.
  • Autonomy and Agency: Sets and pursues goals independently, acting effectively without constant human oversight.
  • Self-Reflection and Meta-Learning: Maintains an internal model of its own states, goals, capabilities, and limitations, and uses that model to monitor, interpret, explain, and regulate its behavior. Evaluates its own performance, recognizes errors, and refines its learning processes over time.
  • Creativity and Innovation: Generates novel ideas, strategies, and artifacts, extending beyond pre-programmed or trained patterns.
  • Reasoning and Planning: Performs grounded, causal, and adaptive reasoning, enabling robust multi-step strategies across unfamiliar and open-ended domains.
  • Value Alignment: Operates within human or multi-stakeholder ethical constraints.
  • Constraint-Aware Intelligence: True AGI balances generality with efficiency, making dynamic, intelligent trade-offs between breadth, depth, and resource use. It thrives within practical, dynamic constraints - adapting to real-world limits of time, energy, and context - rather than existing only as an impractically large, brute-force model.
  • Reliability: Delivers consistent, coherent, accurate, and trustworthy performance across diverse tasks and conditions, minimizing uncertainty and unintended behaviors.
  • Emotional Intelligence: Treats emotions as integral modes of thought that guide reasoning, problem-solving, and adaptation; regulates its focus, switches cognitive strategies, and aligns responses with social and contextual cues.
  • Specialized Coordination: Different cognitive subsystems or agentic components take responsibility for complementary functions (perception, reasoning, planning, creativity, execution) and collaboratively synthesize their outputs into unified intelligence.

Collective AGI

Collective AGI aligns with our AGI framing and key-trait specification, but distinguishes itself by emerging from a networked society of heterogeneous, plural, and distributed AI forms rather than from a central, monolithic one.

These diverse AI forms include heterogeneous AI models, cognitive architectures, and agentic systems of varying degrees of specialization and scope. They dynamically connect, coordinate, combine, and reconfigure to adapt, learn, and solve a wide variety of unrelated tasks - reaching the threshold of general intelligence not as an individual system, but as a collective.

This makes Collective AGI form-shifting and open-ended: a living ecosystem of intelligence that can solve new, unfamiliar, or complex challenges by reorganizing itself. In essence, it is the emergent general intelligence of groups, treating intelligence as a network phenomenon rather than an isolated property of a single monolithic system. By dynamically selecting and combining plural and specialized AI forms for specific contexts, Collective AGI achieves cognitive capabilities that outstrip any individual cognition, much as human societies achieve feats no single person could.
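The "dynamically selecting and combining" step can be made concrete with a toy capability registry: each specialized form advertises what it can contribute, and a coordinator composes a task-specific ensemble. Everything here - the `AIForm` and `CapabilityRegistry` names, the capability-set matching, the stub lambdas - is an illustrative assumption for this sketch, not an existing AGI Grid API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set

@dataclass
class AIForm:
    """One specialized AI form: a model, cognitive architecture, or agent."""
    name: str
    capabilities: Set[str]        # domains this form can contribute to
    run: Callable[[str], str]     # stub standing in for real inference

@dataclass
class CapabilityRegistry:
    """Coordinator that composes a task-specific ensemble of AI forms."""
    forms: List[AIForm] = field(default_factory=list)

    def register(self, form: AIForm) -> None:
        self.forms.append(form)

    def solve(self, task: str, needed: Set[str]) -> Dict[str, str]:
        # Select every form whose capabilities overlap the task's needs,
        # then combine their partial outputs into one keyed result.
        team = [f for f in self.forms if f.capabilities & needed]
        return {f.name: f.run(task) for f in team}

registry = CapabilityRegistry()
registry.register(AIForm("vision", {"perception"}, lambda t: f"percepts for {t}"))
registry.register(AIForm("planner", {"planning"}, lambda t: f"plan for {t}"))
registry.register(AIForm("poet", {"creativity"}, lambda t: f"metaphors for {t}"))

result = registry.solve("navigate a warehouse", {"perception", "planning"})
```

The point of the sketch is the reconfiguration step: the same registry can serve a creative task by requesting `{"creativity"}` instead, recruiting a different team without retraining anything.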

Where a monolithic AGI is imagined as a solitary genius, Collective AGI is a society of minds, in which diversity of specialization - different perspectives, strategies, strengths, and styles - continuously powers patterns of higher intelligence. It is not trained toward a single destiny, but grows through diverse contributions, conflicting incentives, and cooperative alignments, just as real intelligence does in human communities.

The goal is not a "god-like" singular AI but a pluralistic, distributed, and emergent intelligence - a society of minds capable of general and superintelligent performance through collaboration.

👉 This is less about "many small models instead of one big model" and more about a philosophy of intelligence: that true generality is open-ended, evolutionary, societal/collective, emergent, plural, relational, and ongoing - not monolithic.

Analogy

  • Single AGI = one giant brain in a single body.
  • Collective AGI = societies, ecosystems, or an internet of intelligent beings working together - complex, messy, dynamic, but powerful.

Beyond Scaling: Towards Collective Intelligence and AGI

At the 2023 Hawking Fellowship at the Cambridge Union, a student asked Sam Altman a question that strikes at the heart of today's AI debate: "To get to AGI, can we just keep min-maxing language models, or is there another breakthrough that we haven't really found yet?"

Altman's response was telling: "We need another breakthrough. We can still push on large language models quite a lot, and we will do that. We can take the hill that we're on and keep climbing it, and the peak of that is still pretty far away. But, within reason, I don't think that doing that will get us to AGI. If, for example, superintelligence can't discover novel physics, I don't think it's a superintelligence. Teaching it to clone the behavior of humans and human text - I don't think that's going to get there."

This echoes a long-standing question in AI research: what lies beyond language modeling in the pursuit of true general intelligence?


The Mirage of Scaling

Over the past several years, we have seen relentless progress in scaling transformers: larger datasets, larger models, and improved training pipelines. With each advance came new capabilities:

  • Prompt engineering and chain-of-thought reasoning for better structured problem solving.
  • Retrieval-augmented generation (RAG) for fresher, more grounded knowledge.
  • Mixture of Experts (MoE) architectures for efficiency and specialization.
  • Expanded context windows for richer reasoning and memory.
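Of the techniques above, RAG is the most mechanical to sketch: retrieve relevant documents, then prepend them to the prompt so the model answers from fresher, grounded context. The toy version below uses naive keyword overlap where a production system would use a vector store; all function names and the tiny corpus are illustrative assumptions.

```python
from typing import List

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: List[str]) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Transformers scale with data and parameters.",
    "Quorum sensing lets bacteria coordinate.",
    "Mixture of Experts routes tokens to specialists.",
]
prompt = build_prompt("How do experts get selected in Mixture of Experts?", corpus)
```

The assembled prompt would then be passed to any language model; only the retrieval step changes as the corpus is updated, which is what keeps the knowledge fresh without retraining.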

These advances have made AI systems faster, broader in scope, and more accurate on factual benchmarks. Yet they remain limited in creativity: they excel at producing answers that can be graded right or wrong, but struggle to generate genuinely novel insights.


The Minsky Reminder

As Marvin Minsky once wrote: "What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle."

This observation remains striking today. Scaling a single architecture might lead to more powerful tools, but it does not inherently produce the diverse, generative creativity that defines intelligence.


Beyond Language Models: The Case for Collective Intelligence as a Stepping Stone to Collective AGI

If intelligence arises not from one principle but from diverse interplay, then perhaps the path to AGI lies in multi-entity systems rather than single-entity models.

Collective intelligence - the phenomenon where the collaboration and competition of many entities leads to emergent problem-solving - is well documented in biology, sociology, and technology. From bacteria coordinating through quorum sensing to humans forming societies, intelligence emerges from interaction.

In AI, researchers are beginning to explore this through LLM-based agents. A recent survey described how, by harnessing communication and evolution within an agent society, we can simulate biological dynamics, conduct sociological experiments, and even unlock new insights for human society (The Rise and Potential of Large Language Model Based Agents: A Survey, arXiv).


Simple CI Example: Agents, Orchestrators, and Simulated Debate

The emerging design pattern is clear:

  • Agents represent different viewpoints, principles, or data contexts. Some may be personal - a copilot trained on an individual's data, acting as a digital chief of staff. Others may be external.
  • Orchestrators manage these agents, setting higher-order goals and guiding discussion toward useful outcomes.
  • Merged entities emerge when agents representing many experts in a domain (e.g., economists, traders, scientists) consolidate into a master agent, which can then engage in structured debate with others.
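The agent/orchestrator/debate pattern above can be sketched in a few lines. Canned lambdas stand in for LLM-backed agents, and the class and method names (`Agent`, `Orchestrator.debate`, the trivial judge) are assumptions for illustration, not a real framework.

```python
from typing import Callable, List, Tuple

class Agent:
    """Represents one viewpoint, principle, or data context."""
    def __init__(self, name: str, respond: Callable[[str, list], str]):
        self.name = name
        self.respond = respond  # (question, transcript so far) -> argument

class Orchestrator:
    """Sets the goal, runs debate rounds, and selects an outcome."""
    def __init__(self, agents: List[Agent], judge: Callable[[list], str]):
        self.agents = agents
        self.judge = judge

    def debate(self, question: str, rounds: int = 2) -> Tuple[list, str]:
        transcript: List[Tuple[str, str]] = []
        for _ in range(rounds):
            for agent in self.agents:
                # Each agent sees the debate so far and adds its argument.
                transcript.append((agent.name, agent.respond(question, transcript)))
        return transcript, self.judge(transcript)

# Canned stand-ins for LLM-backed agents with opposing views.
bull = Agent("economist", lambda q, t: "rates will fall")
bear = Agent("trader", lambda q, t: "rates will hold")

# A deliberately trivial judge: take the economist's latest position.
orch = Orchestrator([bull, bear], judge=lambda t: dict(t)["economist"])
transcript, decision = orch.debate("Where are interest rates heading?")
```

A "merged entity" from the third bullet would simply be another `Agent` whose respond function aggregates a sub-debate of its own - orchestrators and agents compose.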

This system resembles traditional collective intelligence, where groups deliberate to reach a decision, but at machine speed and scale.


The Open Question: Symbiosis or Divergence?

So, how does this paradigm relate back to the core scaling trajectory of LLMs?

On one hand, scaling makes LLMs more capable, which makes individual agents better. On the other hand, collective systems introduce a qualitatively new mode of intelligence. It's plausible that the future lies not in choosing one path but in combining both:

  • Scaling creates stronger primitives (better reasoning agents).
  • Collective intelligence creates the architecture for creativity, diversity, and emergent insight.

The unanswered question is who will drive this frontier. Will large labs like OpenAI, Anthropic, and DeepMind continue to dominate, or will specialized initiatives - such as AGI Grid, which explicitly focuses on collective general intelligence - lead the charge?


Final Thought

AGI may not be the product of ever-larger models trained to mimic text. It may emerge instead from the diversity, debate, and dynamics of many intelligent entities interacting - much as human civilization itself evolved. Scaling will remain crucial, but perhaps the true breakthrough lies in building societies of minds, not just bigger ones.