Artificial General Intelligence (AGI), a system with intelligence comparable or superior to human intelligence across all domains, is still regarded as the holy grail of technology. In 2025, however, an alternative path is emerging more clearly: we are approaching AGI not as a unified system, but through an increasingly convincing illusion created by combining multiple specialized narrow AIs.
The Mosaic of Artificial Intelligence
Today's AI excels at specific tasks: Large Language Models (LLMs) handle text, models such as Midjourney or DALL-E generate images, and AlphaFold predicts protein structures. Although individually limited, when integrated into a coordinated ecosystem these narrow AIs create an appearance of general intelligence, a "proxy" for AGI.
According to Stanford University's AI Index 2025 report, despite significant progress, AI continues to face obstacles in the area of complex reasoning.
The most advanced models solve highly structured problems but show marked limitations in multi-step logical reasoning, sequential planning, and abstract thinking.
The Society of Mind Approach and Multi-Agent Systems
In 2025, artificial intelligence is rapidly evolving from a niche technology to a strategic element of the technological and social landscape, with profound cultural and ethical implications.
This has led to the emergence of agentic AI systems that bring us closer to the horizon of general artificial intelligence.
In multi-agent systems, each agent operates independently, using local data and autonomous decision-making processes without depending on a central controller.
Each agent has a local view but none possesses a global view of the entire system. This decentralization allows agents to handle tasks individually while contributing to the overall goals through interaction.
In 2025, multi-agent systems-where multiple AI agents collaborate to achieve complex goals-are becoming more prevalent. These systems can optimize workflows, generate insights, and assist in decision-making in various domains.
For example, in customer service, AI agents handle complex requests; in manufacturing, they supervise production lines in real time; in logistics, they coordinate supply chains dynamically.
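The decentralized pattern described above can be sketched in a few lines: specialized agents, each with only a local view, coordinate through message passing rather than a central controller. The agent names, topics, and routing logic below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Message:
    topic: str
    payload: str

class Agent:
    """A narrow AI stand-in: handles only the topics it knows about."""
    def __init__(self, name, topics):
        self.name = name
        self.topics = set(topics)   # the agent's local view of the world
        self.handled = []

    def interested(self, msg: Message) -> bool:
        return msg.topic in self.topics

    def handle(self, msg: Message) -> str:
        self.handled.append(msg)
        return f"{self.name} processed '{msg.payload}'"

class Bus:
    """Broadcasts messages; no agent ever sees the whole system."""
    def __init__(self, agents):
        self.agents = agents

    def publish(self, msg: Message):
        return [a.handle(msg) for a in self.agents if a.interested(msg)]

# Hypothetical agents mirroring the domains mentioned above.
support = Agent("support", ["customer_request"])
factory = Agent("factory", ["line_status"])
logistics = Agent("logistics", ["shipment", "line_status"])

bus = Bus([support, factory, logistics])
results = bus.publish(Message("line_status", "station 3 slowed down"))
print(results)  # factory and logistics both react; support ignores it
```

The point of the sketch is the absence of a global coordinator: the bus only delivers messages, and each agent decides locally whether to act.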
The Computational Plateau and Physical Barriers
Despite impressive progress, traditional computational development is beginning to plateau. From 1959 to 2012, the compute used to train AI models doubled roughly every two years, tracking Moore's Law. After 2012, however, the doubling time shrank dramatically to about 3.4 months, more than seven times the previous rate.
This dramatic increase in the computational power required underscores how economically challenging it has become to achieve significant progress in the field of AI.
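The gap between the two regimes is easy to quantify from the doubling times alone:

```python
# Compare the two compute-growth regimes described above:
# doubling every 24 months (pre-2012) vs. every 3.4 months (post-2012).

pre_2012_doubling_months = 24
post_2012_doubling_months = 3.4

# Growth factor over one year in each regime: 2^(12 / doubling time)
pre_growth = 2 ** (12 / pre_2012_doubling_months)    # ~1.41x per year
post_growth = 2 ** (12 / post_2012_doubling_months)  # ~11.6x per year

# How many times faster the new doubling rate is than the old one
rate_ratio = pre_2012_doubling_months / post_2012_doubling_months

print(f"pre-2012:  x{pre_growth:.2f} per year")
print(f"post-2012: x{post_growth:.2f} per year")
print(f"doubling rate ratio: {rate_ratio:.1f}x")
```

A roughly 1.4x annual growth versus a roughly 11.6x annual growth makes concrete why sustaining the post-2012 trajectory is economically punishing.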
The Promise of Quantum Computing
Quantum computing could overcome this obstacle, offering a paradigm shift in the computational capacity needed for even more sophisticated models. In 2025, quantum computing is emerging as a crucial tool to address these challenges as technology companies embrace alternative power sources to keep pace with the growing energy consumption of AI.
According to a forecast by Arvind Krishna, CEO of IBM, thanks to rapid advances in quantum computing, AI energy and water consumption could be reduced by up to 99 percent in the next five years.
This technology promises to unlock now unimaginable computing capabilities and open new frontiers in scientific research.
A major advance was announced in March 2025 by D-Wave Quantum, which published a peer-reviewed paper titled "Beyond-Classical Computation in Quantum Simulation," demonstrating that their annealing quantum computer has outperformed one of the world's most powerful classical supercomputers in solving complex simulation problems of magnetic materials.
The year 2025 has seen transformative progress in quantum computing, with major advances in hardware, error correction, integration with AI, and quantum networking. These developments are redefining the potential role of quantum computing in areas such as healthcare, finance, and logistics.
However, according to Forrester, quantum computing still remains experimental despite advances in 2025 and has not yet demonstrated a practical advantage over classical computers for most applications.
The Quantum Race: Microsoft vs. Google?
Microsoft claims to have made significant progress in quantum computing with its Majorana 1 chip, introduced in early 2025. This processor features a new Topological Core architecture, built around eight topological qubits that exploit Majorana quasiparticles, exotic "half-electron" states prized for their intrinsic resistance to errors.
Google, on the other hand, has taken a different approach with its Willow quantum chip, which tackles the traditional problem of error rates growing as qubit counts increase: Willow's error rate actually drops as more qubits are added.
These two different strategies represent fundamentally different approaches to quantum computing, with Microsoft focusing on topology and Google on error optimization.
Cognitive Barriers that Persist
In addition to hardware limitations, composite AIs face other fundamental barriers:
Causal understanding: systems correlate variables but struggle to isolate true cause-and-effect relationships. More broadly, AI still falls short in understanding and responding to human emotions, making decisions in crisis situations, and weighing ethical and moral considerations.
Continuous learning: neural networks lose accuracy when trained sequentially on different tasks, a phenomenon known as "catastrophic forgetting."
Meta-cognition: AIs lack an internal model of their own cognition, limiting true self-improvement.
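The continuous-learning barrier can be demonstrated even with a toy one-parameter model: training on task B by gradient descent simply overwrites what was learned for task A. The tasks, data, and learning rate below are arbitrary illustrations, not a real benchmark.

```python
# Toy illustration of catastrophic forgetting: a single-weight linear
# model y = w * x trained on task A (y = 2x), then on task B (y = -2x).

def train(w, data, lr=0.1, steps=200):
    # Plain gradient descent on squared error for the prediction w * x.
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # samples from y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # samples from y = -2x

w = 0.0
w = train(w, task_a)
err_a_before = mse(w, task_a)   # near zero: task A is learned

w = train(w, task_b)            # sequential training on task B...
err_a_after = mse(w, task_a)    # ...erases task A entirely

print(err_a_before, err_a_after)
```

With a single shared parameter the effect is total; in deep networks it is partial but pervasive, which is why techniques such as replay buffers and regularization of important weights exist.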

Toward an AGI "by Proxy"
The scientific community remains divided on the technologies and timeframes needed to reach Artificial General Intelligence (AGI), but the debate is producing interesting new proposals that are already finding practical application in research on new AI systems.
2025 could be the year when the first agent systems go into production in companies.
AGI remains the most ambitious goal: systems with cognitive capabilities comparable to or greater than humans', capable of understanding, learning, and applying knowledge across the board.
Rather than waiting for a monolithic AGI, the more likely future will see the emergence of what we might call "front AGIs"-systems that appear to possess general intelligence through:
- Orchestration of AI microservices: Several specialized AIs coordinated through a common abstraction layer.
- Unified conversational interfaces: A single interface that hides the complexity of multiple underlying systems.
- Limited cross-domain learning: Selective sharing of knowledge between specific domains.
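The first two ingredients can be sketched together: a single conversational entry point that routes each request to a specialized backend behind a common abstraction layer. The backend names and keyword routing below are hypothetical stand-ins, not real services.

```python
from typing import Callable, Dict

# Hypothetical "front AGI" facade: one interface, many narrow backends.
# Each function is a stand-in for a specialized model.

def text_model(prompt: str) -> str:
    return f"[text model] answer to: {prompt}"

def image_model(prompt: str) -> str:
    return f"[image model] rendering: {prompt}"

def protein_model(prompt: str) -> str:
    return f"[protein model] folding: {prompt}"

class FrontAGI:
    """Unified conversational interface hiding multiple narrow AIs."""

    def __init__(self):
        # Common abstraction layer: capability keyword -> specialized service.
        self.services: Dict[str, Callable[[str], str]] = {
            "draw": image_model,
            "fold": protein_model,
        }

    def ask(self, prompt: str) -> str:
        # Naive keyword routing; a real orchestrator would use a
        # learned router or an LLM-based planner instead.
        for keyword, service in self.services.items():
            if keyword in prompt.lower():
                return service(prompt)
        return text_model(prompt)  # default: the general text backend

agi = FrontAGI()
print(agi.ask("Draw a red bridge at sunset"))   # routed to the image model
print(agi.ask("Explain multi-agent systems"))   # falls through to text
```

To the user there is one "intelligence"; underneath, the appearance of generality comes entirely from routing between narrow systems.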
Consciousness: Reality or Shared Illusion?
In the AGI debate, we tend to assume that humans are endowed with a "consciousness" that machines cannot replicate. But perhaps we should ask a more radical question: is human consciousness itself real or is it too an illusion?
Some neuroscientists and philosophers of mind, such as Daniel Dennett, have proposed that what we call "consciousness" might itself be a post-hoc narrative: an interpretation that the brain constructs to make sense of its own operations.
If we view consciousness not as a mysterious, unitary property, but as a set of interconnected neural processes that generate a convincing illusion of a unified "self," then the boundary between humans and machines becomes less clear.
From this perspective, we might view the differences between emergent AGI and human intelligence as differences in degree rather than in kind. The illusion of understanding we see in advanced language models may not be so different from the illusion of understanding we experience ourselves-both emerging from complex networks of processes, though organized in fundamentally different ways.
This perspective raises a provocative question: if human consciousness is itself a simulation emerging from multiple interconnected cognitive processes, then the "proxy" AGI we are constructing-a mosaic of specialized systems working together to simulate a general understanding-might be strikingly similar to our own mental architecture.
We would not be trying to replicate a magical, ineffable quality, but rather to reconstruct the convincing illusion that we ourselves experience as consciousness.
This reflection does not diminish the depth of the human experience, but it does invite us to reconsider what we really mean when we talk about "consciousness" and whether this concept is really an insurmountable obstacle for artificial intelligence, or simply another process that we may someday be able to simulate.

Conclusion: Rethinking the Goal
Perhaps we should radically reconsider our definition of AGI. If human consciousness itself could be an emergent illusion-a narrative that the brain constructs to make sense of its own operations-then the sharp distinction between human and artificial intelligence becomes less defined.
Some experts predict that 2027 could mark a pivotal moment for AI: at the current pace, models could achieve cognitive generality, the ability to tackle any human task, within a few years.
This scenario should not be seen simply as a replication of human intelligence, but as the emergence of a new kind of intelligence-neither fully human nor fully artificial, but something different and potentially complementary.
This approach frees us from trying to replicate something we may not fully understand-human consciousness-and instead allows us to focus on what artificial intelligence can do on its own terms. The AGI that will emerge will thus not be a single system "pretending" to be human, but an integrated technological ecosystem with its own emergent features-a distributed intelligence that, paradoxically, may reflect the fragmented and interconnected nature of our own cognition more than we initially thought.
In this sense, AGI research becomes less an attempt to emulate the human and more a journey of discovery about the very nature of intelligence and consciousness, both human and artificial.