Emergent AGI and the rise of distributed intelligence

Mohammed Gharbawi

Rapid advances in artificial intelligence (AI) have fuelled a lively debate on the feasibility and proximity of artificial general intelligence (AGI). While some experts dismiss the concept of AGI as highly speculative, viewing it primarily through the lens of science fiction (Hanna and Bender (2025)), others assert that its development is not merely plausible but imminent (Kurzweil (2005); (2024)). For financial institutions and regulators, this dialogue is more than theoretical: AGI has the potential to redefine decision-making, risk management, and market dynamics. However, despite the wide range of views, most discussions of AGI implicitly assume that it will emerge as a singular, centralised, and identifiable entity – an assumption this paper critically examines and seeks to challenge.

AGI, for the purpose of this paper, refers to advanced AI systems able to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of human capabilities. Such advanced systems could fundamentally transform the financial system by enabling autonomous agents capable of complex decision-making, real-time market adaptation, and unprecedented levels of predictive accuracy. These capabilities could have an impact on everything from portfolio management and algorithmic trading to credit allocation and systemic risk modelling. Such profound shifts would pose significant challenges to regulators and central banks.

Traditional macroprudential and microprudential toolkits for ensuring financial stability and maintaining the safety and soundness of regulated firms may prove inadequate in a landscape shaped by superhuman intelligences operating at scale and speed. And since AGI could amplify systemic vulnerabilities as well as enhance productivity, there may be a need for new regulatory frameworks that account for algorithmic accountability, ethical decision-making, and the potential for concentrated technological power. For central banks, AGI could also reshape core functions such as monetary policy transmission, inflation targeting, and financial surveillance – requiring a rethinking of macrofinancial strategies in a world where machines, not markets, increasingly set the pace.

Conventional depictions of AGI tend to centre on the image of a single, powerful entity, an artificial mind that rivals or surpasses human cognition in every domain. However, this view may overlook a more plausible route: the emergence of AGI from a constellation of interacting AI agents. Such powerful agents, each specialised in narrow tasks, might collectively give rise to general intelligence not through top-down design, but through the bottom-up processes characteristic of complex systems or networks. This hypothesis draws on established concepts in biology, systems theory, and network science, particularly the principles of swarm intelligence and decentralised collaborative processes (Bonabeau et al (1999); Johnson (2001)).

The idea that intelligence can arise from decentralised systems is not new. There are many examples in nature to suggest that emergent cognition can manifest in distributed forms. Ant colonies, for example, demonstrate how relatively simple individual organisms can collectively achieve complex engineering, navigation, and problem-solving tasks. This phenomenon, known as stigmergy, enables ants to co-ordinate effectively without centralised direction by, for example, using environmental modifications such as pheromone trails (Bonabeau et al (1999)).
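The mechanism is easy to simulate. The sketch below is a minimal, illustrative Python model (all parameters – grid size, noise level, evaporation rate – are invented for the example): the agents never exchange messages, yet shared trails emerge because each one reads and reinforces pheromone left in a common environment.

```python
import random

# A minimal sketch of stigmergy: hypothetical "ants" co-ordinate purely by
# depositing and following pheromone on a shared grid. No ant communicates
# directly with any other, yet common trails emerge.
GRID = 20
pheromone = [[0.0] * GRID for _ in range(GRID)]
ants = [[random.randrange(GRID), random.randrange(GRID)] for _ in range(30)]

def step():
    for ant in ants:
        x, y = ant
        # Move towards the neighbouring cell with the most pheromone,
        # with a little noise so new trails can still be explored.
        neighbours = [((x + dx) % GRID, (y + dy) % GRID)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        if random.random() < 0.2:
            ant[0], ant[1] = random.choice(neighbours)
        else:
            ant[0], ant[1] = max(neighbours, key=lambda c: pheromone[c[0]][c[1]])
        pheromone[ant[0]][ant[1]] += 1.0      # modify the shared environment
    for row in pheromone:                      # evaporation keeps trails current
        for i in range(GRID):
            row[i] *= 0.95

for _ in range(100):
    step()
```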

Similarly, the human brain, with its billions of interconnected neurons, exemplifies collective intelligence. No single neuron possesses intelligence in isolation; rather, it is the complex interactions between neurons that give rise to consciousness and cognition (Kandel et al (2000)). Human societies may also be viewed as a form of distributed cognitive system (Hutchins (1996); Heylighen (2009)). Collective human activity, through collaboration and innovation across generations, has driven scientific breakthroughs, technological advances, and cultural evolution.

Recent technical advances in multi-agent AI models provide further support for the plausibility of distributed AGI. Research has shown that simple AI agents, interacting in dynamic environments, can develop sophisticated collective behaviours that are not explicitly programmed but which emerge spontaneously from those interactions (Lowe et al (2017)). Real world examples of such processes include using multi-agent AI systems to manage complex logistical networks (Kotecha and del Rio Chanona (2025)); to build trading algorithms that adjust dynamically to market conditions (Noguer I Alonso (2024)); and to co-ordinate traffic signal control systems (Chu et al (2019)).
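To illustrate how collective behaviour can emerge without being programmed into any individual, consider the toy model below (a hypothetical Python sketch, not drawn from any of the cited papers). Each trading agent follows a one-line rule, yet the price path – an aggregate that no single agent computes – adapts to a shock in fundamentals.

```python
import random

# Illustrative only: a toy market in which each agent follows a one-line
# trading rule, yet the price path -- an aggregate that no individual agent
# computes -- adapts to a shock. All parameters are invented for the sketch.
random.seed(1)
price = 100.0
views = [random.gauss(100, 5) for _ in range(200)]   # private fair-value views

for t in range(500):
    if t == 250:                                     # exogenous shock to fundamentals
        views = [v + 10 for v in views]
    # Each agent submits a one-unit order towards its own view of fair value.
    demand = sum(1 if v > price else -1 for v in views)
    price += 0.01 * demand                           # price impact of net order flow
    views = [v + 0.01 * (price - v) for v in views]  # agents also learn from the market
    if t % 100 == 0:
        print(f"t={t:3d}  price={price:7.2f}")
```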

Other case studies include DeepMind’s AlphaStar, comprising multiple specialised agents interacting collectively to achieve expert-level mastery of the complex real-time strategy game StarCraft II (Vinyals et al (2019)). Similarly, developments such as AutoGPT illustrate how multi-agent frameworks can autonomously perform sophisticated, multi-stage tasks in a wide variety of contexts. The internet, populated by countless autonomous bots, services, and APIs, already constitutes a proto-ecosystem potentially conducive to the emergence of more advanced, decentralised cognitive capabilities.

While these examples of distributed systems clearly do not have the agency and intentionality necessary for general intelligence, they do provide a conceptual foundation for envisioning AGI not as a single entity but as a distributed ecosystem of co-operating agents.

Distributed systems present several advantages over centralised models, such as adaptability, scalability, and resilience. In a distributed system, individual components or entire agents can be updated, replaced, or removed with minimal disruption. The overall system evolves, akin to a biological ecosystem, such that advantageous behaviours proliferate and obsolete ones fade. This evolutionary potential makes such systems far more responsive to new challenges than centralised structures (Barabási (2016)).

Distributed AGI systems may also be more robust than centralised systems. They do not have single points of failure; if one part malfunctions or is compromised, others can compensate. Furthermore, just as ecosystems maintain balance through biodiversity, distributed AI can tolerate and adapt to disruption. When one approach fails, others may succeed. This fault tolerance not only protects the system but can also inspire innovation. Different agents might trial varying strategies simultaneously, yielding solutions that no single AI could have independently devised. Such experimentation at scale makes distributed AGI an engine for innovation as much as intelligence.
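A stylised sketch of this fault-tolerance argument follows (hypothetical Python; the ‘agents’ are simply functions standing in for independent strategies). The collective still answers correctly when one component fails outright, because the aggregator only needs the survivors.

```python
# A sketch of fault tolerance through redundancy: several hypothetical agents
# attempt the same task with different strategies, and the system degrades
# gracefully when one fails, because aggregation only needs the survivors.
def agent_a(x): return x * x                       # strategy 1
def agent_b(x): raise RuntimeError("agent down")   # a failed component
def agent_c(x): return sum(x for _ in range(x))    # strategy 2, same task

def collective(x, agents):
    answers = []
    for agent in agents:
        try:
            answers.append(agent(x))
        except Exception:
            continue                               # other agents compensate
    if not answers:
        raise RuntimeError("total system failure")
    return max(set(answers), key=answers.count)    # majority vote of survivors

print(collective(7, [agent_a, agent_b, agent_c]))  # 49, despite agent_b failing
```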

However, the distributed emergence of AGI introduces significant new challenges and risks. Unlike centralised systems, distributed intelligence may develop incrementally, making early detection and oversight challenging. Traditional benchmarks for assessing individual agent performance may fail when applied to the cumulative outputs of agent interactions, potentially missing the emergence of collective intelligence (Wooldridge (2009)). In addition, the inherent unpredictability and opacity of such systems complicate governance and control, analogous to complex societal phenomena or financial crises, such as the 2008 global financial crisis (Easley and Kleinberg (2010)).
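The point about benchmarks can be made with a back-of-the-envelope simulation (purely synthetic Python, which assumes agents’ errors are independent – a condition real systems will not satisfy exactly). Testing any single agent suggests mediocre capability, while the voting collective scores far higher.

```python
import random

# Why per-agent benchmarks can miss collective capability: each simulated
# agent answers a yes/no question correctly only 60% of the time, yet a
# majority vote over 51 such agents is right far more often.
# (Synthetic; agents' errors are assumed independent.)
random.seed(0)
N_AGENTS, N_QUESTIONS, P_CORRECT = 51, 10_000, 0.6

individual_hits = 0
collective_hits = 0
for _ in range(N_QUESTIONS):
    votes = [random.random() < P_CORRECT for _ in range(N_AGENTS)]
    individual_hits += votes[0]                    # benchmark a single agent
    collective_hits += sum(votes) > N_AGENTS / 2   # benchmark the group

print(f"single agent:   {individual_hits / N_QUESTIONS:.2f}")   # ~0.60
print(f"the collective: {collective_hits / N_QUESTIONS:.2f}")   # ~0.93
```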

Governance mechanisms will need to evolve significantly to address the unique challenges posed by advanced AI systems, particularly as they approach AGI. Unlike narrow AI, AGI systems may exhibit autonomy, adaptability, and the capacity to act across multiple domains, making traditional oversight mechanisms inadequate. These challenges are amplified if AGI emerges not as a single entity but as a distributed phenomenon – arising from the interaction of multiple autonomous agents across networks. In such cases, monitoring and accountability become particularly complex, as no single component may be solely responsible for a given outcome. For example, emergent behaviours can arise from the collective dynamics of otherwise benign agents, echoing patterns seen in financial markets or ecosystems (Russell (2019)).

This complicates questions of legal liability: if a distributed AGI system causes harm, how should responsibility be allocated? Existing legal frameworks, which rely on clear chains of command and intent, may struggle to accommodate such diffusion. Ethical concerns also deepen in this context, especially if these systems exhibit traits associated with consciousness or moral agency, as some theorists have speculated (Bostrom and Yudkowsky (2014)). Rather than attempting to address all of these dimensions at once, it is crucial to prioritise the development of robust frameworks for interoperability, accountability, and early detection of emergent behaviour.

Critics highlight the considerable challenges associated with achieving distributed AGI. Maintaining alignment of decentralised agents with respect to coherent strategic objectives and preserving a unified sense of identity are non-trivial problems. Fragmentation, where subsystems develop incompatible or conflicting goals, is a further legitimate concern (Goertzel and Pennachin (2007)). However, parallels exist in human societies, which frequently navigate comparable issues through shared cultural norms and institutional frameworks, suggesting these challenges may not be insurmountable.

The emergence of AGI carries far-reaching policy implications that demand proactive attention from regulators, central banks, and other financial policy makers. Existing regulatory frameworks, designed around human decision-making and conventional algorithmic systems, may be ill-equipped to govern entities with general intelligence and adaptive autonomy. Policies will need to address questions such as transparency, accountability, and liability – especially when AGI systems make high-impact decisions that may affect markets, institutions, or consumers. There may also be a need for new supervisory approaches for monitoring AGI behaviour in real time and assessing systemic risk arising from interactions between multiple intelligent agents. In addition, the geopolitical and economic implications of AGI concentration (where a few entities control the most powerful systems) could raise concerns about market fairness and financial sovereignty.

Central banks and regulators must, therefore, not only anticipate the technical trajectory of AGI but could also help shape its development through, for example, standards, governance protocols, and international co-operation to ensure it aligns with public interest and financial stability. In other words, proactively addressing these challenges will be critical to ensuring that distributed AGI develops responsibly and remains aligned with prevailing societal values.


Mohammed Gharbawi works in the Bank’s Fintech Hub Division.

If you want to get in touch, please email us at [email protected] or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.
