Complexity Science for AI?

AI technologies are driven primarily by advances in machine learning, particularly deep learning and natural language processing. An example is ChatGPT, which is built on the Transformer architecture and employs deep neural networks with attention mechanisms to process and generate human-like text. While the architecture of these models is inherently complex, characterised by vast numbers of parameters and intricate layers, they do not rely heavily on Complexity Science as a core framework in their design or functionality.

There are actually some indirect connections between AI and Complexity Science. Deep neural networks, for instance, can be conceptualised as complex systems where simple components (neurons) interact to produce emergent behaviours, such as understanding and generating language. While Complexity Science provides valuable insights into such emergent phenomena, these principles are not the primary foundation for AI model development. Complexity Science concepts are also applied in training optimisation, where researchers study high-dimensional optimisation landscapes, convergence properties, and loss surface dynamics to improve the stability and efficiency of training processes. Interpretability in AI benefits from complexity-based approaches like network theory and information theory, which help uncover how information flows through neural networks. Another area of overlap is robustness and generalisation, where ideas from Complexity Science and statistical mechanics, such as phase transitions and criticality, aid in understanding why large, over-parameterised models perform well in real-world scenarios.
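To make one of these overlaps concrete, here is a small, self-contained sketch (my own toy illustration, not drawn from any particular interpretability paper) of the information-theoretic angle: it treats the hidden layer of a tiny, randomly initialised network as a channel and uses a crude histogram estimate of mutual information to ask how many bits each hidden unit carries about one input feature. All names and parameters are arbitrary choices for illustration.

```python
# Toy sketch: information flow through a tiny random network,
# measured with a histogram-based mutual-information estimate.
# An illustration of the idea, not a production method.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_in, n_hidden = 5000, 10, 4
X = rng.normal(size=(n_samples, n_in))                            # toy inputs
W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_hidden))  # random weights
T = np.tanh(X @ W)                                                # hidden activations

def mutual_information(x, t, bins=8):
    """Crude histogram estimate of I(x; t) in bits for two 1-D variables."""
    joint, _, _ = np.histogram2d(x, t, bins=bins)
    p_xt = joint / joint.sum()
    p_x = p_xt.sum(axis=1, keepdims=True)
    p_t = p_xt.sum(axis=0, keepdims=True)
    nz = p_xt > 0
    return float(np.sum(p_xt[nz] * np.log2(p_xt[nz] / (p_x @ p_t)[nz])))

# How much does each hidden unit "know" about the first input feature?
for j in range(n_hidden):
    print(f"I(x_0; t_{j}) ~ {mutual_information(X[:, 0], T[:, j]):.3f} bits")
```

Estimators like this become unreliable in high dimensions, which hints at why such complexity-inspired diagnostics are hard to apply at the scale of modern models.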

Despite these connections, we must acknowledge that Complexity Science has not been a major driving force in the development of AI technologies like ChatGPT. The creation of such models relies more on advances in neural network architectures, data processing, and algorithmic optimisation than on the theoretical foundations of Complexity Science.

There are real opportunities in enriching AI with principles from Complexity Science. Doing so could enhance the adaptability, robustness, and interpretability of AI systems by providing better ways of managing dynamic, non-linear interactions and uncertainty in real-world environments. This integration could enable the creation of AI models that handle emergent behaviours more effectively, excel in predictive analytics, and exhibit greater resilience, moving the field closer to achieving general, human-like intelligence.

The challenges lie mainly on the side of Complexity Science. One major limitation is its lack of standardised, predictive methods that can be broadly applied to complex systems. Most Complexity Science models are descriptive or exploratory, emphasising qualitative understanding over quantitative prediction. Additionally, complex systems are often highly context-dependent, making it difficult to generalise findings or develop uniform approaches. Computational intensity is another barrier; many complexity-based models, such as agent-based simulations, struggle to scale to the large datasets typical in AI. Furthermore, Complexity Science has historically focused on theoretical and simulation-based methods, while AI thrives on data-driven approaches, creating a methodological gap. Finally, the absence of a unified theoretical framework in Complexity Science still makes it challenging to translate its principles into practical, standardised tools for AI.

Complexity Science offers profound insights into the behaviour of complex systems but remains underdeveloped in areas critical for its integration with AI, such as predictive capability, scalability, and standardisation. As interdisciplinary research progresses and computational capabilities grow, these limitations may be addressed, unlocking new opportunities for AI systems to benefit from the rich, nuanced perspectives of Complexity Science.

Non-Accumulative Adaptability

Exploring ideas about adaptation and emergence as part of ecosystem (i.e. complex adaptive system, CAS) development, I find it more exciting when we see them through the combined lenses of CAS, Schumpeter, Kuhn, Foucault, and Lyotard. Each of these perspectives explores how change does not just happen bit by bit, but instead in bold and disruptive leaps, as transformations that completely alter the playing field, whether we’re talking about economies, sciences, societies, or even our basic understanding of the world.

CAS implies that change is a matter of adaptive cycles — cycles of growth, accumulation, collapse, and renewal. An ecosystem grows and accumulates resources until it hits a limit. Then its whole structure becomes unsustainable, collapses, and reboots in a new way — it reorganises itself with fresh relationships and opportunities. This cycle is anything but smooth; it’s like a forest fire clearing the way for new growth, and it’s essential for resilience and long-term adaptability. This model resonates closely with Schumpeter’s idea of creative destruction in economies. Schumpeter saw capitalism as a system where innovation doesn’t build up neatly on top of the old but bulldozes it — new technologies, businesses, and products disrupt markets, toppling established companies and paving the way for the next wave of growth. For Schumpeter, entrepreneurs drive this cycle, constantly reinventing the economy and shifting the landscape in unexpected ways.

Thomas Kuhn brought a similar idea into science with his concept of paradigm shifts. In Kuhn’s view, science isn’t a smooth, cumulative process of adding one discovery to the next. Instead, it moves forward in fits and starts. Scientists work within a “paradigm” — a shared framework for understanding the world — until enough anomalies build up that the whole system starts to feel shaky. At that point, someone comes along with a radically new idea that doesn’t just tweak the existing framework but replaces it. Kuhn’s paradigm shift is a profound reimagining of the rules, kind of like Schumpeter’s creative destruction but applied to the way we think and know. It’s as if science periodically wipes the slate clean and rebuilds itself from a fresh perspective.

As a Gen-Xer, I must also mention Michel Foucault, who offered a more historical spin on these ideas with his concept of epistemes. Foucault believed that every era has its own underlying structure of knowledge, shaping how people perceive and think about the world. These epistemes don’t evolve smoothly; they’re punctuated by abrupt shifts where the entire basis of understanding changes. Just like in a Kuhnian paradigm shift, when a new episteme takes over, it fundamentally changes what questions are even worth asking, as well as who holds power in the discourse. In Foucault’s view, knowledge isn’t just a collection of facts piling up — it’s tied to shifts in power and perspective, with each era replacing the last in a way that’s not fully compatible with what came before.

Then there’s Jean-François Lyotard, who takes this a step further by challenging the very notion of cumulative “progress” altogether. As a postmodernist, Lyotard argued that the grand narratives that used to make sense of history, science, and knowledge are breaking down. Instead of one single, upward trajectory, we’re left with multiple, fragmented stories that don’t fit neatly together. Knowledge, for Lyotard, is no longer a matter of moving toward some ultimate truth but an evolving patchwork of perspectives. This rejection of a single narrative echoes Schumpeter’s and Kuhn’s visions of disruption and replacement over seamless continuity. Lyotard’s work suggests that, in knowledge and culture alike, stability is always provisional, subject to the next seismic shift in understanding.

Let’s imagine they could talk to each other

So when we look at all these thinkers together, a fascinating picture emerges. In CAS, Schumpeter’s economics, Kuhn’s science, Foucault’s history, and Lyotard’s philosophy, progress is not about slowly stacking up ideas or wealth. Instead, it’s about cycles of buildup, breakdown, and renewal — each shift leaving behind remnants of the old and bringing forth something fundamentally new. This kind of progress isn’t just unpredictable; it’s fueled by disruption, tension, and revolution. These thinkers collectively remind us that the most transformative changes come from breaking with the past, not from adding to it. Progress, in this view, is a story of radical leaps, creative destruction, paradigm shifts, and fresh starts—where each new phase is a bold departure from what came before.

Enterprise Architecture for Digital Transformation

Lapalme discussed “Three Schools of Thought on Enterprise Architecture” in IT Professional in 2012. Korhonen and Halén (2017) explored this further in “Enterprise Architecture for Digital Transformation”.

Schools of Thought on EA:

  • The Enterprise IT Architecting (EITA) school views enterprise architecture as “the glue between business and IT”. Focusing on enterprise IT assets, it aims at business-IT alignment, operational efficiency and IT cost reduction. It is based on the tenet that IT planning is a rational, deterministic and economic process. EA is perceived as the practice for planning and designing the architecture.
  • The Enterprise Integrating (EI) school views enterprise architecture as the link between strategy and execution. EA addresses all facets of the enterprise in order to coherently execute the strategy. The environment is seen both as a generator of forces that the enterprise is subject to and as something that can be managed. EA is utilized to enhance understanding and collaboration throughout the business.
  • The Enterprise Ecological Adaptation (EEA) school views EA as the means for organizational innovation and sustainability. The enterprise and its environment are seen as coevolving: the enterprise and its relationship to the environment can be systemically designed so that the organization is “conducive to ecological learning, environmental influencing and coherent strategy execution.” EA fosters sense making and facilitates transformation in the organization.

Levels of Enterprise Architecture

  • Technical Architecture (AT) has an operational focus on reliability and present day asset utilization and is geared to present-day value realization. This is the realm of traditional IT architecture, information systems design and development, enterprise integration and solution architecture work. AT also addresses architectural work practices and quality standards, e.g. architectural support of implementation projects, development guidelines, and change management practices. In terms of organizational structure, AT would pertain to the technical level of organization, where the products are produced or services are provided.
  • Socio-Technical Architecture (AS) plays an important role as the link between strategy and execution. The business strategy is translated to a coherent design of work and the organization so that enterprise strategy may be executed utilizing all its facets, including IT. AS is about creating enterprise flexibility and capability to change rather than operational optimization: the focus on reliability is balanced with focus on validity in anticipation of changes, whose exact nature cannot be accurately predicted. AS would pertain to the managerial level of organization, where the business strategy is translated to the design of the organization.
  • Ecosystemic Architecture (AE) is an embedded capability that not only addresses the initial design and building of a robust system but also the successive designs and continual renewal of a resilient system. The architecture must allow for co-evolution with its business ecosystem, industry, markets, and the larger society. AE would pertain to the institutional level of organization, where the organization relates to its business ecosystem, industry, markets, and the larger society.

Adaptation and Maladaptation

Source: Korhonen JJ, Halén M (2017), Enterprise Architecture for Digital Transformation, IEEE 19th Conference on Business Informatics. DOI: 10.1109/CBI.2017.45

Ecological Tool for Market Ecosystem

Scholl, Calinescu, and Farmer (2021) illustrated how ecological tools can be used to analyse financial markets. Studying markets as complex ecosystems rather than perfectly efficient machines can help regulators guard against damaging market volatility. They also show that changes to the wealth invested via different strategies within a market ecology can help predict market malfunctions like mispricings, bubbles, and crashes.

They model different investor strategies – including non-professional investors, trend followers, and value investors – as different players within a market ecology. They find that:

  1. Just as the status and health of biological ecosystems depend on the species present and their populations, the status and health of market ecosystems depend on market strategies and the wealth invested in them.
  2. Understanding the impact of, and interactions between, different investor species can help predict market malfunctions, just as understanding the impact and interactions of different biological species can help predict ecosystem instability or collapse.
  3. Similar to how animal populations within ecosystems can fluctuate indefinitely, market prices can stray very far from equilibrium and can also fluctuate indefinitely.
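
To make the ecological analogy concrete, below is a deliberately simple toy model of my own construction (not the authors' simulation): a single asset whose price is pulled toward a fixed fundamental value by value investors, pushed along recent moves by trend followers, and jostled by noise traders. Shifting wealth between the two main "species" changes how far prices stray from the fundamental, in the spirit of findings 1 to 3 above.

```python
# Toy market ecology: value investors vs. trend followers.
# A hypothetical sketch, not the model from Scholl, Calinescu and Farmer (2021).
import numpy as np

def simulate(w_value, w_trend, fundamental=100.0, steps=500, seed=1):
    """Return a toy price path driven by two investor species plus noise traders."""
    rng = np.random.default_rng(seed)
    prices = [fundamental, fundamental]
    for _ in range(steps):
        p = prices[-1]
        value_demand = w_value * (fundamental - p) / fundamental         # pull toward fundamental value
        trend_demand = w_trend * (prices[-1] - prices[-2]) / prices[-2]  # extrapolate the last move
        noise = rng.normal(scale=0.002)                                  # noise traders
        prices.append(p * (1.0 + value_demand + trend_demand + noise))
    return np.array(prices)

calm = simulate(w_value=0.5, w_trend=0.1)   # wealth concentrated in value investing
wild = simulate(w_value=0.1, w_trend=0.9)   # wealth concentrated in trend following
print("return volatility, value-dominated:", np.std(np.diff(np.log(calm))))
print("return volatility, trend-dominated:", np.std(np.diff(np.log(wild))))
```

Even in this caricature, the mix of strategies, not any single strategy in isolation, determines whether prices hug the fundamental or wander far from it.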

Reference:

  • Scholl MP, Calinescu A, Farmer JD (2021), How Market Ecology Explains Market Malfunction, Proceedings of the National Academy of Sciences, 118 (26), e2015574118. DOI: 10.1073/pnas.2015574118

Complexity Economics

Arthur WB (2021) wrote a paper comparing conventional vs complexity economics.

Conventional neoclassical economics assumes:

  • Perfect rationality. It assumes agents each solve a well-defined problem using perfectly rational logic to optimize their behaviour.
  • Representative agents. It assumes, typically, that agents are the same as each other — they are ‘representative’ — and fall into one or a small number (or distribution) of representative types.
  • Common knowledge. It assumes all agents have exact knowledge of these agent types, that other agents are perfectly rational and that they too share this common knowledge.
  • Equilibrium. It assumes that the aggregate outcome is consistent with agent behaviour, giving agents no incentive to change their actions.

But over the past 120 years, economists such as Thorstein Veblen, Joseph Schumpeter, Friedrich Hayek, and Joan Robinson have objected to the equilibrium framework, each for their own reasons. All thought a different economics was needed.

It was with this background in 1987 that the Santa Fe Institute convened a conference to bring together ten economic theorists and ten physical theorists to explore the economy as an evolving complex system.

Complexity economics sees the economy as not necessarily in equilibrium, its decision makers (or agents) as not superrational, the problems they face as not necessarily well-defined and the economy not as a perfectly humming machine but as an ever-changing ecology of beliefs, organizing principles and behaviours.

Complexity economics assumes that agents differ, that they have imperfect information about other agents and must, therefore, try to make sense of the situation they face. Agents explore, react and constantly change their actions and strategies in response to the outcome they mutually create. The resulting outcome may not be in equilibrium and may display patterns and emergent phenomena not visible to equilibrium analysis. The economy becomes something not given and existing but constantly forming from a developing set of actions, strategies and beliefs — something not mechanistic, static, timeless and perfect but organic, always creating itself, alive and full of messy vitality.
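
Arthur's own El Farol bar problem is the classic miniature of this worldview: agents with different, imperfect predictors co-create the very outcome they are trying to forecast, and no equilibrium is imposed from outside. The sketch below is a minimal, hypothetical implementation of that setup; the particular predictors and parameters are my own choices for illustration.

```python
# Minimal El Farol style simulation: 100 agents, a bar that is pleasant
# only if fewer than 60 attend, and a handful of competing predictors.
# A hedged sketch of the idea, not a reproduction of Arthur's original code.
import random

random.seed(0)
N, CAPACITY, WEEKS = 100, 60, 200
history = [random.randint(0, N) for _ in range(5)]   # seed attendance history

# A predictor maps recent attendance history to a forecast of next week's attendance.
PREDICTORS = [
    lambda h: h[-1],               # same as last week
    lambda h: sum(h[-3:]) / 3,     # three-week average
    lambda h: 2 * h[-1] - h[-2],   # linear extrapolation of the trend
    lambda h: N - h[-1],           # mirror image of last week
]

# Each agent holds a personal pair of predictors and tracks how badly each has done.
agents = [random.sample(range(len(PREDICTORS)), 2) for _ in range(N)]
errors = [[0.0] * len(PREDICTORS) for _ in range(N)]

for week in range(WEEKS):
    attendance = 0
    for a in range(N):
        best = min(agents[a], key=lambda i: errors[a][i])   # currently most accurate predictor
        if PREDICTORS[best](history) < CAPACITY:             # go only if the bar looks uncrowded
            attendance += 1
    history.append(attendance)
    for a in range(N):                                       # score every predictor against reality
        for i in agents[a]:
            errors[a][i] += abs(PREDICTORS[i](history[:-1]) - attendance)

print("mean attendance over the last 50 weeks:", sum(history[-50:]) / 50)
```

In runs of this toy version, attendance tends to settle near the capacity without any agent knowing the "right" model, an outcome that emerges from the ecology of strategies rather than from rational optimisation.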

Difference between Neoclassical and Complexity Economics

In a complex system, the actions taken by a player are channelled via a network of connections. Within the economy, networks arise in many ways, such as trading, information transmission, social influence or lending and borrowing. Several aspects of networks are interesting: how their structure of interaction or topology makes a difference; how markets self-organize within them; how risk is transmitted; how events propagate; how they influence power structures.

The topology of a network matters as to whether connectedness enhances its stability or not. Its density of connections matters, too. When a transmissible event happens somewhere in a sparsely connected network, the change will fairly soon die out for lack of onward transmission; if it happens in a densely connected network, the event will spread and continue to spread for long periods. So, if a network were to slowly increase in its degree of connection, the system would go from few, if any, consequences to many, even to consequences that do not die out. It would undergo a phase change. This property is a familiar hallmark of complexity.
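
A small simulation makes that phase change tangible. The sketch below is my own illustration of the general point (it is not code from Arthur's paper): it spreads a transmissible event from one node across random networks of increasing average degree, and around an average degree of one the fraction of the network reached jumps sharply instead of dying out.

```python
# Percolation-style illustration of the phase change in network connectivity.
# A hypothetical sketch of the general phenomenon, with arbitrary parameters.
import random

def spread_fraction(n, avg_degree, trials=10, seed=0):
    """Average fraction of a random (Erdos-Renyi) network reached from one start node."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)            # edge probability giving the desired mean degree
    total = 0.0
    for _ in range(trials):
        # build a random network
        neighbours = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    neighbours[i].append(j)
                    neighbours[j].append(i)
        # propagate the event to everything reachable from a random start node
        start = rng.randrange(n)
        seen, frontier = {start}, [start]
        while frontier:
            node = frontier.pop()
            for nb in neighbours[node]:
                if nb not in seen:
                    seen.add(nb)
                    frontier.append(nb)
        total += len(seen) / n
    return total / trials

for k in (0.5, 0.8, 1.0, 1.5, 2.0, 4.0):
    print(f"average degree {k}: event reaches {spread_fraction(400, k):.0%} of the network")
```

Below an average degree of about one the event fizzles out locally; above it, a giant connected cluster appears and the event keeps propagating, which is exactly the phase change described above.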

Reference:

  • Arthur WB (2021), Foundations of Complexity Economics, Nature Reviews Physics, 3, 136–145.
