Designs experiments with 50 heterogeneous GPT-5-based LLM agents in a network-effect game to test convergence to fulfilled expectation equilibria under varied conditions.
A study titled "When Machines Meet Each Other: Network Effects and the Strategic Role of History in Multi-Agent AI" designs experiments with 50 heterogeneous GPT-5-based large language model (LLM) agents interacting in a canonical network-effect game, where economic theory predicts convergence to a fulfilled expectation equilibrium (FEE). The experimental framework systematically varies network-effect strengths, price trajectories, and decision-history lengths to examine how these factors influence agent behavior. Results show that LLM agents systematically diverge from FEE: they underestimate participation at low prices and overestimate it at high prices, sustaining persistent dispersion in their forecasts.
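A fulfilled expectation equilibrium can be illustrated as a fixed point: a shared belief about participation that, once acted on, reproduces itself. The sketch below is a toy Katz–Shapiro-style model, not the paper's actual game or agent pool: it assumes a large population of simulated consumers with Uniform(0, 1) standalone valuations who adopt when valuation plus the network benefit exceeds the price, and solves for the self-fulfilling participation rate by damped iteration.

```python
import numpy as np

def fee_fixed_point(price, theta, n=100_000, iters=200, seed=0):
    """Approximate the FEE participation rate by damped fixed-point iteration.

    Toy model (assumed, not from the paper): agent i adopts iff
    v_i + theta * f_expected >= price, with standalone valuations
    v_i ~ Uniform(0, 1). An FEE is a belief f that reproduces itself:
    the realized adoption rate equals the expected one.
    """
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, 1.0, size=n)        # heterogeneous valuations
    f = 0.5                                  # initial shared expectation
    for _ in range(iters):
        realized = np.mean(v + theta * f >= price)
        f = 0.5 * f + 0.5 * realized         # damping avoids oscillation
    return float(f)

# With Uniform(0,1) valuations the interior FEE solves f = 1 - price + theta*f,
# i.e. f = (1 - price) / (1 - theta).
print(fee_fixed_point(0.6, 0.5))  # ~0.8
```

Against this benchmark, the paper's reported pattern — underestimating participation at low prices and overestimating it at high prices — corresponds to agents' beliefs landing systematically off this fixed point.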
Price is identified as the dominant driver of deviation from equilibrium, while the structure of history moderates this effect. Simple monotonic histories—where past outcomes follow a steady upward or downward trend—help stabilize coordination among agents, whereas non-monotonic or randomized histories amplify divergence and path dependence, disrupting convergence. Stronger network effects do not directly cause deviations but amplify distortions induced by price and heterogeneity, increasing the gap between agent expectations and FEE. Regression analyses at the individual level confirm that network effects intensify the impact of price on deviations, and that longer decision histories reduce dispersion, though with diminishing returns.
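The diminishing-returns effect of history length can be sketched with a simple averaging model (an illustrative assumption, not the paper's protocol): if each agent forecasts the next round by averaging the last k outcomes, seen through idiosyncratic observation noise, then cross-agent forecast dispersion falls roughly like 1/sqrt(k) — fast gains from short windows, small gains thereafter. The function name, noise model, and parameters below are all hypothetical.

```python
import numpy as np

def forecast_dispersion(history, window_sizes, n_agents=50, noise=0.05, seed=1):
    """Cross-agent std-dev of forecasts formed by averaging the last k outcomes.

    Each agent observes the shared history through idiosyncratic Gaussian
    noise, so averaging over a longer window cancels more of that noise.
    """
    rng = np.random.default_rng(seed)
    dispersion = {}
    for k in window_sizes:
        window = np.asarray(history[-k:], dtype=float)
        forecasts = [(window + rng.normal(0.0, noise, size=k)).mean()
                     for _ in range(n_agents)]
        dispersion[k] = float(np.std(forecasts))
    return dispersion

# Dispersion shrinks roughly like noise / sqrt(k): compare windows of 1, 5, 20.
print(forecast_dispersion([0.7] * 20, [1, 5, 20]))
```

Under this toy model the move from a 1-round to a 5-round window cuts dispersion by more than the move from 5 to 20 rounds, mirroring the diminishing returns the regressions report.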
The findings indicate that, unlike classical economic agents who treat history as sunk, LLM agents rely on historical information to form expectations, making their strategic behavior inherently history-dependent. This dependence leads to partial convergence under monotonic conditions but persistent miscoordination when trajectories are complex or random. The study provides the first systematic evidence on multi-agent AI systems under network effects and highlights how the curation of historical data can serve as a design lever in configuring such systems.
This research explores the emergent dynamics of multi-agent systems populated by Large Language Models (LLMs) within environments characterized by network effects. The authors design a large-scale simulation involving 50 heterogeneous agents powered by GPT-5, tasking them with playing a network-effect game where the utility of an agent's decision depends on the choices of others. The primary objective is to empirically evaluate whether these advanced AI agents can converge to Fulfilled Expectation Equilibria (FEE)—a game-theoretic state where agents' expectations about others' behavior align with actual outcomes—under a variety of controlled conditions.
The study's key contribution lies in its analysis of how "history" serves as a strategic mechanism for coordination among AI agents. By manipulating the availability and context of interaction history, the paper demonstrates that LLM agents utilize past information to form beliefs about future states, significantly influencing their ability to coordinate and reach stable equilibria. The findings suggest that while GPT-5-based agents possess the capacity for complex strategic reasoning, their convergence is sensitive to the depth and structure of historical context provided, revealing path dependencies in how AI populations settle on standards or technologies.
This work is critical for the future of autonomous systems, particularly as AI agents are increasingly deployed in economic and social domains where network effects are dominant, such as decentralized finance, automated trading, and digital platform governance. By bridging the gap between traditional game theory and modern generative AI, the paper provides a framework for predicting whether multi-agent systems will stabilize into efficient outcomes or succumb to volatility and coordination failures. It highlights the necessity of designing agent architectures that effectively manage memory and expectations to ensure robust performance in complex, interconnected environments.
This paper investigates the emergent strategic behavior of large language model (LLM) agents in a network-effect game, where 50 heterogeneous GPT-5-based agents interact under varying conditions to test convergence to fulfilled expectation equilibria (FEEs). The study designs experiments to explore how network-effect strength, historical context, and agent heterogeneity influence collective outcomes. By simulating repeated interactions, the authors observe whether agents' beliefs about others' strategies stabilize into equilibria where expectations align with actual behavior. The work extends theoretical models of strategic interaction to the domain of multi-agent LLMs, where agents must infer and adapt to others' policies dynamically.
The key contributions include an empirical assessment of whether LLM-based agents converge to equilibrium, the role of strategic history (e.g., past interactions) in shaping beliefs, and the impact of network-effect strength on coordination. The paper highlights how even sophisticated LLMs may struggle with common-knowledge assumptions or path-dependent strategies, suggesting that real-world deployments of multi-agent AI systems could exhibit unpredictable dynamics. The findings matter for AI alignment, decentralized AI systems, and game-theoretic modeling of autonomous agents, as they reveal how machine learning agents may or may not replicate human-like strategic reasoning. The work bridges economic game theory with empirical LLM research, offering insights for designing more robust multi-agent AI frameworks.