Experimental framework tests LLM agents in network-effect games for economic equilibrium predictions.

Topological visualization of [2510.06903] When Machines Meet Each Other: Network Effects and the Strategic Role of History in Multi-Agent AI
Brave API

Large language model (LLM) agents were tested in a canonical network-effect game to evaluate their alignment with economic equilibrium predictions, specifically the fulfilled expectation equilibrium (FEE), in which participation matches common expectations. The experimental framework involved 50 heterogeneous GPT-5-based agents interacting repeatedly under systematically varied conditions, including network-effect strengths ($$\beta \in \{0.25, 0.75\}$$), price trajectories, and decision-history lengths. Each agent was assigned a unique standalone value $$\theta_i \in \{0, \dots, 49\}$$, mirroring the heterogeneity of classical economic models.
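Under a standard linear specification of this setup (an assumption here; the paper's exact utility function is not reproduced above), agent $$i$$ joins when its payoff $$\theta_i + \beta n^e - p \ge 0$$, and an FEE is a fixed point at which expected participation $$n^e$$ equals realized participation. A minimal Python sketch of that fixed-point computation:

```python
# Hedged sketch: assumes a linear network-effects utility
# u_i = theta_i + beta * n_e - price, with an agent joining when u_i >= 0.
# The FEE is a fixed point where expected participation equals the outcome.

def participation(n_e: float, price: float, beta: float, thetas: list[int]) -> int:
    """Number of agents who join, given expected participation n_e."""
    return sum(1 for theta in thetas if theta + beta * n_e - price >= 0)

def find_fee(price: float, beta: float, thetas: list[int], max_iter: int = 1000) -> int:
    """Iterate the best-response map until expectations are fulfilled."""
    n_e = 0
    for _ in range(max_iter):
        n = participation(n_e, price, beta, thetas)
        if n == n_e:            # expectations match the outcome: an FEE
            return n
        n_e = n
    return n_e                  # guard against non-convergent cycles

thetas = list(range(50))        # standalone values theta_i in {0, ..., 49}
low = find_fee(price=10, beta=0.75, thetas=thetas)
high = find_fee(price=40, beta=0.75, thetas=thetas)
```

Under these assumed parameters, the map converges to full participation at the low price and to an interior fixed point at the high price, illustrating how price and network strength jointly pin down the FEE that the agents are benchmarked against.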

Results showed that LLM agents systematically diverge from FEE: they underestimate participation at low prices and overestimate it at high prices, maintaining persistent forecast dispersion. Price was identified as the dominant driver of deviation, while network effects amplified contextual distortions rather than acting as an independent force. Although stronger network effects did not directly shift expectations, they intensified deviations caused by price and agent heterogeneity, leading to increased optimism at high prices and pessimism at low prices.

The structure of historical information played a critical moderating role. In static settings without history, agents failed to converge to FEE. In dynamic settings, access to history influenced coordination: monotonic histories, where past outcomes followed steady upward or downward trends, helped stabilize expectations and reduce dispersion, whereas non-monotonic histories amplified divergence and path dependence. Regression analyses confirmed that history moderates sensitivity to price extremes, but convergence to equilibrium remains elusive under complex trajectories.
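The monotonic/non-monotonic distinction above can be sketched concretely. The paper's formal definitions are not given here, so this assumes "monotonic" means an entirely non-decreasing or non-increasing participation series, and takes forecast dispersion as the standard deviation of agents' forecasts:

```python
# Hedged sketch of the assumed history taxonomy and dispersion measure.
import statistics

def is_monotonic(history: list[int]) -> bool:
    """True if the participation series trends steadily up or steadily down."""
    pairs = list(zip(history, history[1:]))
    return all(a <= b for a, b in pairs) or all(a >= b for a, b in pairs)

def forecast_dispersion(forecasts: list[float]) -> float:
    """Population standard deviation of agents' participation forecasts."""
    return statistics.pstdev(forecasts)

print(is_monotonic([10, 14, 19, 25]))   # steady upward trend
print(is_monotonic([10, 25, 12, 30]))   # non-monotonic: amplifies divergence
```

On this reading, the finding is that histories passing the `is_monotonic` check shrink `forecast_dispersion` over rounds, while those failing it keep dispersion high and induce path dependence.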

These findings provide the first systematic evidence on multi-agent AI systems under network effects, highlighting that LLM agents' strategic behavior is contingent, history-dependent, and shaped by architectural constraints. Unlike classical economic agents, which treat history as sunk, LLM agents rely on past data to form forecasts, making the curation and structure of history a key design lever for AI systems.

Generated Mar 7, 2026
Cerebras Thinking

This paper introduces a novel experimental framework designed to evaluate Large Language Model (LLM) agents within the context of network-effect games, specifically examining their capacity to predict and achieve economic equilibria. The authors construct simulated economic environments where the utility of an agent's choice is dependent on the choices of others, forcing agents to navigate strategic coordination problems and positive feedback loops. By treating LLMs as rational economic actors, the study rigorously tests whether these models can converge to Nash equilibria or other stable market states, comparing AI performance against theoretical benchmarks and, in some cases, human behavioral data.

A key insight of the research is the "strategic role of history," demonstrating how LLM agents effectively leverage past interaction data to inform future decision-making. Unlike traditional game-theoretic agents that might rely solely on static payoff matrices, the LLM agents exhibit a sophisticated ability to reason about the trajectory of the network, using historical context to break symmetry and coordinate on superior equilibria. The study suggests that the in-context learning capabilities of transformers allow these agents to simulate complex economic reasoning, anticipating how current actions influence the future behavior of the network, which is a critical component of strategic interaction in markets with network externalities.

This work is significant as it bridges the gap between artificial intelligence and computational economics, providing a scalable method for stress-testing economic theories with AI-driven participants. As autonomous agents begin to populate digital markets and decentralized platforms, understanding their propensity to generate or amplify network effects is vital for ensuring system stability and efficiency. The findings imply that LLMs could serve as powerful simulators for economic policy testing and mechanism design, while also highlighting the potential risks of emergent herd behavior or instability when AI agents interact at scale.

Generated Mar 11, 2026
Open-Weights Reasoning

This paper, titled When Machines Meet Each Other: Network Effects and the Strategic Role of History in Multi-Agent AI (arXiv:2510.06903), investigates the behavior of large language model (LLM) agents in network-effect games, where outcomes depend on the actions and interactions of multiple agents. The authors design an experimental framework to study how LLMs—when deployed as autonomous agents—navigate strategic environments with externalities, such as coordination games, network formation, and market entry. By analyzing the agents' decision-making processes under different conditions (e.g., with or without memory of past interactions), the study explores how history and context influence their convergence to economic equilibria. The work also examines the role of network effects, where the utility or payoff of an agent depends on the choices of others, mirroring real-world scenarios like platform adoption or social media growth.

The key contributions of this research include empirical evidence on how LLMs perform in multi-agent settings with strategic interdependence, highlighting both their strengths (e.g., adaptability to dynamic environments) and limitations (e.g., sensitivity to framing or lack of long-term planning). The paper demonstrates that agents' ability to reason about history and network structures significantly impacts their strategic outcomes, suggesting that future AI systems may need explicit mechanisms to handle temporal and relational dependencies. Why this matters: As AI agents become more prevalent in economic and social systems, understanding their behavior in networked, strategic contexts is critical for designing robust applications. The findings also raise questions about the interpretability and predictability of LLM-driven decision-making, with implications for fields like automated negotiation, decentralized systems, and policy design for AI-driven markets.

Generated Mar 11, 2026