Large language model (LLM) agents are capable of generating context-based, human-like decisions, which enables them to serve as strong individual simulators in economic experiments. These agents exhibit behavior patterns consistent with classical economic theories, such as downward-sloping demand curves and diminishing marginal utility, and in some cases demonstrate rationality that exceeds that of human participants, particularly in complex tasks like budget allocation. Studies show that LLM agents display bounded rationality rather than strict rational expectations, aligning closely with human decision-making under uncertainty. For instance, in laboratory market experiments, LLMs replicate broad human behavioral trends, including convergence dynamics in positive- and negative-feedback markets, when equipped with a minimal memory of past interactions and high response variability.
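A minimal sketch of such a feedback-market loop makes the setup concrete. Here the LLM call is replaced by a naive forecast over a short price memory plus Gaussian noise standing in for response variability; all parameter values (number of agents, feedback strength, noise level) are illustrative assumptions, not figures from the cited studies:

```python
import random

def forecast(memory, rng, noise_sd=2.0):
    # An agent's price forecast: the average of its short memory of past
    # prices, with Gaussian noise standing in for LLM response variability.
    return sum(memory) / len(memory) + rng.gauss(0.0, noise_sd)

def run_market(feedback, n_agents=20, rounds=50, fundamental=60.0,
               memory_len=2, seed=0):
    rng = random.Random(seed)
    prices = [fundamental + 10.0]  # start away from the fundamental value
    for _ in range(rounds):
        memory = prices[-memory_len:]
        mean_forecast = sum(forecast(memory, rng)
                            for _ in range(n_agents)) / n_agents
        # feedback > 0: price chases forecasts; feedback < 0: price moves
        # against them. |feedback| < 1 lets both markets converge.
        prices.append(fundamental + feedback * (mean_forecast - fundamental))
    return prices
```

Running `run_market(0.95)` and `run_market(-0.95)` reproduces the qualitative contrast from the experimental literature: both price paths approach the fundamental, with the negative-feedback market converging faster and the positive-feedback market drifting in slowly.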
In simulated environments, LLM agents can autonomously adjust labor supply, consumption, and investment decisions based on economic incentives, taxes, and market prices, forming adaptive socio-economic systems. They have been used to simulate tax policy through a Stackelberg game framework, where worker agents optimize labor based on persona-conditioned utility functions, and a planner agent adjusts tax schedules using in-context reinforcement learning, resulting in welfare outcomes comparable to or better than classical models. Additionally, when embedded in spatially structured social networks, LLM agents develop individualized behaviors and social roles through repeated interactions, further enhancing their realism in modeling human societies.
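The Stackelberg structure of such a tax simulation can be sketched as a leader-follower loop. This is not the paper's implementation: the LLM worker is stubbed with the analytic optimum of an assumed quasi-linear utility, the LLM planner's in-context learning is stubbed with a grid search, and the wages and log-consumption welfare criterion are illustrative assumptions:

```python
import math

def labor_choice(wage, tax_rate):
    # Stub for an LLM worker decision: the analytic optimum of the assumed
    # quasi-linear utility u(l) = (1 - tax_rate) * wage * l - 0.5 * l**2.
    return (1.0 - tax_rate) * wage

def welfare(tax_rate, wages):
    # Followers move: each worker picks labor given the announced flat tax.
    labors = [labor_choice(w, tax_rate) for w in wages]
    incomes = [w * l for w, l in zip(wages, labors)]
    transfer = tax_rate * sum(incomes) / len(wages)  # revenue rebated equally
    total = 0.0
    for l, y in zip(labors, incomes):
        consumption = (1.0 - tax_rate) * y + transfer
        total += math.log(consumption) - 0.5 * l ** 2  # utility less disutility
    return total

def plan_tax(wages, rates):
    # Stub for the planner's in-context learning: evaluate candidate flat
    # rates and keep the welfare-maximizing one.
    return max(rates, key=lambda t: welfare(t, wages))
```

With heterogeneous wages such as `[1.0, 2.0, 3.0]` and a concave welfare criterion, the planner settles on an interior tax rate that trades off redistribution against the labor-supply distortion, mirroring the policy experiments the framework is used for.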
These capabilities make LLM-powered agents valuable tools for studying micro-level economic behavior and macro-level emergent phenomena, offering a scalable and cost-effective alternative to traditional human experiments.
This research introduces a framework for utilizing Large Language Models (LLMs) as autonomous agents capable of simulating complex socio-economic behaviors. Rather than relying on static utility functions or rule-based bots, the proposed model leverages the generative capabilities of LLMs to produce context-aware, human-like decisions. The paper details how these agents process environmental cues, maintain memory of past interactions, and weigh social norms, effectively mimicking the cognitive processes of human participants in controlled economic scenarios.
A key contribution of this work is the empirical demonstration that LLM-based agents can replicate the behavioral patterns and biases observed in traditional human-based economic experiments. The study shows that these agents are not merely random generators but exhibit distinct personalities, strategic thinking, and irrationalities that mirror real-world social dynamics. By bridging the gap between artificial intelligence and behavioral economics, the authors provide a scalable method for modeling individual heterogeneity and collective interaction without the logistical constraints of recruiting human subjects.
This research is significant because it establishes a new paradigm for computational social science and economic modeling. It offers researchers a powerful, low-cost sandbox for stress-testing economic theories, policy interventions, and market mechanisms before real-world implementation. For the technical community, this work underscores the potential of generative AI to evolve beyond text generation into complex agentic reasoning, enabling high-fidelity simulations of human systems at scale.
# Summary: Socio-Economic Model of AI Agents
This paper introduces a socio-economic model for AI agents, focusing on how large language model (LLM) agents can generate human-like decisions in economic experiments. The work explores the feasibility of using AI agents to simulate individual economic behavior, particularly in contexts requiring contextual reasoning, social cognition, and strategic interaction. By leveraging LLMs, the model enables agents to process complex economic scenarios, make nuanced decisions, and adapt to dynamic environments, approximating human behavior more closely than traditional rule-based or utility-maximizing models. The paper demonstrates that these AI agents can participate in controlled economic experiments, such as trust games, bargaining, and market interactions, with performance comparable to human subjects in certain cases.
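As a concrete example of one such controlled experiment, a single round of the trust game can be sketched as below. The `llm_decide` stub is a placeholder for an actual LLM call (a real experiment would send the prompt to a model and parse its reply); the clipped-Gaussian choice rule and all payoffs shown are illustrative assumptions:

```python
import random

def llm_decide(prompt, low, high, rng):
    # Stub standing in for an LLM call: a Gaussian draw around the midpoint
    # of the allowed range, clipped to [low, high]. A real experiment would
    # parse the model's textual reply to `prompt` instead.
    choice = rng.gauss((low + high) / 2.0, (high - low) / 6.0)
    return min(high, max(low, choice))

def trust_game(endowment=10.0, multiplier=3.0, seed=0):
    rng = random.Random(seed)
    # Investor chooses how much of the endowment to send; it is multiplied.
    sent = llm_decide("You are the investor. How much do you send?",
                      0.0, endowment, rng)
    pot = sent * multiplier
    # Trustee chooses how much of the multiplied amount to return.
    returned = llm_decide("You are the trustee. How much do you return?",
                          0.0, pot, rng)
    return sent, returned, endowment - sent + returned, pot - returned
```

Repeating such rounds across many agent personas, and comparing the distributions of amounts sent and returned against human baselines, is the kind of behavioral validation the paper reports.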
The key contributions include:

1. Contextual Decision-Making Framework: The model incorporates mechanisms for interpreting social and economic cues, allowing AI agents to simulate human-like bounded rationality.
2. Empirical Validation: The authors present experimental results showing that LLM-based agents can replicate human behavioral patterns in classic economic games, suggesting their utility in behavioral economics research.
3. Scalability & Generalization: Unlike prior models, this approach does not rely on hand-crafted rules but instead learns from natural language interactions, making it adaptable to diverse socio-economic settings.
This research matters because it bridges AI and behavioral economics, offering a new tool for studying economic phenomena at scale. By enabling realistic simulations of human decision-making, the model could advance policy analysis, market modeling, and the design of incentive structures. However, it also raises questions about bias, interpretability, and ethical implications in AI-driven economic simulations—a critical area for future work.