Provides economists with practical instructions for building autonomous LLM-based agents that plan, use tools, and execute multi-step research tasks.
AI agents are autonomous large language model (LLM)-based systems capable of planning, using tools, and executing multi-step research tasks, marking a shift from simple chatbots to sophisticated research assistants in economics. These agents can autonomously conduct literature reviews across numerous sources, write and debug econometric code, fetch and analyze economic data, and coordinate complex research workflows, significantly enhancing research efficiency.
A key development is "vibe coding," or programming through natural language, which allows economists—even those without formal coding expertise—to build sophisticated AI research assistants using frameworks like LangGraph. LangGraph enables users to design agent workflows as graphs of interconnected LLM-powered experts (e.g., data collector, analyst, writer), supporting customizable, multi-agent systems with built-in memory and quality control. For instance, an economist can describe tasks such as “download data from FRED,” “run regression,” and “write results summary,” and the agent autonomously determines the sequence of actions based on intermediate outputs.
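The graph-of-experts pattern described above can be sketched without the LangGraph dependency. Below is a minimal stand-in, assuming each node is a plain function that reads and updates a shared state dict; the node names (`collect_data`, `run_regression`, `write_summary`) and the `fake_llm` stub are illustrative, not code from the paper, and a real LangGraph build would use its `StateGraph` API and a live LLM client instead.

```python
# Minimal sketch of a LangGraph-style workflow: nodes are functions over a
# shared state dict, wired into a fixed sequence. All names illustrative.

def fake_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return f"[LLM response to: {prompt[:40]}...]"

def collect_data(state: dict) -> dict:
    # A real node would call a data API such as FRED here.
    state["data"] = [1.2, 1.4, 1.1, 1.5]
    return state

def run_regression(state: dict) -> dict:
    # Placeholder "analysis": just a mean; a real node would fit a model.
    state["estimate"] = sum(state["data"]) / len(state["data"])
    return state

def write_summary(state: dict) -> dict:
    state["summary"] = fake_llm(f"Summarize estimate {state['estimate']:.2f}")
    return state

# The "graph": an ordered pipeline of expert nodes.
WORKFLOW = [collect_data, run_regression, write_summary]

def invoke(state: dict) -> dict:
    for node in WORKFLOW:
        state = node(state)
    return state

result = invoke({"question": "How has the series trended?"})
print(round(result["estimate"], 2))  # 1.3
```

The agent-like behavior in a real system comes from letting the LLM, rather than a fixed list, decide which node runs next based on intermediate outputs.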
The integration of AI agents into economic research represents a structural transformation, enabling automated hypothesis testing, dynamic model selection, and real-time debugging. Specialized agents can be assigned distinct roles—such as Ideator for generating research questions, DataCleaner for handling time-series analysis, or Proofreader for ensuring manuscript quality—facilitating a division of labor that mirrors the full research lifecycle. Inter-agent communication follows structured chains of thought, ensuring coherence across tasks like model specification, empirical estimation, and result interpretation.
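The division of labor among role-specialized agents can be sketched as a pipeline passing structured messages. The role names below mirror those in the text (Ideator, DataCleaner, Proofreader), but the message schema and `handle` method are assumptions for illustration; in practice each role would wrap an LLM call with a role-specific system prompt.

```python
from dataclasses import dataclass

# Hypothetical sketch of role-specialized agents exchanging structured
# messages. Role names follow the text; the schema is illustrative.

@dataclass
class Message:
    sender: str
    task: str
    content: str

@dataclass
class Agent:
    role: str

    def handle(self, msg: Message) -> Message:
        # A real agent would invoke an LLM with a role-specific prompt;
        # here each role simply annotates the content it receives.
        return Message(self.role, msg.task, f"{msg.content} -> {self.role} done")

pipeline = [Agent("Ideator"), Agent("DataCleaner"), Agent("Proofreader")]
msg = Message("User", "draft-paper", "research question")
for agent in pipeline:
    msg = agent.handle(msg)
print(msg.content)
```

Keeping sender, task, and content in a fixed schema is one simple way to get the "structured chains of thought" coherence the text describes: every downstream agent sees the same fields regardless of which role produced them.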
Human-in-the-loop (HITL) oversight remains critical to ensure methodological validity, ethical compliance, and domain accuracy, particularly when agents risk model misspecification or logical errors. As AI systems increasingly exhibit autonomy, the skill set required of economists is shifting from traditional programming toward prompt engineering and conceptual modeling, signaling a broader "linguistic turn in programming" akin to the advent of early statistical software.
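A human-in-the-loop checkpoint can be as simple as a gate that requires approval before a proposed action executes. The sketch below is a minimal illustration under that assumption; the `approve` callback and the keyword-based policy are invented for the example (an interactive version would use `input()` or a review UI rather than an automatic rule).

```python
# Sketch of a HITL gate: the agent proposes an action and an approval
# callback must accept it before execution. Callback names are illustrative.

def run_with_oversight(proposed_action: str, execute, approve) -> str:
    if not approve(proposed_action):
        return f"BLOCKED: {proposed_action}"
    return execute(proposed_action)

# Example policy: block anything that would destroy data.
auto_reviewer = lambda action: "delete" not in action.lower()
executor = lambda action: f"ran: {action}"

print(run_with_oversight("estimate IV regression", executor, auto_reviewer))
print(run_with_oversight("DELETE raw dataset", executor, auto_reviewer))
```

The point of the pattern is that oversight lives outside the agent loop: the agent can plan freely, but side effects only happen after the gate.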
These advancements are documented in Anton Korinek’s 2025 NBER working paper *AI Agents for Economic Research*, which provides hands-on instructions, working code examples, and conceptual frameworks to help economists deploy AI agents at every stage of research—from initial investigation to final analysis. The paper emphasizes that modern agentic frameworks allow researchers to build functional AI tools in minutes, democratizing access to advanced computational methods.
This NBER working paper serves as a comprehensive technical guide for economists seeking to leverage Large Language Models (LLMs) not merely as chat interfaces, but as autonomous research agents capable of complex reasoning and execution. It details the architecture of agentic systems that can plan multi-step workflows, utilize external tools (such as code interpreters, web browsers, and APIs), and self-correct to achieve specific research objectives. The author provides practical frameworks for implementing these systems, moving from basic prompt engineering to advanced agentic loops, thereby demonstrating how to automate labor-intensive aspects of the research pipeline, including data collection, cleaning, and empirical analysis.
A key contribution of this work is its translation of generic AI agent concepts—such as Chain-of-Thought prompting, ReAct (Reasoning + Acting) patterns, and memory management—into the specific context of economic research. The paper offers concrete examples of how to configure agents to perform tasks like synthesizing literature, writing and debugging Python code for statistical modeling, and simulating economic behaviors. By treating the LLM as a central processing unit that orchestrates these tools, the author illustrates how researchers can build robust systems that handle open-ended problems with minimal human intervention, significantly reducing the friction between hypothesis formulation and empirical validation.
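The ReAct pattern mentioned above alternates model reasoning with tool calls whose observations feed back into the transcript. The sketch below illustrates that loop with a deterministic stub in place of the LLM and a toy `lookup_gdp` tool; the tool, the transcript format, and the stub's behavior are all assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of a ReAct-style loop: the "model" (stubbed here) emits
# either an action to run against a tool or a final answer; observations
# are appended to the transcript and fed back. Tool names illustrative.

def lookup_gdp(country: str) -> str:
    # Stub tool; a real agent would query a data API here.
    return {"US": "27.7 trillion USD"}.get(country, "unknown")

TOOLS = {"lookup_gdp": lookup_gdp}

def stub_model(transcript: list[str]) -> str:
    # Deterministic stand-in for an LLM: act once, then answer.
    if not any(line.startswith("Observation:") for line in transcript):
        return "Action: lookup_gdp[US]"
    return "Final: US GDP is roughly 27.7 trillion USD"

def react(question: str, max_steps: int = 5) -> str:
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        step = stub_model(transcript)
        transcript.append(step)
        if step.startswith("Final:"):
            return step.removeprefix("Final: ").strip()
        # Parse "Action: tool[arg]" and run the tool.
        tool, arg = step.removeprefix("Action: ").rstrip("]").split("[")
        transcript.append(f"Observation: {TOOLS[tool](arg)}")
    return "no answer"

answer = react("What is US GDP?")
print(answer)
```

The `max_steps` cap is the usual safeguard in such loops: it bounds how long the agent may reason and act before it must either answer or give up.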
This material is significant because it represents a paradigm shift in computational economics, moving the discipline from manual scripting to the orchestration of "AI scientists." For a technically literate audience, it provides a blueprint for scaling research productivity and tackling complex problems that require iterative reasoning and tool usage. By systematizing the deployment of autonomous agents, the paper highlights a future where AI acts as a co-investigator, potentially accelerating the pace of discovery and allowing economists to focus on higher-level conceptual strategy rather than low-level implementation details.
# AI Agents for Economic Research | NBER: A Technical Summary
This NBER working paper provides a practical guide for economists on constructing autonomous Large Language Model (LLM)-based agents capable of executing complex, multi-step research tasks. The author details how to design agents that can plan, use external tools, and dynamically adjust their workflows to tackle economic research challenges—such as data collection, analysis, and hypothesis testing—with minimal human intervention. The paper emphasizes modular architecture, where agents integrate LLMs with specialized tools (e.g., APIs, databases, computational libraries) to perform tasks like retrieving economic indicators, running regressions, or generating synthetic datasets. A key innovation is the hierarchical planning framework, where agents break down high-level research questions into executable subtasks, validate intermediate results, and iterate when errors occur.
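The decompose-validate-iterate cycle of hierarchical planning can be sketched as follows. Everything here is an illustrative stand-in, not the paper's code: the fixed subtask list (a real planner would ask an LLM for the breakdown), the trivial non-empty-output validator, and a simulated transient failure on the cleaning step to exercise the retry path.

```python
# Sketch of hierarchical planning with validation and retry: a high-level
# question is decomposed into subtasks; each result is validated and the
# subtask re-run (up to max_retries) when the check fails. Illustrative.

def decompose(question: str) -> list[str]:
    # A real planner would ask an LLM for this breakdown.
    return ["fetch series", "clean series", "estimate model"]

ATTEMPTS = {}  # attempts per subtask, to simulate a transient failure

def execute(subtask: str) -> str:
    ATTEMPTS[subtask] = ATTEMPTS.get(subtask, 0) + 1
    if subtask == "clean series" and ATTEMPTS[subtask] == 1:
        return ""  # first attempt "fails" by producing no output
    return f"result of {subtask}"

def validate(result: str) -> bool:
    return bool(result)  # trivial check: output must be non-empty

def run_plan(question: str, max_retries: int = 2) -> dict:
    results = {}
    for sub in decompose(question):
        for _ in range(max_retries + 1):
            out = execute(sub)
            if validate(out):
                results[sub] = out
                break
        else:
            raise RuntimeError(f"subtask failed: {sub}")
    return results

results = run_plan("Does policy X affect unemployment?")
print(results["clean series"])  # succeeds on the second attempt
```

In a full system the validator would be substantive (unit checks on fetched data, diagnostics on fitted models), but the control flow—validate each intermediate result, retry on failure, escalate when retries are exhausted—is the core of the framework the paper describes.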
The paper’s contributions are particularly valuable for automating repetitive research workflows and accelerating exploratory analysis. By formalizing agent design principles—such as tool integration, memory management, and error recovery—the author addresses critical gaps in applying LLMs to economic research. The work also highlights ethical and validation considerations, such as ensuring reproducibility and mitigating hallucinations in agent-generated outputs. For economists and AI researchers, this paper offers a blueprint for developing robust, autonomous research assistants, reducing cognitive load in data-driven inquiry and enabling new forms of large-scale empirical work. The insights are especially relevant in an era where AI-driven automation is reshaping scientific discovery.
Why It Matters:

- **Efficiency:** Automates labor-intensive tasks (e.g., data cleaning, literature reviews), freeing economists for deeper analysis.
- **Scalability:** Enables parallelized research across datasets or methodological variations.
- **Reproducibility:** Structured agent workflows improve transparency in AI-assisted research.
- **Frontier Work:** Positions economists at the intersection of AI and social science, with applications in policy modeling, forecasting, and experimental design.
For technically inclined readers, the paper serves as both a how-to manual and a research agenda, bridging the gap between cutting-edge LLM capabilities and practical economic applications. The full working paper is available [here](https://www.nber.org/papers/w34202).