Survey of agentic AI in financial services including model risk management and compliance.


Agentic AI systems are increasingly being adopted in financial services for their ability to perform complex, multi-step tasks autonomously, with significant applications in model risk management (MRM) and regulatory compliance. Financial institutions, including major players like NASDAQ, are leveraging agentic AI to improve efficiency, accuracy, and scalability across operations such as claims adjudication, loan processing, and financial research. A recent Moody’s study found that 70% of surveyed institutions prioritize AI for risk and compliance, while 66% use it to accelerate analysis and 64% to reduce costs.

In model risk management, agentic systems are structured as collaborative "crews" consisting of specialized agents supervised by a manager or judge agent. The modeling crew handles tasks such as exploratory data analysis, feature engineering, model training, and documentation, while the MRM crew ensures compliance, replicates models, evaluates conceptual soundness, and analyzes outcomes under extreme scenarios. These MRM agents use tools like retrieval-augmented generation (RAG) to validate modeling procedures against organizational guidelines and ensure adherence to documentation standards. This dual-crew framework has been tested on real-world financial problems including credit card fraud detection, credit approval, and portfolio risk modeling, demonstrating robustness and effectiveness.
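To make the dual-crew control flow concrete, here is a minimal, purely illustrative Python sketch (the class, agent, and task names are hypothetical, not taken from the paper): a modeling crew produces an artifact, an MRM crew runs its checks on the same artifact, and a judge agent approves only when every required check appears in the audit log.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    task: str

    def run(self, artifact: dict) -> dict:
        # A real agent would call an LLM or a tool here; this stub just
        # records which task touched the artifact, to expose the flow.
        artifact.setdefault("log", []).append(f"{self.name}:{self.task}")
        return artifact

@dataclass
class Crew:
    agents: list = field(default_factory=list)

    def run(self, artifact: dict) -> dict:
        for agent in self.agents:  # agents run in sequence
            artifact = agent.run(artifact)
        return artifact

def judge(artifact: dict, required: set) -> bool:
    """Manager/judge agent: approve only if all MRM checks were run."""
    done = {entry.split(":", 1)[1] for entry in artifact.get("log", [])}
    return required <= done

modeling_crew = Crew([Agent("eda", "exploratory_analysis"),
                      Agent("fe", "feature_engineering"),
                      Agent("trainer", "model_training"),
                      Agent("writer", "documentation")])
mrm_crew = Crew([Agent("validator", "replication"),
                 Agent("reviewer", "conceptual_soundness"),
                 Agent("stress", "extreme_scenarios")])

artifact = mrm_crew.run(modeling_crew.run({"model": "fraud_detector"}))
approved = judge(artifact, {"replication", "conceptual_soundness",
                            "extreme_scenarios"})
```

The same skeleton extends naturally: a RAG lookup against organizational guidelines would slot into `Agent.run` as one more tool call.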

Compliance and auditing automation are also key use cases. Agentic AI supports know-your-customer (KYC), anti-money laundering (AML), and sanctions screening by processing vast amounts of transaction data more quickly and accurately than manual methods. One dual-agent system improved accuracy to nearly 100% by having one agent extract data from documents and another verify it across sources such as loan applications and bank statements, although this increased computational cost. Despite these benefits, challenges remain: a 2025 Infosys study cited in a 2026 research roundup found that only 2% of companies had adequate AI guardrails, a gap associated with AI incidents such as privacy violations and systemic failures at 95% of organizations.
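The extract-then-verify pattern can be sketched in a few lines (a hypothetical toy, not the cited system): one agent pulls fields out of a document, and a second agent accepts them only when every independent source agrees on every field.

```python
def extract_agent(document: dict) -> dict:
    """First agent: pull the fields of interest from a raw document."""
    return {"name": document.get("name"), "income": document.get("income")}

def verify_agent(extracted: dict, sources: list) -> bool:
    """Second agent: accept only if every source confirms every field."""
    return all(all(src.get(field) == value
                   for field, value in extracted.items())
               for src in sources)

loan_application = {"name": "A. Jones", "income": 52000, "notes": "..."}
bank_statement = {"name": "A. Jones", "income": 52000}

fields = extract_agent(loan_application)
ok = verify_agent(fields, [bank_statement])  # sources agree
bad = verify_agent(fields, [{"name": "A. Jones", "income": 48000}])
```

The second pass is where the extra computational cost mentioned above comes from: every extracted field is re-checked against every available source.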

Regulatory and governance frameworks must evolve to address the dynamic nature of agentic AI. Traditional model risk management assumes static algorithms, but agentic systems continuously learn and exhibit emergent behaviors, necessitating adaptive governance. A proposed "agentic regulator" framework uses a layered approach with self-regulation modules, firm-level governance, regulator-hosted monitoring agents, and independent audit blocks to detect collusive or destabilizing patterns in real time. Similarly, Deloitte emphasizes the need for built-in compliance guardrails, automated risk assessments, and continuous monitoring, supported by close collaboration between compliance and AI development teams.
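A layered gatekeeping pipeline of this kind can be sketched as a stack of predicates (the layer names follow the proposal above; the rules themselves are invented for illustration): an action executes only if every layer, including the regulator-hosted monitor, signs off.

```python
def self_regulation(action: dict) -> bool:
    # Agent-level guardrail: refuse orders above a hard notional cap.
    return action["notional"] <= 1_000_000

def firm_governance(action: dict) -> bool:
    # Firm-level rule: only approved desks may act autonomously.
    return action["desk"] in {"rates", "fx"}

def regulator_monitor(history: list) -> bool:
    # Regulator-hosted check: once there is enough history, flag a run
    # of identical prices as a crude proxy for collusive coordination.
    if len(history) < 5:
        return True
    return len({a["price"] for a in history[-5:]}) > 1

def allowed(action: dict, history: list) -> bool:
    layers = [self_regulation, firm_governance]
    return (all(layer(action) for layer in layers)
            and regulator_monitor(history + [action]))

trade = {"notional": 250_000, "desk": "fx", "price": 101.2}
oversized = {"notional": 5_000_000, "desk": "fx", "price": 101.2}

ok = allowed(trade, [])           # passes every layer
blocked = allowed(oversized, [])  # stopped by the self-regulation layer
```

An independent audit block would replay the full action history against these same predicates offline, which is why each layer is written as a pure function of its inputs.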

Transparency and accountability are critical. Agentic AI systems must provide auditable decision trails and allow human override when necessary. Regulatory bodies are expected to demand higher interpretability, requiring institutions to document AI decision-making processes without sacrificing operational efficiency. As of 2025, 78% of financial organizations expressed distrust in agentic AI, and 69% of AI projects failed to reach production, highlighting ongoing concerns around security, data governance, and workforce readiness. To mitigate these risks, experts recommend strategic implementation grounded in strong data governance, cloud-native infrastructure, and enterprise-wide AI governance models.
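An auditable decision trail with a human-override gate might look like the following hedged sketch (the decorator, threshold, and log format are assumptions for illustration): every decision is appended to a log, and decisions above a risk threshold are escalated to a human instead of auto-executing.

```python
import json
import time

AUDIT_LOG = []  # append-only decision trail; a real system would persist it

def audited(decision_fn, risk_threshold=0.8):
    """Wrap an agent decision so every call is logged, and high-risk
    decisions are routed to a human rather than executed."""
    def wrapper(case):
        decision = decision_fn(case)
        record = {"ts": time.time(), "case": case, "decision": decision}
        AUDIT_LOG.append(json.dumps(record, sort_keys=True))
        if decision["risk"] > risk_threshold:
            return {"action": "escalate_to_human", "original": decision}
        return decision
    return wrapper

@audited
def approve_loan(case):
    # Toy policy: large loans carry high model risk.
    risk = 0.9 if case["amount"] > 100_000 else 0.2
    return {"action": "approve", "risk": risk}

low = approve_loan({"amount": 50_000})    # auto-approved, logged
high = approve_loan({"amount": 500_000})  # escalated for human override
```

Serializing each record with `sort_keys=True` keeps the trail byte-stable, which makes it easier to hash or diff during an audit.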

Generated Feb 22, 2026
Cerebras Thinking

This paper provides a comprehensive survey of the emerging landscape of Agentic AI within the financial services sector, moving beyond standard predictive models to autonomous systems capable of planning, tool use, and multi-step reasoning. It delineates the architectural patterns—such as ReAct, AutoGPT, and multi-agent frameworks—that are being adapted for high-stakes financial workflows, including algorithmic trading, automated underwriting, and dynamic fraud detection. The text categorizes these applications based on their autonomy levels, offering a technical taxonomy that distinguishes between scripted automation and true agentic behavior capable of handling unstructured data and complex decision loops in volatile market environments.

A central contribution of this work is its rigorous analysis of Model Risk Management (MRM) and compliance methodologies specifically tailored for agentic systems. The authors highlight how the non-deterministic nature of Large Language Model (LLM)-based agents introduces novel risks, such as hallucination and "goal drift," which complicate traditional validation frameworks mandated by regulations like SR 11-7. The paper proposes technical guardrails and governance structures, including real-time monitoring of agent chains, human-in-the-loop interventions, and sandboxed execution environments. It argues that existing compliance models must evolve from static validation to continuous, runtime oversight to manage the emergent behaviors inherent in autonomous agents.
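Runtime oversight of an agent chain for goal drift can be illustrated with a toy monitor (the keyword-overlap heuristic is a stand-in; a production system would use embeddings or a supervising model): each proposed step is scored against the stated goal, and the chain halts when the score drops below a drift threshold.

```python
def goal_overlap(goal: str, step: str) -> float:
    """Fraction of the goal's keywords that appear in the proposed step."""
    goal_words = set(goal.lower().split())
    step_words = set(step.lower().split())
    return len(goal_words & step_words) / len(goal_words)

def run_with_oversight(goal: str, steps: list, drift_threshold: float = 0.2):
    """Execute steps in order, halting on apparent goal drift."""
    executed = []
    for step in steps:
        if goal_overlap(goal, step) < drift_threshold:
            return executed, "halted: goal drift detected"
        executed.append(step)
    return executed, "completed"

goal = "summarize counterparty credit risk exposure"
steps = ["collect counterparty exposure data",
         "compute credit risk metrics",
         "email marketing team about new product"]  # drifted step

done, status = run_with_oversight(goal, steps)
```

Halting rather than silently skipping the drifted step is the point: it creates the natural hook for the human-in-the-loop interventions and sandboxed re-execution described above.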

This research is critical for bridging the gap between rapid advancements in generative AI and the conservative, risk-averse culture of financial institutions. By mapping the technical capabilities of agentic AI against the rigid requirements of financial governance, the paper serves as a strategic blueprint for CTOs, risk officers, and data scientists. It underscores that the successful adoption of agentic AI depends not merely on the sophistication of the reasoning models, but on the robustness of the infrastructure that ensures auditability, explainability, and regulatory adherence in an increasingly automated financial ecosystem.

Generated Mar 4, 2026
Open-Weights Reasoning

Summary: Agentic AI Systems in Financial Services

This paper surveys the deployment and governance of agentic AI systems in financial services, with a focus on model risk management (MRM) and compliance. Agentic AI—defined here as autonomous or semi-autonomous systems capable of decision-making—is increasingly used in areas such as algorithmic trading, credit scoring, fraud detection, and regulatory reporting. The authors examine the unique challenges these systems pose, including interpretability, dynamic risk assessment, and regulatory alignment under frameworks like Basel III, GDPR, and MiCA. Key contributions include a taxonomy of agentic AI risks (e.g., adversarial attacks, concept drift, and emergent behaviors) and a proposed MRM lifecycle tailored to autonomous systems, emphasizing continuous monitoring, explainability audits, and stress-testing for non-deterministic outputs.
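One of the proposed lifecycle steps, stress-testing for non-deterministic outputs, can be sketched as a repeated-sampling stability audit (the scorers and tolerance here are invented for illustration): the same input is run many times, and the model fails validation when its outputs spread too widely.

```python
import statistics

def stability_audit(agent_fn, case, n_samples=20, max_stdev=0.05):
    """Sample the agent repeatedly on one input; pass only if the
    spread of outputs stays within tolerance."""
    outputs = [agent_fn(case) for _ in range(n_samples)]
    return statistics.pstdev(outputs) <= max_stdev

def steady_scorer(case):
    # Deterministic toy scorer: always passes the audit.
    return 0.7 if case["income"] > 40_000 else 0.4

def flaky_scorer(case, _state={"n": 0}):
    # Alternates between two scores on identical input, mimicking a
    # non-deterministic agent; fails the audit.
    _state["n"] += 1
    return 0.2 if _state["n"] % 2 else 0.9

stable = stability_audit(steady_scorer, {"income": 50_000})
unstable = stability_audit(flaky_scorer, {"income": 50_000})
```

The tolerance `max_stdev` is exactly the kind of parameter a tailored MRM lifecycle would have to set and justify per model, since a threshold that is too loose defeats the audit and one that is too tight rejects benign sampling noise.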

The paper argues that traditional MRM approaches, designed for static models, are insufficient for agentic AI, which may adapt, learn, or interact with environments in unpredictable ways. It highlights regulatory gaps, such as the lack of clear guidance on accountability for AI-driven decisions, and proposes mitigation strategies, including counterfactual testing, human-in-the-loop oversight, and federated validation to ensure compliance without stifling innovation. The work is particularly relevant to financial institutions navigating AI governance and regulators seeking to update risk frameworks for advanced AI. By bridging technical and regulatory perspectives, the paper offers a roadmap for safer, more transparent deployment of agentic AI in finance.
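Counterfactual testing, one of the mitigation strategies above, can be sketched as follows (both toy policies are hypothetical): perturb one sensitive attribute and require the agent's decision to be invariant.

```python
def counterfactual_test(decision_fn, case, attribute, alternatives):
    """Pass only if changing `attribute` never changes the decision."""
    baseline = decision_fn(case)
    return all(decision_fn({**case, attribute: alt}) == baseline
               for alt in alternatives)

def fair_decision(case):
    # Toy policy that ignores the sensitive attribute.
    return "approve" if case["income"] > 30_000 else "decline"

def biased_decision(case):
    # Toy policy that conditions on it, and so should fail the test.
    if case["income"] > 30_000 and case["gender"] == "M":
        return "approve"
    return "decline"

ok = counterfactual_test(fair_decision,
                         {"income": 45_000, "gender": "F"},
                         "gender", ["M", "X"])
bad = counterfactual_test(biased_decision,
                          {"income": 45_000, "gender": "M"},
                          "gender", ["F"])
```

Because the test treats the decision function as a black box, it works even when the underlying agent is an LLM whose internals cannot be inspected directly.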

Why it matters: As financial services adopt increasingly autonomous AI systems, this survey provides a critical framework for addressing their risks while ensuring alignment with evolving regulatory expectations. It is essential reading for quantitative researchers, risk managers, and policymakers grappling with the next frontier of AI in finance.

Source: [arXiv:2501.10000](https://arxiv.org/abs/2501.10000)

Generated Mar 12, 2026