AI Governance for FinTech Engineering Leaders

Scrums.com Editorial Team
March 11, 2026
6 mins

The EU AI Act entered into force in August 2024. DORA regulation applied from January 17, 2025. Neither arrived with an engineering checklist.

What they arrived with: audit requirements, technical documentation obligations, human oversight standards, and penalty clauses that reach up to 7% of global annual turnover for the most serious AI Act violations. An engineering leader who treats these as legal problems has already made the first governance mistake. The technical requirements (model documentation, logging architectures, explainability outputs, incident response procedures) are engineering deliverables. The frameworks just made them mandatory.

This article covers what the major frameworks require at the engineering level, where they overlap, and what governance infrastructure your team needs before an auditor asks for it.

What Is AI Governance in FinTech?

AI governance in FinTech is the combination of documentation practices, technical controls, and accountability structures that allows a financial services organization to demonstrate that its AI systems meet regulatory requirements, operate within defined risk parameters, and can be audited when needed. It sits at the intersection of the EU AI Act, EU DORA regulation, NIST AI RMF, and FCA guidance, each applying differently depending on the systems you have deployed and the markets you operate in.

For a broader overview of how AI fits into software delivery, see AI Agents in Software Development: A Practical Guide for Engineering Leaders. For FinTech engineering context, see the FinTech Engineering Playbook.

The Regulatory Landscape

Three frameworks carry direct compliance weight for FinTech engineering teams operating in regulated markets. A fourth, the NIST AI RMF, is US-origin but increasingly referenced in audit contexts globally.

| Framework | Jurisdiction | Status | Engineering Implication |
| --- | --- | --- | --- |
| EU AI Act | European Union | In force Aug 2024; high-risk rules from Aug 2026 | Risk classification, technical documentation, human oversight requirements. |
| EU DORA (Regulation 2022/2554) | European Union | Applied Jan 17, 2025 | ICT risk management, AI in incident reporting, third-party AI tool risk. |
| NIST AI RMF 1.0 | United States | Published Jan 2023 (voluntary) | Governance structure: GOVERN, MAP, MEASURE, MANAGE. |
| FCA Guidance (DP5/22) | United Kingdom | Published 2022, ongoing | Explainability, SMCR accountability for AI decisions. |

These frameworks do not operate in isolation. A FinTech with EU operations, UK customers, and US-listed equity may be simultaneously subject to all four. The engineering implication is that governance infrastructure needs to satisfy the union of their technical requirements, not the intersection.

EU AI Act: Risk Classification

The EU AI Act classifies AI systems by risk category. High-risk classification is not based on how sophisticated the model is. It is based on what the model does and who it affects.

Under Annex III of the Act, AI systems used for creditworthiness assessment and credit scoring of natural persons are classified as high-risk. So are systems used in life and health insurance risk assessment, employment screening, and access to essential private services. Most FinTech credit, insurance, and lending AI falls within this classification.

High-risk classification triggers six technical requirements:

| Requirement | What It Means for Engineering |
| --- | --- |
| Risk management system | Documented process for identifying and mitigating risks throughout the AI system lifecycle. |
| Data governance | Training, validation, and testing data managed with documented quality standards. |
| Technical documentation | Pre-deployment documentation of model design, development, and performance characteristics. |
| Record-keeping | Automatic logging of system operation to enable post-deployment audit. |
| Transparency | Output explanations that enable human oversight and allow users to understand decisions. |
| Human oversight | Deployed systems must allow human intervention and override. |

The compliance timeline looks comfortable until you account for the lead time required to build logging infrastructure, documentation workflows, and human oversight mechanisms into production systems. Prohibited AI practices have been banned since February 2025. High-risk system requirements apply from August 2026. Teams that have not yet started face a compressed implementation window for systems already in production.

Penalties under the Act reach €35 million or 7% of global annual turnover for violations involving prohibited practices, and €15 million or 3% of turnover for high-risk system violations. These are maximums, but the enforcement trajectory from GDPR suggests they will be applied selectively and, when applied, at scale.

EU DORA: AI Under Operational Resilience

DORA (Regulation 2022/2554) is not the same as DORA metrics. The EU Digital Operational Resilience Act is a financial sector regulation that applied from January 17, 2025. It does not regulate AI systems directly. It regulates ICT risk management, and AI systems running in production are ICT systems.

Four DORA provisions have direct implications for AI-powered FinTech systems.

ICT risk management. DORA requires financial entities to identify and manage risk from all ICT systems, including AI. For AI-powered trading, fraud detection, and credit decisioning systems, this means formal risk identification in your ICT risk register, not just infrastructure and application layers.

Incident reporting. Major ICT incidents must be reported to regulators within defined timeframes. AI system failures that cause material disruption to financial services are reportable events. Your incident response procedures need to account for AI-specific failure modes: model drift, data pipeline failures, and adversarial inputs.

Digital operational resilience testing. DORA requires threat-led penetration testing for systemically important financial entities and general resilience testing for all in-scope firms. AI systems used in critical functions are in scope. Adversarial robustness testing (checking model behavior under edge cases and attempted manipulation) satisfies this requirement for systems making consequential decisions.

Third-party ICT risk. AI tools provided by third parties (model providers, fine-tuning services, inference infrastructure) are third-party ICT services under DORA. The contracts, exit strategies, and concentration risk assessments DORA requires for critical ICT vendors apply to your AI tool stack.
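The resilience-testing obligation above can start small: boundary and property checks on model behavior under extreme or malformed inputs. A minimal sketch, using a hypothetical stand-in scoring function (`fraud_score` and its logic are illustrative, not a real model):

```python
def fraud_score(amount: float, country: str) -> float:
    """Hypothetical stand-in for a deployed fraud model (illustrative only)."""
    base = 0.1 + min(amount / 10_000, 0.8)          # risk grows with amount, capped
    surcharge = 0.1 if country not in {"DE", "FR"} else 0.0
    return min(base + surcharge, 1.0)               # scores stay in [0, 1]

# Resilience-style edge-case checks: behavior under boundary and adversarial inputs.
assert 0.0 <= fraud_score(0, "DE") <= 1.0             # boundary: zero amount
assert 0.0 <= fraud_score(10**9, "??") <= 1.0         # adversarial: extreme amount, unknown country
assert fraud_score(10**9, "DE") >= fraud_score(1, "DE")  # sanity property: monotone in amount
```

Checks like these run in CI against the real model interface, so a model update that violates a basic behavioral property fails before deployment rather than in production.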

NIST AI RMF: A Practical Governance Structure

The NIST AI Risk Management Framework 1.0, published in January 2023, is voluntary in the US and carries no direct penalty for non-compliance. It is also the most operationally useful governance structure available for engineering teams, and it is increasingly referenced in audit contexts even where it is not strictly required.

The framework organizes AI risk management into four functions.

GOVERN. Establishes the organizational context for AI risk management: who is accountable, what the risk tolerance is, and what policies apply. For engineering leaders, this means defining AI ownership explicitly: who signs off on a model going to production, who is accountable for its performance, and what the escalation path is when something fails.

MAP. Identifies and classifies AI risks before deployment. The mapping process connects each AI system to the potential harms it could produce, the users or populations affected, and the regulatory obligations that apply. A maintained AI inventory (models in production, their risk classification, their data dependencies) is the output of this function.

MEASURE. Defines how risk is quantified and monitored. For FinTech AI, this includes model performance metrics, fairness metrics for credit systems involving protected characteristics, and drift detection. The measurement infrastructure is what lets you demonstrate regulatory compliance in an audit rather than reconstruct it.

MANAGE. Responds to identified risks: implementing controls, tracking residual risk, and defining response procedures when risks materialize. The incident response procedures DORA requires for ICT failures map directly to the MANAGE function.

FCA Expectations

The FCA's DP5/22 discussion paper on AI (2022) remains the clearest statement of UK regulatory expectations for financial services AI. Two requirements carry the most engineering weight.

Explainability. The FCA expects that AI systems making or informing material decisions about customers (credit decisions, insurance pricing, fraud flags) can produce explanations intelligible to the customer affected. This is not a vague principle. It is an engineering requirement: your model architecture, output format, and logging infrastructure must support explanation generation. Post-hoc explanation tools such as SHAP values satisfy this requirement for some model types; others require architectural decisions made at design time, not retrofitted after deployment.
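For linear models, a customer-intelligible explanation can be as simple as ranking per-feature contributions (coefficient × value) into "principal reason" codes; tree ensembles and neural models generally need post-hoc tooling such as SHAP instead. A hedged sketch, with all names and numbers illustrative:

```python
def reason_codes(coefficients, feature_values, feature_names, top_n=3):
    """For a linear credit model, each feature's contribution to the score is
    coefficient * value. Ranking by magnitude yields the 'principal reasons'
    format adverse-action notices typically use. Illustrative only; non-linear
    models need tools such as SHAP for equivalent attributions."""
    contributions = {
        name: coef * value
        for name, coef, value in zip(feature_names, coefficients, feature_values)
    }
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]

codes = reason_codes(
    coefficients=[-0.8, 0.05, -1.2],
    feature_values=[4, 12, 1],  # e.g. missed payments, tenure in years, recent default
    feature_names=["missed_payments", "tenure_years", "recent_default"],
)
# codes[0] is the largest-magnitude contribution: ("missed_payments", -3.2)
```

Whatever the attribution method, the explanation must be generated and logged at decision time so it can be reproduced later for the customer or a regulator.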

SMCR accountability. The Senior Managers and Certification Regime places personal accountability on named senior managers for regulated activities. As AI systems take on functions that were previously human decisions, the question of who is accountable under SMCR for those decisions becomes a compliance question with personal consequences. The regulatory accountability chain does not end at "the model decided."

Building AI Governance Into Engineering Practice

For most teams, the practical question is not how to design governance from scratch. It is what to build now, for the systems already in production, to satisfy the audit requirements that are coming. Five practices cover the requirements across the frameworks above.

Maintain an AI system inventory. A documented register of every AI model in production: what it does, what data it uses, what decisions it informs or makes, what its risk classification is, and who is accountable for it. This is the foundation for NIST MAP, EU AI Act technical documentation, and FCA accountability tracing. Without it, every regulatory inquiry starts from scratch.
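The register does not need heavyweight tooling to start. A minimal sketch as a typed record per system (field names and values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (schema is illustrative)."""
    name: str
    purpose: str              # what the system does
    decisions: str            # what decisions it informs or makes
    data_sources: list[str]   # data dependencies
    risk_class: str           # e.g. "high-risk" under EU AI Act Annex III
    owner: str                # accountable individual

inventory = [
    AISystemRecord(
        name="credit-scoring-v3",
        purpose="Creditworthiness assessment of retail applicants",
        decisions="Approve/decline and credit limit recommendations",
        data_sources=["bureau-feed", "application-form"],
        risk_class="high-risk",
        owner="jane.doe@example.com",
    ),
]

# A regulatory inquiry starts as a filtered view of this register.
high_risk = [r for r in inventory if r.risk_class == "high-risk"]
```

The point is queryability: when an auditor asks "which high-risk systems touch bureau data, and who owns them," the answer should be a filter, not an archaeology project.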

Document models before deployment, not after. The EU AI Act's technical documentation requirement applies to high-risk systems before they go live. Building model cards and system documentation into your deployment workflow, not as a post-deployment exercise, is the difference between governance that is auditable and governance that is reconstructed under pressure.
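A minimal model card captured at deployment time might look like the following. The fields are illustrative, loosely echoing the EU AI Act's technical documentation headings, not a complete Annex IV mapping:

```python
# Minimal model card recorded in the deployment pipeline (fields illustrative).
model_card = {
    "model": {"name": "credit-scoring-v3", "version": "3.2.0"},
    "intended_purpose": "Creditworthiness assessment of natural persons",
    "risk_classification": "high-risk (EU AI Act Annex III)",
    "training_data": {"sources": ["bureau-feed"], "cutoff": "2025-06-30"},
    "performance": {"auc": 0.81, "validation_set": "holdout-2025Q2"},
    "limitations": ["Not validated for thin-file applicants"],
    "human_oversight": "Underwriter review required for declines",
    "accountable_owner": "jane.doe@example.com",
}
```

Making the deployment job fail when required fields are missing is what turns the card from a good intention into a workflow.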

Build logging that serves audit requirements. Logging for debugging is not the same as logging for audit. Audit-grade logging captures what data the model received, what decision it produced, when, and with what confidence or explanation. For credit and fraud systems, this log is a regulatory artifact. Design it as one from the start.
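A sketch of what an audit-grade record could carry, with a hypothetical schema (field names and the `audit_record` helper are illustrative; sensitive inputs would typically be hashed or referenced rather than stored inline):

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(model_id, model_version, inputs, decision, confidence, explanation):
    """Build one audit-grade log entry: what the model saw, what it decided,
    when, with what confidence, and why. Schema is illustrative."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,   # ties the decision to a documented model
        "inputs": inputs,                  # or a hash/reference for sensitive data
        "decision": decision,
        "confidence": confidence,
        "explanation": explanation,        # e.g. top feature attributions
    }

entry = audit_record(
    "fraud-detector", "2.4.1",
    {"amount": 1890.0, "country": "DE"},
    "flagged", 0.91,
    {"amount": 0.62, "country": 0.21},
)
print(json.dumps(entry, indent=2))
```

Including the model version in every record is what lets you answer the audit question "which model produced this decision" years after the model has been retrained.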

Define human oversight mechanisms in production. The EU AI Act requires that high-risk AI systems allow human intervention. For production systems, this means decision override paths are built into the product, not available in theory but absent in practice. If your fraud detection system flags a transaction and there is no workflow for a human reviewer to examine the flag and override it, you have a compliance gap.
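The override path can be made concrete in the data model itself: the record that carries the model's decision also carries the reviewer's, and the human decision wins. A sketch under illustrative names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    """A flagged transaction awaiting (or past) human review. Illustrative."""
    transaction_id: str
    model_decision: str               # e.g. "block"
    reviewer: Optional[str] = None    # who examined the flag
    override: Optional[str] = None    # human decision, if any

    def final_decision(self) -> str:
        # The human decision, when present, takes precedence over the model.
        return self.override if self.override is not None else self.model_decision

flag = Flag("txn-123", model_decision="block")
flag.reviewer, flag.override = "analyst-42", "allow"
assert flag.final_decision() == "allow"   # the override path exists and is exercised
```

The design choice worth noting: the override lives alongside the model decision, so the audit log shows both what the model said and what the human did.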

Establish AI-specific incident response procedures. AI system failures differ from infrastructure failures. Model drift, adversarial inputs, and data pipeline corruption require detection methods and response procedures distinct from standard incident response. Document them. Test them. The DORA reporting requirement assumes you have them.
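Drift detection is one of those AI-specific detection methods. A self-contained sketch of the Population Stability Index, a common measure of shift between a reference and a live score distribution (the 0.2 threshold is a widely used heuristic, not a regulatory standard):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference score distribution and a
    live one. Heuristic reading: PSI > 0.2 suggests drift worth investigating."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(data, i):
        left, right = edges[i], edges[i + 1]
        n = sum(1 for x in data
                if left <= x < right or (i == bins - 1 and x == right))
        return max(n / len(data), 1e-6)   # floor avoids log(0) on empty bins

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

reference = [0.1 * i for i in range(100)]    # scores from the validation set
live = [0.1 * i + 3.0 for i in range(100)]   # shifted live scores
assert psi(reference, live) > 0.2            # drift detected
```

Wired into monitoring, a PSI breach becomes the trigger for the AI-specific incident procedure, with the same severity classification and reporting clock as any other ICT incident under DORA.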

What Engineering Leaders Own vs. Delegate

| Area | Engineering Leader Owns | Delegate To |
| --- | --- | --- |
| AI system inventory | Maintain and update | Legal/compliance for regulatory mapping |
| Model documentation | Technical content | Legal for regulatory classification sign-off |
| Logging architecture | Design and implementation | No delegation |
| Human oversight mechanisms | Build into product | Product for UX design |
| Incident response procedures | Technical procedures | Legal for regulatory notification |
| Risk classification | Technical input | Legal/compliance for final determination |
| Third-party AI vendor assessment | Technical due diligence | Procurement/legal for contract terms |

For the security and compliance practices that run parallel to AI governance, see SOC 2 for Engineering Teams. For FinTech acquisition contexts, the FinTech engineering due diligence checklist covers the compliance controls acquirers assess most closely.

Frequently Asked Questions

What is AI governance in FinTech?

AI governance in FinTech is the combination of documentation practices, technical controls, and accountability structures that allows a financial services organization to demonstrate that its AI systems meet regulatory requirements, operate within defined risk parameters, and can be audited on demand. It covers the intersection of multiple frameworks (the EU AI Act, EU DORA regulation, NIST AI RMF, and FCA guidance), each applying differently depending on the systems deployed and the markets served.

What does the EU AI Act require for FinTech AI systems?

FinTech AI systems used for credit scoring, insurance risk assessment, or employment screening are classified as high-risk under Annex III of the EU AI Act. High-risk classification requires a risk management system, data governance documentation, pre-deployment technical documentation, automatic logging for audit, transparency and explainability outputs, and human oversight mechanisms. High-risk system requirements apply from August 2026. Penalties reach €35 million or 7% of global annual turnover for the most serious violations.

How does EU DORA regulation apply to AI systems?

EU DORA (Regulation 2022/2554), which applied from January 17, 2025, regulates ICT risk management in financial services. AI systems running in production are ICT systems under DORA. The regulation requires that AI systems in critical functions be included in ICT risk registers, that AI failures causing material disruption be treated as reportable incidents, that AI systems in critical functions undergo resilience testing, and that third-party AI providers be managed under DORA's third-party ICT risk requirements.

What is the difference between DORA metrics and DORA regulation?

DORA metrics (Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Time to Restore Service) are engineering performance metrics from the DevOps Research and Assessment program. EU DORA (Regulation 2022/2554) is the Digital Operational Resilience Act, a financial services regulation that applied in January 2025. They share an acronym and are unrelated. The regulation requires operational resilience across ICT systems including AI; the metrics measure software delivery performance.

What should engineering leaders do first to build AI governance?

Start with an AI system inventory: a documented register of every model in production, what it does, what decisions it informs, and who is accountable for it. Without visibility into what AI systems are deployed and what they do, no other governance practice is actionable. From the inventory, prioritize systems that fall within EU AI Act high-risk categories, as these have the earliest compliance deadlines and the highest penalty exposure.

If you want visibility into how AI systems are performing across your engineering teams, Scrums.com connects to your GitHub, Jira, and CI/CD pipeline and surfaces delivery metrics, cycle time, and system health in one place. To discuss your team's setup, start a conversation with our team.
