Blog
24 Mar 2026

AI is rewriting business risk in finance

Maximilian Douglas Carl Vickers
Research Manager

Digital risks are reshaping the management agenda, even as familiar vulnerabilities continue to test operational resilience. Business conduct risk—the exposure created by a company’s own actions, decisions and practices—is adding a new roster of concerns to perpetual issues such as corruption, money laundering and human rights. And these digital threats, amplified by artificial intelligence, bring substantial and growing costs if left unmitigated.

We researched this changing landscape in collaboration with RepRisk, a Swiss consultancy and data provider, and found that executives are highly concerned about AI-enhanced threats including data privacy breaches, cybersecurity attacks, and misleading communications.

Our survey of more than 500 C-suite executives at financial services firms around the globe reveals the tension between pursuing the value of productivity-boosting AI investment and managing its impacts.

We found that most companies are struggling to keep pace with this change—a struggle that threatens investor confidence, client relationships, and brand value, and that demands fresh approaches to governance, oversight, and control frameworks.

The Business Conduct Risk Intelligence Report 2026

Read the report

AI-driven conduct risk can occur when the technology is used to support or automate decisions without adequate human oversight in areas as diverse as lending, recruitment, surveillance, or customer onboarding. If models cannot be clearly explained, tested for bias, or audited, firms may struggle to show that outcomes are fair, consistent, and accountable.  

One recent example of this new risk environment made headlines in the UK last year, when employees at a mid-size bank used a public generative AI tool to create personalised content for clients. The result: sensitive financial data for around 75,000 bank customers was exposed, triggering regulatory scrutiny and reputational fallout.

AI is also changing the broader information environment in which risk is identified and assessed. Used well, the technology can detect faint signals of risk early, spot patterns humans miss, and expand the scope of threat-monitoring coverage. Equally, AI can amplify misinformation, flawed reasoning, and false signals, making it harder to trust the data and respond with confidence.  

Executives need confidence in the quality of sources, the robustness of methodology, and the materiality of what they are seeing. Our research found that this trust depends on human oversight, with two-thirds of respondents expressing confidence in conduct risk data that combines advanced AI with expert human input, compared with just over one-third who are confident in fully automated approaches.  

With enterprise spending on technology growing quickly—Oxford Economics forecasts a 7.8% increase in 2026, roughly twice the pace of global GDP growth—failing to act upon the full range of AI-driven changes is a risk no company can afford. At the same time, firms that use AI to move faster without strengthening governance may simply fail faster. In the next phase of risk management, trust will matter as much as technology. 

Speak to us

Connect with our Thought Leadership team. Leverage our global expertise and data-driven insights to uncover strategic, high-impact narratives that help executives lead more sustainably and drive profitability.
