More than 75% of UK financial services firms now use AI, according to evidence gathered by the Treasury Select Committee. Insurers and international banks show the highest take-up. Businesses deploy the technology to automate back-office work and to run core services such as insurance claims and credit checks.
MPs said the speed of adoption has outpaced public oversight. The Bank of England, the FCA and the Treasury rely on existing rules, none of which are AI-specific. The committee said this leaves consumers exposed and creates uncertainty for firms.
Dame Meg Hillier, chair of the Treasury Select Committee, said: “Firms are understandably eager to try and gain an edge by embracing new technology, and that’s particularly true in our financial services sector which must compete on the global stage.
“The use of AI in the City has quickly become widespread and it is the responsibility of the Bank of England, the FCA and the Government to ensure the safety mechanisms within the system keep pace.
“Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk.”
What Risks Worry MPs The Most?
The committee received a substantial body of evidence about harm linked to AI in finance, according to its report. One issue centres on transparency: customers often cannot see how automated systems reach decisions on loans or insurance pricing.
MPs also heard concerns about exclusion. Automated systems may lock out people who already struggle to access financial products. Witnesses also warned about the spread of false information through unregulated AI search tools and a higher risk of fraud.
The report also highlighted how AI-driven trading could intensify herding behaviour in markets. That could amplify stress during periods of volatility and increase the chance of a crisis.
Reliance on a small group of US technology companies added another pressure. UK firms depend heavily on those providers for AI tools and cloud services, according to the committee. Witnesses also flagged cyber security threats tied to that dependence.
Why Are Regulators Under Pressure To Act?
The UK has no AI-specific law or financial rulebook, the committee said. Regulators supervise AI through existing frameworks. MPs said that setup leaves grey areas around accountability and consumer protection.
The report called on the Bank of England and the FCA to run AI-specific stress tests. Those tests would check how firms cope during an AI-driven market shock. MPs said that work would raise readiness across the system.
The committee also asked the FCA to publish practical guidance on AI before the end of the year. That guidance should explain how consumer protection rules apply and who carries responsibility when AI causes harm.
Attention also turned to the Critical Third Parties Regime. That regime gives regulators powers over non-financial firms that supply essential services, such as AI and cloud providers. No company has entered the regime since its creation more than a year ago, according to the report.
MPs asked the Government to designate AI and cloud providers seen as critical to UK finance. They said stronger oversight would support resilience. A Bank of England spokesperson told The Independent that the bank had already taken action to assess AI risks and would review the recommendations.
In response to the report, Levent Ergin, Chief Strategist for Agentic AI, Regulatory Compliance & Sustainability at Informatica, a Salesforce company, commented: “AI decisioning on credit decisions and insurance claims is like putting a plane on autopilot. When it gets the decision right, it can move at speed. When it is wrong, it has the potential to be catastrophic for both consumers and banks alike.
“Reducing the risk of AI on our financial system should be addressed, like all other stress tests. Financial institutions need to be able to demonstrate not just what AI can do, but how it behaves during financial decisioning.
“Fragmented, opaque and poorly understood data can distort outcomes, amplify risk and erode trust, particularly when decisions are made autonomously. Without trusted context, AI isn’t intelligent, it is just guessing. Financial services organisations will need to prove that AI decisions are grounded in trusted context, with a clear understanding of where data comes from, how it’s used, and how those systems perform under stress.”