AI cyber threat warning raises concerns for banks

Concerns are growing across the financial and technology sectors after Anthropic revealed that its new AI cybersecurity tool, Mythos, has already identified thousands of high-severity software vulnerabilities, including flaws affecting major operating systems and web browsers.

The company said access to Mythos Preview will initially be restricted to selected launch partners working in defensive cybersecurity, amid concerns about the potential misuse of the technology.
Anthropic said lessons learned from the rollout would be shared with the wider industry before any broader release.

The announcement has intensified debate around the risks associated with increasingly advanced AI systems, particularly as financial institutions continue accelerating AI adoption across compliance, fraud prevention and customer onboarding processes.

REGULATORY RISK

SmartSearch warns that while AI has the potential to strengthen financial crime controls, weak underlying systems could leave firms exposed to greater levels of fraud and regulatory risk.

Phil Cotter, Chief Executive Officer at SmartSearch, says: “British banks are set to onboard AI that its own creator warned could surpass the most skilled humans at finding and exploiting software vulnerabilities.

“That same capability is already being used by criminals to fabricate synthetic identities, open accounts, and move money at a speed and scale that manual checks were never designed to detect.”

“AI adoption in financial institutions is a necessary step in the right direction. It will reduce dependence on manual processes and outdated technologies that still underpin most compliance tasks within regulated firms.”

AMPLIFIED RISKS

But he adds: “This opportunity comes with a warning: AI built on weak foundations doesn’t just fail to stop financial crime – it risks amplifying it.

“With many firms becoming liable for criminal prosecution for failing to prevent fraud, it is critical that they have robust systems in place to help verify who they are doing business with and provide evidence to regulators that they are complying with legal obligations.

“But if AI is layered over existing data gaps and fragile systems, it hands criminals a more powerful set of tools to evade a company’s compliance checks, launder money, and commit fraud at scale, while simultaneously exposing the directors unable to detect it to potential criminal sentences.”

The warning comes as regulators and banks continue grappling with rising levels of identity fraud, synthetic identity creation and increasingly sophisticated cyber-enabled financial crime.
