Business & Technology
Anthropic’s Mythos AI sparks UK bank cyber stability alarm
Anthropic has launched controlled access to its Mythos AI model for cybersecurity testing by banks and technology groups. Regulators in the UK, US and Europe are examining what it could mean for financial stability.
Designed for coding and autonomous work, Mythos proved able in internal tests to identify and exploit software weaknesses on its own. Those tests uncovered thousands of previously unknown bugs, including zero-day vulnerabilities in major operating systems and web browsers.
That has heightened concern in banking, where many institutions run a mix of modern platforms and older software. Security specialists warn that an AI system capable of quickly inspecting large codebases could expose weaknesses in the systems that underpin payments, trading and customer services.
Shared suppliers and common software stacks compound the risk: a vulnerability discovered in one widely used product can affect many institutions at once, raising the odds that a single flaw has consequences across the financial system.
Regulator response
Authorities have moved quickly to assess the threat. In the UK, ministers have warned that AI can now carry out work once limited to expert hackers, including finding weaknesses and writing exploit code at speed.
Officials described Mythos as “substantially more capable” at cyber offence than previous models. The Bank of England has begun simulations to test how such tools could affect financial stability, while British authorities have brought together the Treasury, the Bank, the Financial Conduct Authority and security agencies in a resilience forum.
In the US, the Treasury and the Federal Reserve have held discussions with major Wall Street banks, while the European Central Bank is preparing questions for lenders on their readiness. Canadian regulators have also held a briefing, underscoring how quickly supervisors are trying to understand the issue.
The focus is not only on direct cyber attacks. Officials are also examining whether a wave of AI-assisted intrusions could disrupt payments, undermine confidence in banking systems or create knock-on effects through connected markets and service providers.
Industry reaction
Banks have begun responding by working more closely with technology and cybersecurity providers. Rather than releasing Mythos openly, Anthropic has created a restricted programme, Project Glasswing, through which selected partners use the model for defensive work.
The group includes large technology and security companies such as Google, Microsoft and Amazon, as well as major banks. Participants are using the model to test their own systems and identify weaknesses before attackers do.
JPMorgan Chase said participation gave it a chance to assess how next-generation AI tools could help defend critical infrastructure. Goldman Sachs said it already had access to the model and was working with Anthropic and security teams to strengthen its defences.
In Britain, Anthropic is also extending supervised access to lenders. Pip White, Head of UK and Europe at Anthropic, said discussions with British bank leaders had been significant and that banks would soon be able to test Mythos under strict controls.
Arms race
The debate is shifting from whether AI changes cyber risk to how quickly institutions can adapt. One concern among security researchers is that tools such as Mythos reduce the expertise needed to find and exploit flaws, allowing less sophisticated attackers to operate at a much higher level.
That could shorten the time available for defence. If AI systems can identify and weaponise vulnerabilities faster than firms can patch them, banks may need to rethink how they monitor software, prioritise updates and rehearse incident response.
Technology groups, including IBM, have argued that defenders need to respond with their own AI. If attackers use automated systems to search for weak points, defenders must automate scanning, testing and remediation at a similar pace.
Regulators are pressing a similar message. Boards are being told to treat AI-driven cyber risk as a business-wide issue rather than a narrow technical matter. Official guidance has stressed routine measures such as software updates, incident planning and stronger baseline network security.
Policy changes are also being prepared. In the UK, a Cyber Security and Resilience Bill is expected to tighten rules for critical sectors, including finance, while central banks in several jurisdictions are planning more stress tests and scenario work on the effects of advanced AI on markets and payment systems.
Defensive use
Anthropic has committed up to $100 million in computing credits for use of Mythos, along with additional funding for security research. The approach reflects a wider effort to keep the model in controlled settings while organisations assess both its usefulness and its risks.
There is also a more constructive case for the technology. The same kind of model that can expose hidden bugs can also patch code, improve software design and find long-overlooked defects in older systems that human teams may have missed.
That prospect may appeal particularly to banks, which often carry decades of technical debt in systems that cannot easily be replaced. AI tools could help institutions inspect these environments more thoroughly, but they also reveal how much risk may have gone unnoticed inside them for years.
For now, regulators and the industry share the same message: advanced AI has changed the speed and scale of cyber risk. The challenge for banks is to strengthen defences before the same tools become widely available to attackers.