Anthropic’s Mythos AI models under scrutiny from RBA and ASIC as they ‘assess implications’ for financial system
Australian regulators are investigating whether Anthropic’s AI models could pose a threat to online security across the financial system.

The Reserve Bank is racing to gauge financial-system risks from artificial intelligence, chief among them Anthropic’s Mythos agent.
The central bank’s call on regulators to assess the threat means it has joined peers worldwide seeking to safeguard their financial systems against online criminals using AI.
“RBA, along with peer regulators and government agencies will continue to assess the implications of these technological advancements to ensure the ongoing safety and resilience of the financial system,” the central bank said on Wednesday.
The RBA is responsible for interest rate policy and regulation of the payments system, which is potentially exposed to new AI models that may be able to exploit technical or human vulnerabilities in the complex legacy software commercial banks rely on.
“For the RBA to make that public statement, they must think the threat is serious, as they’re very deliberate in their statements about the payments system,” said Brad Kelly, the managing director of consultancy Payment Services.
“And you’ve got to remember parts of the payments system are designated as critical infrastructure like EFTPOS and the NPP (New Payments Platform).”
Regulators home in on Anthropic
Other regulators linked to Australia’s financial system and transaction processing include ASIC, AUSTRAC and banking regulator APRA. The Treasury Department is also responsible for regulating markets and the financial system’s integrity.
Anthropic is a flagship US AI-native company that was founded in 2021 and has since soared to an estimated $US800 billion ($1.2 trillion) valuation as its Claude Agent and large language model (LLM) pioneer the industry’s rapid advances.
Anthropic’s US founder Dario Amodei has warned its Mythos model could find weaknesses in software systems to potentially help online criminals hack bank accounts.
“It’s an opportunity for bad actors to exploit loopholes and I think we have to take it quite seriously,” said Mr Kelly.
“And the RBA must have an eye on anything that challenges integrity. So, three or four years ago there was an NPP outage, not because of an attack but because of a bad software update. So, even something as simple as that can grind the system to a halt. Of course you can’t directly relate that and the AI threat, but it shows you how the system can fall over.”
ASIC also investigating
An ASIC spokesperson said it notes the recent announcement by Anthropic and is closely monitoring these developments along with peer regulators to assess possible implications for the Australian market.
“All participants in the financial system have a duty to balance innovation with the responsible and ethical use of emerging technologies,” ASIC said.
“While new technologies can have significant benefits to consumers, they have also led to cyber risks escalating in both scale and sophistication. ASIC expects financial services licensees to be on the front foot every day to ensure that their customers and clients aren’t put at risk by inadequate controls.”
Anti-money laundering and transaction monitoring regulator AUSTRAC did not immediately respond to a request for comment.
