US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street chief executives to Washington on April 7 to deliver a warning: a new AI model from Anthropic PBC called Mythos is capable of finding software vulnerabilities and represents a shift in the cybersecurity threat landscape.

The meeting marks one of the most direct interventions by senior US financial regulators on the subject of AI-driven cyber risk. Rather than issuing a routine advisory, Bessent and Powell chose to brief bank leaders in person — a signal of how seriously the Treasury and the Fed are treating the potential for advanced AI tools to be weaponised against the financial sector.

What Makes Mythos Different From Previous AI Security Tools

According to Anthropic, Mythos is not a general-purpose AI assistant. The company describes it as a model specifically optimised for identifying vulnerabilities in software and computer systems — and it is reportedly so effective that Anthropic has decided against a standard commercial release. Instead, the company says it is making Mythos available only to a "limited number of carefully-chosen parties", a deployment approach that reflects growing industry concern about so-called dual-use AI capabilities.

The logic is straightforward: a tool that can find vulnerabilities faster and more comprehensively than existing methods is equally useful to defenders and attackers. In the wrong hands, Anthropic warns, Mythos could give malicious actors — including state-sponsored groups — a powerful new instrument for stealing data or disrupting critical infrastructure.
This puts Mythos in a category of AI systems that researchers call "frontier dual-use" models — capabilities potent enough that their developers have concluded the standard market-release model is inappropriate. Anthropic has not publicly disclosed which parties have been granted access, nor the criteria used to vet them.

Why the Fed and Treasury Are Involved

The involvement of both the Treasury and the Federal Reserve is notable, and points to regulatory concern that has been building for several years around the intersection of AI and financial system stability. Neither the Treasury nor the Fed has a formal AI-specific regulatory mandate over private AI developers like Anthropic. Their authority here is indirect — they oversee the banks and financial institutions that could become victims, rather than the AI companies building the tools.

This jurisdictional gap matters. The April 7 meeting appears to be advisory in nature, not a binding regulatory action. There is no indication that Bessent or Powell issued formal directives, imposed restrictions on bank AI procurement, or announced new compliance requirements. The meeting's purpose, based on available reporting, was to ensure that financial sector leadership understands the threat environment — not to announce a regulatory response to it.

That said, the convening of bank CEOs by two of the most senior figures in US economic policy carries implicit pressure. Institutions that fail to take AI-driven cyber threats seriously following such a briefing would find it difficult to claim ignorance in the event of a significant incident.

The Broader Context: AI and Critical Infrastructure Risk

The Mythos warning does not emerge in isolation. Over the past 18 months, US government agencies including CISA (the Cybersecurity and Infrastructure Security Agency) and the National Security Agency have escalated warnings about AI-assisted cyberattacks. Foreign adversaries — particularly those with significant state investment in AI research — are widely assessed to be exploring offensive applications of large language models and specialised AI tools.

The financial sector is a primary target. Banks hold vast quantities of sensitive personal and commercial data, operate payment infrastructure that underpins the broader economy, and are deeply interconnected — meaning a successful attack on one institution can cascade rapidly across the system. The 2024 National Cybersecurity Strategy identified critical financial infrastructure as a top-tier protection priority.

What is new about the Mythos situation is the specificity of the warning. Rather than a general caution about AI misuse, this is a named, commercially developed model whose own creator has assessed it as too dangerous for open release. That represents an escalation in the public discourse around AI safety and cybersecurity.

Restricted Release: Precedent and Problems

Anthropic's decision to restrict Mythos access raises questions that extend well beyond this single model. The company has not described the legal or contractual framework governing its vetted-release programme, nor said whether any government body has oversight of who receives access. It is not clear whether this is a voluntary arrangement or one developed in coordination with US national security agencies.

The restricted-release model has precedent in other sensitive technology sectors — export-controlled cryptography, for instance, or certain biosecurity tools. But AI models are harder to contain. Once a model's weights or architecture details are shared with even a small number of parties, the risk of further proliferation increases. Anthropic has not publicly addressed how it intends to monitor or enforce use restrictions among its vetted recipients.

What This Means

For bank executives and their security teams, the April 7 meeting is a signal that AI-driven cyber threats have moved from theoretical concern to active government-level priority — and that the institutions most likely to be targeted are expected to treat this accordingly, even in the absence of new binding regulations.