Wall Street banks have begun internal testing of Anthropic's Mythos AI model, with Trump administration officials urging the financial sector to adopt it specifically for identifying system vulnerabilities, according to Bloomberg Technology.

The move represents one of the most direct instances of a US administration steering major financial institutions toward a specific commercial AI product. Anthropic, the AI safety company valued at $61.5 billion following its most recent funding round, developed Mythos as a model with capabilities suited to complex analytical and security-oriented tasks.

That the Trump administration is actively encouraging Wall Street to test a specific AI model marks a new phase in how government shapes enterprise AI adoption.

Why the US Government Is Pointing Banks Toward Mythos

The administration's encouragement appears rooted in broader concerns about the resilience of financial infrastructure. Banks remain high-value targets for cyberattacks, fraud schemes, and systemic vulnerabilities — and officials appear to view AI-powered detection tools as a meaningful line of defence. By directing institutions toward Mythos, the government is signalling confidence in Anthropic's safety-focused development approach, which centres on the company's "Constitutional AI" methodology designed to make model outputs more reliable and aligned.

It is not yet clear whether the administration's encouragement carries any formal policy weight or remains advisory. No regulatory mandate requiring testing has been reported, according to Bloomberg.

What Mythos Brings to Financial Security

Mythos is Anthropic's latest model release, though the company has not publicly disclosed its full technical specifications or benchmark performance at the time of writing. Within financial services, AI models capable of parsing large volumes of transaction data, code, and network activity could offer material improvements in how banks surface anomalies and flag potential breaches before they escalate.

Several large banks already operate internal AI programmes for fraud detection and compliance monitoring. What distinguishes the Mythos pilots is the external political context — the fact that a government is not merely permitting AI experimentation but actively promoting a named product to a regulated industry.

Financial institutions testing the model are doing so internally, meaning the pilots are unlikely to be customer-facing in their current form. The scope, duration, and specific use cases of each bank's testing have not been disclosed publicly.

Anthropic's Growing Footprint in High-Stakes Industries

Anthropic has been steadily expanding its enterprise presence beyond technology firms into sectors where reliability and safety credentials matter most. The company has existing partnerships with Amazon Web Services, which has invested $4 billion in the startup, and Google, which has committed a further $2 billion. Those cloud relationships give financial institutions established procurement pathways to access Anthropic's models within existing compliance frameworks.

The Mythos pilots follow a period of intensified AI competition in which OpenAI, Google DeepMind, and Meta have all released or announced new frontier models. Anthropic's positioning — emphasising safety and interpretability over raw benchmark performance — has made it a credible option for regulated industries that cannot afford unpredictable model behaviour.

Wall Street's appetite for AI tools has grown considerably over the past two years. JPMorgan Chase, Goldman Sachs, and Morgan Stanley have all disclosed significant internal AI initiatives, ranging from coding assistants to client-facing research tools. Vulnerability detection represents a more sensitive application, however, given the systemic consequences of a major bank's defences being compromised.

Regulatory Backdrop Adds Urgency

US financial regulators have been grappling with how to oversee AI adoption without stifling it. The Office of the Comptroller of the Currency and the Federal Reserve have both issued guidance encouraging banks to assess AI-related risks, but formal rulemaking on AI in financial services remains incomplete. The administration's informal push toward Mythos testing may be an attempt to accelerate practical experience in the sector ahead of any formal regulatory framework.

That approach carries its own risks. If a vulnerability is discovered through Mythos-assisted testing — or, conversely, if the model misses something significant — the government's role in promoting the tool could attract scrutiny. The line between encouraging innovation and implicitly endorsing a commercial product is one policymakers will need to navigate carefully.

What This Means

For financial institutions, the Mythos pilots signal that AI-powered security is shifting from experimental curiosity to a domain where government and industry interests are actively aligned — meaning banks that delay evaluation risk falling behind both their peers and regulatory expectations.