US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell have jointly summoned Wall Street's top executives to deliver an urgent warning about Mythos, a new artificial intelligence system developed by Anthropic PBC, which officials say marks the dawn of a new era in cybersecurity risk.
The joint intervention by two of America's most powerful financial regulators underscores how seriously the federal government is treating the emergence of Mythos. Anthropic, valued at $61.5 billion following its most recent funding round, has positioned itself as one of the more safety-conscious AI developers — which makes an alarm raised by Bessent and Powell all the more striking to industry observers.
What Makes Mythos Different from Previous AI Systems
Details about Mythos's specific capabilities remain limited, but the framing from Treasury and the Fed is unambiguous: this is not a routine product launch. By characterising the system as the beginning of a "new era" of cybersecurity concern rather than an incremental development, officials appear to be signalling that Mythos introduces capabilities — or risks — that fall outside the parameters of existing regulatory frameworks.
Cybersecurity threats from AI systems broadly fall into two categories: tools that can be weaponised by malicious actors to conduct attacks, and systems whose own infrastructure presents novel vulnerabilities. The fact that both Treasury and the Fed — institutions primarily concerned with financial stability — chose to lead on this issue suggests Mythos may pose particular risks to financial infrastructure, markets, or sensitive economic data.
Wall Street banks and asset managers have spent the past two years aggressively integrating AI into trading, compliance, and risk management functions. That deep and accelerating adoption means any systemic AI-related vulnerability could propagate rapidly across interconnected financial institutions.
Regulators Step Into Territory Usually Left to CISA
Cybersecurity oversight for critical infrastructure typically falls under the Cybersecurity and Infrastructure Security Agency (CISA) and, for financial firms, the Office of the Comptroller of the Currency. A direct briefing led by the Treasury Secretary and the Fed Chair — rather than routed through those agencies — is procedurally unusual and suggests the concern operates at the level of systemic financial risk rather than technical security alone.
Powell, who has spoken cautiously about AI's economic implications in the past, has rarely inserted the Fed directly into technology-specific warnings of this kind. His participation lends the alert a weight that would be difficult for financial institutions to dismiss as regulatory overcaution.
The meeting format — summoning executives rather than issuing written guidance — also implies a degree of urgency that formal regulatory channels may not have been able to convey quickly enough. It echoes the emergency convening of bank chiefs seen during acute financial stress events, a comparison that will not be lost on those who attended.
Anthropic's Position and What Comes Next
Anthropic has built its public identity around responsible AI development, publishing detailed safety research and maintaining a policy team that actively engages with Washington. The company's alignment with safety-first principles makes it an unlikely subject of regulatory alarm — and raises the question of whether the concern stems from Mythos itself or from what its existence signals about the broader trajectory of AI capability.
It is possible that officials are less concerned about Anthropic's intentions and more concerned that Mythos demonstrates a capability threshold that other, less safety-focused developers could replicate or exceed. In that reading, Mythos functions as a proof of concept for an entire class of risk rather than a risk tied to a single product.
For financial institutions, the immediate pressure will be to assess their own exposure. Banks and large asset managers will likely accelerate internal reviews of how deeply AI systems — including those from Anthropic — are embedded in sensitive operations. Regulators may follow the briefing with formal guidance, examination priorities, or new disclosure requirements tied to AI system deployment.
The broader AI industry will be watching closely. A regulatory posture this elevated — driven by Treasury and the Fed rather than technology-focused agencies — could signal a shift toward treating advanced AI as a category of systemic financial risk, with implications for how future systems are approved, disclosed, and deployed across regulated industries.
What This Means
For financial institutions and AI developers alike, the Mythos briefing signals that regulators are prepared to treat advanced AI capabilities as a systemic risk category — meaning compliance, disclosure, and oversight frameworks for AI in finance could change significantly in the months ahead.