US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street chief executives to an urgent meeting on April 10, 2026, to address cyber risks linked to Anthropic's latest AI model, according to Bloomberg Technology.
The emergency gathering marks a significant escalation of AI risk oversight within the US financial sector. Rather than a scheduled regulatory forum, this appears to have been an unplanned convening — a signal that authorities consider the threat sufficiently urgent to pull senior bank leadership into a room at short notice.
The joint convening of the Treasury Secretary and the Fed Chair over a single AI risk issue appears to be without modern precedent in US financial regulation.
What Triggered the Emergency Meeting
Bloomberg reported that the meeting was prompted specifically by cyber risks identified in connection with Anthropic's newest model. The precise nature of those risks has not been publicly disclosed, and it remains unclear whether the concern relates to the model's potential for offensive cyber use, vulnerabilities in financial institutions' AI deployments, or systemic exposure through shared infrastructure providers.
The distinction matters significantly from a regulatory standpoint. If the threat is offensive — meaning the model could be weaponised to attack financial systems — the policy response would likely involve the Cybersecurity and Infrastructure Security Agency (CISA) and potentially the intelligence community. If the risk is defensive — institutions using the model in ways that introduce new attack surfaces — the response would fall more squarely within the Fed's and Treasury's existing supervisory authority over bank technology risk.
Neither Anthropic nor the Treasury Department had issued a public statement on the meeting's outcomes as of the time of reporting.
Anthropic's Growing Footprint in Critical Infrastructure
The urgency of the meeting reflects how rapidly Anthropic has moved from research laboratory to infrastructure provider. CoreWeave CEO Michael Intrator, speaking separately to Bloomberg on the same day, confirmed active commercial deals with both Anthropic and Meta, underscoring that demand for AI compute capacity shows no sign of slowing.
CoreWeave, which went public in early 2025, has positioned itself as a primary supplier of GPU infrastructure to frontier AI developers. Its deepening ties with Anthropic mean that the same underlying compute fabric powering Anthropic's models is now commercially entangled with some of the largest financial institutions in the world — many of which are themselves Anthropic customers or investors.
This interconnection creates a concentration risk that regulators have been slow to quantify. When a single AI provider's model is deployed across dozens of systemically important financial institutions simultaneously, a flaw in that model — whether a security vulnerability or an emergent behaviour — could propagate across the entire sector at speed.
The Regulatory Gap This Exposes
Under current US frameworks, AI risk oversight in financial services is fragmented. The Office of the Comptroller of the Currency (OCC), the Fed, and the Securities and Exchange Commission (SEC) each hold partial jurisdiction over different aspects of how banks deploy AI. No single binding federal standard for AI model risk management in financial institutions exists, though guidance documents from the OCC and interagency model risk management guidelines provide a non-binding framework.
The April 10 meeting, if it leads to formal action, could accelerate the consolidation of that oversight. Treasury convening Wall Street CEOs directly — rather than routing concerns through existing supervisory channels — suggests the administration may be preparing to act outside the slower pace of notice-and-comment rulemaking.
It also highlights a divergence between the United States and the European Union, where the EU AI Act already classifies certain AI applications in critical infrastructure as high-risk, requiring conformity assessments before deployment. The US has so far relied on voluntary commitments and sector-specific guidance rather than binding pre-deployment requirements.
Anthropic's Position and What Comes Next
Anthropic has built its public identity around safety-focused AI development, publishing detailed model cards and maintaining a policy team that engages regularly with regulators in Washington. The company has previously briefed congressional staff and participated in White House AI safety initiatives.
That reputation makes the emergency meeting more, not less, notable. If an AI company with Anthropic's safety profile is generating the kind of cyber risk concern that pulls the Treasury Secretary and Fed Chair into the same room, it speaks to how seriously frontier model capabilities are now being taken as a systemic threat — not merely a consumer protection or bias issue.
What specific action the financial regulators intend to take remains unclear from available reporting. Possible outcomes include mandatory disclosure requirements for banks using frontier AI models, emergency supervisory guidance, or referral to law enforcement or intelligence agencies if the risk involves potential adversarial exploitation of the model.
What This Means
For financial institutions, the implication is direct: boards and risk committees that have treated AI model risk as a compliance checkbox should expect significantly heightened regulatory scrutiny in the near term — and the possibility that voluntary guidance is about to become something with teeth.