Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell jointly summoned Wall Street chief executives to an urgent meeting in April 2026, warning that Anthropic PBC's latest AI model could usher in a new era of cyber risk for the financial system.

The meeting, reported by Bloomberg Technology on April 10, 2026, represents one of the most significant public signals yet that US financial regulators view advanced AI capabilities not merely as a competitive opportunity for banks, but as a potential systemic threat. The specific Anthropic model that triggered the warning has not been publicly identified, but the urgency of the convening — involving the heads of both the Treasury and the Federal Reserve simultaneously — indicates that internal assessments reached a threshold serious enough to demand immediate executive-level attention across the sector.

The rare joint intervention by the two most senior US financial authorities signals that AI-driven systemic risk to the banking sector has moved from theoretical concern to active regulatory alarm.

Why Regulators Are Sounding the Alarm Now

Advanced AI models have grown substantially more capable with each release cycle, with frontier systems demonstrating improved abilities to automate complex reasoning, generate convincing synthetic content, and — crucially — identify and exploit software vulnerabilities. Financial institutions represent high-value targets: they hold sensitive customer data, process trillions of dollars in daily transactions, and operate interconnected systems where a breach at one node can propagate rapidly across the network.

The Federal Reserve and the US Treasury each carry distinct but overlapping mandates over financial system stability. The Fed supervises bank holding companies and monitors systemic risk; Treasury coordinates financial intelligence and oversees the Financial Stability Oversight Council (FSOC). A joint convening of this kind bypasses the slower cadence of formal regulatory guidance and suggests the two agencies believe the risk window is near-term, not hypothetical.

It is not yet publicly confirmed whether the meeting produced binding directives or remained advisory in nature. Based on available reporting from Bloomberg, the session was framed as a warning — placing the onus on bank CEOs to assess and address their own exposure — rather than announcing a formal enforcement action or new rule. That distinction matters: advisory guidance carries moral and reputational weight, but it does not compel specific remediation timelines or expose institutions to legal penalties for non-compliance.

What the Anthropic Connection Signals

Anthropic, the San Francisco-based AI safety company, has positioned itself as focused on responsible frontier development, publishing research on model risks and maintaining a public commitment to safety-oriented practices. Its Claude model family has been widely adopted in enterprise settings, including by financial services firms seeking to automate compliance, customer service, and data analysis tasks.

The specific capability or incident associated with Anthropic's latest model that prompted the Bessent-Powell meeting has not been disclosed in current reporting. Possibilities consistent with this type of regulatory response include: evidence that the model can be used to accelerate phishing or social-engineering attacks at scale; demonstrated capability to assist in identifying exploitable weaknesses in financial infrastructure; or internal government assessments flagging the model's potential for misuse by state or non-state threat actors targeting US banks.

Anthropic has not publicly commented on the meeting, according to available reporting. The company's involvement appears to be as the developer of the model in question, not as a participant in the regulatory convening itself.

The Jurisdictional and Enforcement Picture

In jurisdictional terms, this warning operates within the US domestic financial regulatory framework. The Fed and Treasury hold authority over federally supervised banks and systemically important financial institutions. Their guidance does not extend to AI developers directly — Anthropic is not a regulated financial entity — meaning the regulatory pressure flows toward the banks as the deployers and potential victims of the technology, not toward the model's creator.

This creates a structural gap that regulators have not yet formally closed. Banks can be required to maintain robust cybersecurity programs under existing frameworks, including guidance from the Federal Financial Institutions Examination Council (FFIEC) and the Office of the Comptroller of the Currency (OCC). But no binding US federal rule currently mandates specific AI risk assessments for frontier models used or encountered by financial institutions. The Bessent-Powell meeting may be a precursor to such rulemaking, or it may remain a high-profile but non-binding intervention.

The meeting also arrives against a broader backdrop of intensifying global AI governance activity. The European Union's AI Act, which entered phased enforcement in 2025 and 2026, classifies certain AI applications in critical infrastructure — including finance — as high-risk, requiring conformity assessments and human oversight mechanisms. US regulators have not adopted an equivalent statutory framework, leaving the domestic approach more fragmented and agency-specific.

What This Means

For bank executives and compliance officers, the Bessent-Powell meeting is a clear signal that federal regulators expect financial institutions to treat advanced AI models — whether deployed internally or encountered as threat vectors — as a board-level risk issue. It also suggests that formal regulatory requirements in this area are likely to follow.