Two Senate Democrats are advancing legislation to write Anthropic's AI safety limits into federal law, escalating a confrontation between the AI industry and the Trump administration. The push follows the Pentagon's decision earlier this month to blacklist the company for refusing to remove restrictions on military use of its models.

The moves come after the Trump administration designated Anthropic a supply-chain risk — a formal blacklisting that restricts the company's government business — following the company's insistence on maintaining what it calls "red lines" governing how its AI systems can be deployed. Anthropic has responded by filing suit against the government, alleging violations of its constitutional rights.

The Two Bills Taking Shape

Sen. Adam Schiff (D-CA) is working on legislation that would "codify" Anthropic's existing red lines, specifically targeting autonomous weapons systems. The bill's central requirement: humans must retain final decision-making authority in any situation involving lethal force. The proposal would apply across federal military and national security contexts, though precise enforcement mechanisms and jurisdictional scope have not yet been made public.

Sen. Elissa Slotkin (D-MI) has already introduced a separate, narrower bill aimed at limiting the Department of Defense's ability to deploy AI for mass surveillance of American citizens. Unlike Schiff's proposal, which focuses on overseas and combat-context autonomy, Slotkin's measure addresses domestic civil liberties concerns — specifically the use of AI-driven surveillance tools against the American public.


Both bills are currently at the proposal or early introduction stage. Neither has advanced to a committee vote, and both face significant headwinds in a Republican-controlled Senate where the administration's position on AI procurement holds considerable weight.

What Anthropic's Red Lines Actually Prohibit

Anthropic has built usage restrictions directly into its model deployment agreements — a practice common among frontier AI developers but rarely enforced against a federal government client. According to the company, its red lines prohibit using its models to enable fully autonomous lethal weapons systems and to conduct mass surveillance operations. The company argues these restrictions reflect core safety principles, not commercial obstruction.

The Pentagon's designation of Anthropic as a supply-chain risk is a significant escalation. Such designations are typically reserved for foreign-linked vendors suspected of security threats — not domestic AI companies in a dispute over usage terms. Anthropic's lawsuit frames the blacklisting as government retaliation for exercising its right to set conditions on its own technology, a constitutional claim that legal observers say raises novel questions about the limits of executive procurement authority.

A Legislative Long Shot With Symbolic Weight

The Democratic bills face an uphill path. With Republicans controlling both chambers and the Trump administration actively pushing for fewer restrictions on military AI development, neither Schiff's nor Slotkin's proposal is likely to reach the Senate floor in its current form. Neither bill has been described as bipartisan, and no Republican co-sponsors have been announced.

Still, the legislative effort carries weight beyond its immediate prospects. By attempting to codify Anthropic's red lines, Schiff is effectively arguing that the company's internal safety policies represent a reasonable floor for federal law — a framing that, if adopted, would constrain future administrations as well as the current one. The proposals also establish a formal Democratic position on military AI governance ahead of what is expected to be a sustained policy debate over the next several years.

The Slotkin bill, focused on domestic surveillance, draws on a distinct but related concern: that AI dramatically lowers the cost and scale of monitoring civilian populations, and that existing legal safeguards — written before large language models and AI-driven image and pattern recognition — are inadequate.

A Broader Fight Over Who Controls AI's Limits

The Anthropic episode has surfaced a question that Washington has largely deferred: who gets to set the ethical boundaries on AI tools used by the military, the vendor or the government? Traditional procurement logic holds that once a contractor sells to the federal government, the government determines use. Anthropic's position — backed now by at least two senators — is that some limits are non-negotiable regardless of who is paying.

That argument has implications well beyond this specific dispute. If upheld legally or legislatively, it would establish that AI companies can condition government sales on ethical use requirements — a precedent that would reshape how the Defense Department, intelligence agencies, and other federal bodies negotiate AI contracts.

What This Means

For AI companies with government clients, the Anthropic case and the legislative response signal that usage-restriction clauses in federal contracts are moving from internal policy decisions to live political and legal battlegrounds — and that the outcome will set durable precedent for military AI governance in the United States.