Anthropic refused a Pentagon request to deploy its Claude model for weapons-related applications — and OpenAI moved quickly to fill the gap, striking a defense deal that multiple observers have characterized as rushed and underscrutinized.
The episode has forced into the open a fault line that has been widening quietly for months: AI companies face intense investor pressure to land lucrative government contracts while simultaneously maintaining public commitments to safety and ethical use. For Anthropic, those two objectives collided head-on.
Anthropic Drew a Line the Pentagon Wouldn't Accept
Anthropic, founded in 2021 by former OpenAI researchers Dario Amodei and Daniela Amodei, has built much of its brand identity around responsible AI development. The company's usage policies explicitly prohibit deploying Claude to assist in developing weapons capable of causing mass casualties.
When the Pentagon sought to push those boundaries, Anthropic declined. The specifics of what the U.S. Department of Defense requested have not been fully disclosed, but sources familiar with the negotiations indicated the dispute centered on use cases that Anthropic's internal guidelines classify as off-limits. The company's refusal cost it a significant potential contract.
OpenAI Stepped In — and Drew Criticism for How It Did So
OpenAI, which has been steadily expanding its government and enterprise business under CEO Sam Altman, struck a deal with the Pentagon that observers described as rushed. MIT Technology Review's reporting characterizes the agreement as "opportunistic and sloppy" — language suggesting the company prioritized speed over the due diligence that a contract involving military applications would ordinarily require.
The deal also represents a significant concession on the ethical commitments OpenAI has used to differentiate itself in a crowded market.
OpenAI has not publicly detailed the scope of the deal or which specific military use cases it covers. AI ethics researchers have argued that the absence of clear public terms makes meaningful oversight impossible. OpenAI had not responded to requests for comment at the time of publication.
The episode raises a jurisdictional question with no clean answer: no binding U.S. federal law currently governs which AI capabilities defense contractors may or may not offer the military. Oversight, where it exists, is advisory and internal.
ChatGPT's User Base Showed a Measurable Decline
The military contract controversy arrived alongside separate turbulence for OpenAI. ChatGPT, which reached 100 million users faster than any consumer application in history following its November 2022 launch, recorded a measurable decline in its user base.
The drop is not fully attributable to the Pentagon deal. Competition from Google's Gemini, Anthropic's Claude, and a growing number of open-source alternatives has intensified considerably. But user sentiment data and app download figures cited by analysts, in the absence of full numbers from OpenAI, pointed to an erosion of the novelty premium ChatGPT once commanded. Retaining users in a market where switching costs are low has proven harder than acquiring them.
London Hosted What Organizers Called the Largest Anti-AI Protest Yet
Public opposition to AI development took an unusually direct form in the United Kingdom, where demonstrators marched through central London in what organizers and press coverage described as the largest protest against artificial intelligence held in the country to date.
The march drew participants from labor unions concerned about job displacement, civil liberties groups focused on surveillance and facial recognition, and academics who argue the pace of AI deployment has outrun the regulatory frameworks designed to govern it. Organizers called for a binding international treaty on AI weapons applications — a demand that now reads as a direct response to the Anthropic-Pentagon dispute.
The protest did not target any single company. It reflected broader anxieties about who controls AI systems, for what purposes, and with what accountability. Both Westminster and Brussels have been moving toward more formal regulatory structures, though neither has enacted binding legislation governing military AI applications.
What This Means
The governance questions that ethicists and policymakers have debated for years — who decides what AI can be used for, what oversight applies when AI enters military contexts, how companies balance commercial survival with stated values — are now being answered in real time, by individual corporate decisions made largely without public input.