OpenAI, Anthropic, and Google have begun working together to stop Chinese AI companies from using their models' outputs to train rival systems — a notable show of coordination among major competitors in the US AI industry.
The collaboration, reported by Bloomberg Technology on April 6, 2026, targets a technique commonly called model distillation, in which a less capable AI system is trained on the outputs of a more powerful one. US officials and AI executives have raised concerns that Chinese developers have used this method to narrow the capability gap with leading American models without independently developing the underlying technology.
This marks the first known instance of OpenAI, Anthropic, and Google coordinating directly on a shared technical threat rather than competing for the same ground.
Why Distillation Threatens the US AI Lead
Model distillation is not inherently illicit — it is a standard technique used across the industry to build smaller, more efficient models. The concern here is specifically about unauthorized, large-scale extraction of proprietary model outputs by foreign competitors, effectively reverse-engineering years of research investment at a fraction of the cost.
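To make the mechanic concrete, here is a toy sketch of the standard soft-label distillation objective (in the style of Hinton et al.'s knowledge distillation): the student is trained to match the teacher's temperature-softened output distribution, typically via KL divergence. The function names and the tiny logit vectors are illustrative only; this is not how any particular lab implements it.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    A student model is trained to minimize this quantity over many
    teacher outputs, inheriting the teacher's behavior without access
    to its weights or training data.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(p * math.log(p / q) for p, q in zip(t, s) if p > 0)

# Hypothetical logits over a 3-class output
teacher = [4.0, 1.0, 0.2]
student = [3.5, 1.2, 0.1]
loss = distillation_loss(teacher, student)  # small positive value to minimize
```

The key point for the policy debate is that only the teacher's *outputs* are needed, which is exactly why API access, rather than stolen weights, is the attack surface at issue.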
The practice became a focal point in early 2025 when DeepSeek, a Chinese AI lab, released models that matched or approached US frontier performance at dramatically lower reported cost. US companies and government officials alleged, though did not conclusively prove, that distillation from American models contributed to DeepSeek's rapid progress. DeepSeek denied those claims.
The episode accelerated internal reviews at major US labs about how their APIs and public-facing products could be used to harvest training data at scale.
What the Three Companies Are Doing
Bloomberg's reporting does not detail the precise technical or legal mechanisms the companies are deploying, and the firms have not issued public statements confirming the scope of the collaboration. It is not yet clear whether the effort involves shared blocklists of suspicious usage patterns, coordinated terms-of-service enforcement, or technical watermarking of model outputs — all approaches that have been discussed in the industry.
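Of the approaches mentioned, output watermarking has at least one well-known published form: the "green list" scheme (Kirchenbauer et al.), in which each generated token is biased toward a pseudorandom subset of the vocabulary seeded by the preceding token, so that watermarked text contains a statistically detectable excess of "green" tokens. The sketch below is a toy detector for that published scheme, with `VOCAB_SIZE`, `green_ids`, and `watermark_score` as illustrative names; nothing is known about what, if anything, the three companies actually deploy.

```python
import hashlib
import random

VOCAB_SIZE = 1000  # toy vocabulary for illustration

def green_ids(prev_id, fraction=0.5):
    """Deterministically derive the 'green' half of the vocabulary from the previous token.

    The generator biases sampling toward this set; a detector can recompute
    it for any text without access to the model.
    """
    seed = int(hashlib.sha256(str(prev_id).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(VOCAB_SIZE))
    rng.shuffle(ids)
    return set(ids[: int(VOCAB_SIZE * fraction)])

def watermark_score(token_ids):
    """Fraction of tokens that fall in the green set seeded by their predecessor.

    Roughly 0.5 for ordinary text; close to 1.0 for strongly watermarked output.
    """
    hits = sum(1 for prev, cur in zip(token_ids, token_ids[1:])
               if cur in green_ids(prev))
    return hits / max(1, len(token_ids) - 1)

# A sequence built by always choosing green tokens scores near 1.0
seq = [0]
for _ in range(20):
    seq.append(min(green_ids(seq[-1])))
```

A distiller training on large volumes of such output would inherit the statistical bias, which is what makes watermarking attractive as evidence of unauthorized distillation rather than as a hard technical block.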
What is confirmed, according to Bloomberg, is that the three companies are actively coordinating rather than acting unilaterally. That represents a significant shift. OpenAI, Anthropic, and Google DeepMind operate in direct commercial competition, and information-sharing between them would normally raise antitrust questions. The framing of this effort as a national-security-adjacent initiative may provide legal and political cover for the cooperation.
It is also not confirmed whether any government agency — such as the Commerce Department or the National Security Council — is involved in or has sanctioned the effort. The enforcement mechanism, and whether it has any binding dimension, remains unclear from current reporting.
The Jurisdiction Problem
Enforcing restrictions on model distillation by overseas actors is legally complex. US companies can restrict API access, update terms of service, and deploy detection systems — but jurisdiction over Chinese firms operating outside US law is limited. No international treaty governs AI model output extraction, and the lack of a binding multilateral framework means the companies are largely relying on technical countermeasures and access controls rather than legal remedies.
The US government has used export controls — most notably restrictions on advanced semiconductor exports — to limit China's ability to train frontier models domestically. Distillation represents a potential workaround to those controls: if Chinese labs can learn from US models' outputs rather than building from scratch, chip restrictions lose some of their intended effect. This makes the commercial self-interest of US AI companies align directly with broader government policy goals, even without formal coordination.
Industry Implications
The move has implications beyond the immediate competitive question. API access policies at major US labs may tighten, with more aggressive monitoring of high-volume or anomalous query patterns. Researchers, startups, and developers outside China could face collateral friction if detection systems flag legitimate use.
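The kind of volume-based monitoring described above can be sketched as a sliding-window counter per API key. This is a minimal illustration of the general technique, not any company's actual system; the class name, threshold, and window size are all hypothetical, and real detection would also weigh query content and diversity, which is precisely where false positives on legitimate heavy users would arise.

```python
from collections import deque

class QueryMonitor:
    """Flags API keys whose request volume in a sliding time window exceeds a threshold.

    Hypothetical sketch: real systems would combine volume with content
    signals (e.g. prompt diversity typical of dataset harvesting).
    """

    def __init__(self, window_seconds=60, max_requests=1000):
        self.window = window_seconds
        self.max_requests = max_requests
        self.history = {}  # api_key -> deque of request timestamps

    def record(self, api_key, timestamp):
        """Record one request; return True if the key now looks anomalous."""
        q = self.history.setdefault(api_key, deque())
        q.append(timestamp)
        # Drop timestamps that have fallen out of the window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests

# Usage: a key making 4 requests in a minute against a limit of 3 gets flagged
mon = QueryMonitor(window_seconds=60, max_requests=3)
flags = [mon.record("key-123", t) for t in (0, 1, 2, 3)]
```

The collateral-friction problem falls directly out of this design: any fixed threshold that catches bulk harvesting will also catch some legitimate high-volume research or production workloads.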
It also signals a shift in how US AI companies think about their own outputs as strategic assets. For much of the industry's recent history, the dominant concern was protecting model weights — the parameters of a trained model. This effort suggests companies now view model outputs at scale as equally sensitive.
The collaboration also raises questions about what form longer-term industry coordination on security might take. A joint technical body, a shared threat-intelligence function, or a government-convened forum are all possibilities that industry observers have discussed, though none has been formally announced.
What This Means
For policymakers and AI professionals, this alliance signals that leading US labs now treat the extraction of model outputs as a strategic threat serious enough to override commercial rivalry — and that technical countermeasures, not regulation, are currently the primary line of defense.