Anthropic is giving Apple and Amazon early access to an unreleased AI model called Mythos, allowing the companies to probe for cybersecurity risks ahead of any wider release, according to Bloomberg Technology.
The decision reflects a calculated approach to frontier model deployment increasingly common among AI developers: loop in major partners early so potential attack vectors can be identified and addressed before a model reaches the general public. Anthropic, valued at approximately $61.5 billion following its most recent funding round, counts Amazon as its largest cloud and investment partner, making that relationship a natural starting point for early access programmes.
Why Cybersecurity Testing Comes Before Launch
Advanced AI models introduce a specific category of risk that differs from conventional software. A more capable model can potentially be manipulated to assist in crafting malware, identifying system vulnerabilities, or generating disinformation at scale. By letting trusted partners test Mythos in controlled conditions, Anthropic aims to surface these risks before bad actors can exploit them.
This kind of pre-release red-teaming — where security researchers and partner companies deliberately attempt to misuse or break a system — has become a standard, if still imperfect, safeguard in the industry. OpenAI and Google DeepMind have run similar programmes ahead of major model launches.
Anthropic is granting early access to Mythos specifically to prepare for cyberattacks that might result from making the model more widely available.
The framing is notable: Anthropic is not characterising this as standard beta testing for product refinement, but explicitly as defensive preparation against threats the model itself could enable or attract.
What Is Known About Mythos
Details about Mythos remain limited. Bloomberg's reporting describes it as more capable than Anthropic's current publicly available models, which include Claude 3.7 Sonnet and the Claude 3 Opus series. The company has disclosed no benchmark results, capability descriptions, or release timeline.
The model's name has not appeared in prior Anthropic communications, suggesting it represents either a next-generation flagship or a specialised system developed outside the main Claude product line. Anthropic has not confirmed which category applies.
Anthropic employs roughly 3,000 people and has raised more than $12 billion in total funding, with Amazon committing up to $4 billion and Google investing approximately $2 billion. That investor base makes the inclusion of Amazon in an early access programme structurally logical, though Apple's participation signals Anthropic is actively cultivating relationships beyond its core cloud infrastructure partners.
Apple's Involvement Signals Broader Ambitions
Apple's inclusion is the more strategically significant data point. Apple has historically been cautious about third-party AI dependencies, preferring to develop on-device and proprietary cloud models under its Apple Intelligence umbrella. Its participation in Mythos testing suggests either a deepening evaluation of Anthropic's technology for future integration, or a security-focused collaboration aligned with Apple's platform integrity priorities.
Apple and Anthropic announced a partnership in 2024 that brought Claude into Xcode, Apple's developer environment, establishing precedent for the relationship. Access to a pre-release frontier model represents a meaningful escalation of that arrangement.
For Anthropic, the relationship with Apple carries competitive weight. Securing Apple as an infrastructure or integration partner would position Anthropic's models inside one of the world's most valuable product ecosystems, placing it in direct competition with OpenAI, which has its own integration agreement with Apple for ChatGPT features in iOS 18.
Industry Pattern: Safety as Competitive Strategy
Anthropic has consistently positioned safety and responsibility as core differentiators, a strategy rooted in its founding story — the company was established in 2021 by former OpenAI researchers, including Dario Amodei and Daniela Amodei, partly over concerns about the pace of AI deployment at their previous employer.
Framing early access as cybersecurity preparation rather than product previewing is consistent with that brand positioning. It signals to regulators, partners, and the public that Anthropic takes seriously the dual-use risks of its own technology — while also building the commercial relationships that pre-release access inherently strengthens.
The approach also reflects a broader industry shift. Governments in the United States, European Union, and United Kingdom have all pushed for greater pre-deployment testing of frontier models, and programmes like this one — even when privately organised — can serve as evidence of responsible practice in regulatory conversations.
What This Means
Anthropic is using controlled early access to Mythos to simultaneously harden its model against misuse and deepen its strategic relationships with two of the world's most influential technology companies — a combination that advances both its safety credentials and its commercial position ahead of what appears to be a significant new model launch.