Anthropic has reported a revenue run rate surpassing $30 billion, up from $9 billion at the close of 2025, and confirmed a chip supply agreement with Broadcom and Google to underpin its expanding infrastructure needs.
The figures, reported by Bloomberg Technology, represent a more than threefold increase in annualised revenue in the span of a single quarter — an acceleration that places Anthropic PBC among the fastest-growing technology companies on record. The Broadcom deal centres on sourcing Google TPU (Tensor Processing Unit) chips, custom silicon designed specifically for AI workloads, giving Anthropic a dedicated hardware pipeline outside the broader GPU market.
From $9 Billion to $30 Billion in One Quarter
The pace of Anthropic's growth is notable even by AI industry standards. At the end of 2025, the company's annualised revenue stood at $9 billion — itself a figure that would have seemed ambitious just a year prior. The jump to $30 billion suggests enterprise adoption of its Claude model family has accelerated sharply, likely driven by large corporate API contracts and expanded partnerships with cloud providers.
Anthropic has previously secured investment from Amazon, which committed up to $4 billion, and Google, which invested up to $2 billion, with both cloud giants offering Anthropic preferential access to their infrastructure. The new Broadcom arrangement appears to deepen the Google relationship further, routing TPU capacity to Anthropic through Broadcom's supply chain and distribution capabilities.
Why Custom Silicon Matters at This Scale
As AI companies scale their inference and training workloads, access to compute has become as strategically important as the models themselves. The broader market for Nvidia GPUs remains constrained, and competition for allocation is intense. By locking in a dedicated supply of Google TPUs through Broadcom, Anthropic reduces its dependence on the open GPU market and secures a more predictable infrastructure cost structure.
TPUs are purpose-built for the matrix operations that underpin large language model inference, and Google has invested years refining them specifically for this use case. For Anthropic, whose Claude models compete directly with OpenAI's GPT-4 series and Meta's Llama family, compute reliability and cost efficiency at scale are direct competitive variables — not just operational details.
Competitive Positioning in a Crowded Market
Anthropic has consistently positioned itself as a safety-focused alternative to other frontier AI labs, but its commercial ambitions are now firmly in the mainstream. A $30 billion run rate puts it in direct conversation with OpenAI, which according to earlier reports was targeting a $100 billion annualised revenue figure for 2025, and ahead of most other independent AI developers.
The company's enterprise traction appears to be accelerating across sectors including legal, financial services, and software development — areas where Claude's extended context window and instruction-following capabilities have drawn particular attention. Anthropic has not disclosed headcount figures in conjunction with this announcement, according to the Bloomberg report.
The Broadcom angle is also notable for what it signals about the evolving semiconductor supply chain for AI. Broadcom has historically served as a critical intermediary in chip logistics and custom ASIC development, and its involvement here suggests that Google's TPU ecosystem is expanding its commercial reach beyond Google's own data centres.
What This Means
Anthropic has moved from credible challenger to major commercial force at a speed that will require competitors, investors, and enterprise buyers to reassess the competitive landscape, and its Broadcom-Google chip deal suggests it is building the infrastructure to sustain that trajectory.