Cisco CEO Chuck Robbins has said data centres in space are coming — and that his company is already preparing its networking portfolio for orbital deployment, contrasting with OpenAI CEO Sam Altman's assessment that the idea remains a pipe dream.

Robbins made the comments during an extended interview on The Verge's Decoder podcast, where he discussed AI infrastructure investment, the fragmentation of the global internet, agentic security risks, and whether the current AI boom is a bubble. The conversation offers a candid public assessment from the CEO of a company that sits at the centre of global networking — Cisco's switches, routers, and silicon underpin data centre connectivity for hyperscalers, telecoms, and enterprises worldwide.

Why Cisco Is Taking Orbital Data Centres Seriously

Asked directly whether humanity should build data centres in space, Robbins answered with a single word: "Absolutely." He cited two drivers — unlimited solar power unimpeded by atmospheric interference, and the growing political resistance to terrestrial data centre construction in communities across the United States.

"Up there you don't have to deal with a lot of the challenges, like people who don't want these data centres in or near their communities. So that's obviously off the table."

Robbins said his head of product raised the issue internally "about two or three months ago," and that Cisco teams are now doing early-stage analysis of what vacuum conditions, radiation exposure, and the absence of convective cooling would mean for its hardware. He acknowledged the cooling problem is significant — with no air in space, heat can only be shed radiatively — but noted that eliminating fans and other air-cooling hardware also reduces product weight, a meaningful factor in launch economics.

On Elon Musk's specific plan — which involves filing for approval to launch a constellation of up to one million satellites via SpaceX — Robbins said simply: "I wouldn't bet against Elon." He described Cisco's positioning as wanting to be "on the edge — not bleeding edge" of the technology.

A Bubble Admission from Someone Who Knows What One Looks Like

Perhaps the more consequential disclosure in the interview was Robbins's acknowledgement that the current AI infrastructure boom has bubble characteristics. Cisco was briefly the most valuable company in the world during the dot-com peak — for approximately one day, by Robbins's own reckoning — before the crash wiped out enormous amounts of capital.

His read on the current cycle is nuanced. He argues that the dot-com era did not destroy the internet; it destroyed the losers, while the winners — Amazon, Google, and others — reshaped the global economy. He expects AI to follow the same pattern: misplaced capital, company failures, and then the emergence of durable winners.

What differentiates today's build-out, in his view, is utilisation. The dark-fibre analogy that haunts dot-com comparisons does not apply, he argued: "These data centres are being used day one at full capacity."

The Silicon Bet That Put Cisco Inside AI Data Centres

Robbins traced Cisco's current position in AI infrastructure to a 2016 acquisition that was, by his telling, partly luck. An engineer recommended buying an Israeli silicon company called Leaba, which gave Cisco a proprietary silicon architecture deployable across its entire product portfolio. That decision, he said, means Cisco is now one of roughly three companies globally capable of building the networking silicon required to connect GPU clusters and run large-scale AI training workloads.

"If we didn't have that silicon today, we would not be participating in this phase," Robbins said. Without it, Cisco would be purchasing merchant silicon like its competitors — an undifferentiated position in a market where differentiation determines margin.

The financial results reflect that bet. Cisco's enterprise data centre networking business has posted double-digit growth in six of the last eight quarters. Revenue from large hyperscalers — effectively zero five or six years ago — will reach "billions" this fiscal year, driven almost entirely by AI infrastructure demand.

Coopetition With Nvidia, and the Security Advantage No One Else Has

Nvidia's networking business generated approximately $31 billion in its last fiscal year, larger than Cisco's comparable segment. Robbins characterised the relationship as "coopetition" — Nvidia sells an integrated stack that includes networking, which appeals to neoclouds seeking speed and simplicity, while Cisco's decades of enterprise relationships and its security business create a different kind of lock-in.

That security angle is the more strategically interesting claim. As agentic AI systems proliferate across enterprise infrastructure, Robbins argues that security must be enforced at the network layer: latency constraints, he says, make real-time identity validation of AI agents practical only in the network itself, not at the application layer. No major security company has a networking business at Cisco's scale, and no major networking company has a security business of comparable size. That combination, Robbins contends, is a structural advantage as enterprises begin deploying agents at scale.

He also floated a potential partnership with Okta — whose CEO had recently appeared on the same podcast — suggesting that Okta's proposed "kill switch" for AI agents might be most effectively implemented at the network layer, where Cisco could detect anomalous behaviour invisible to the application layers above.

Fragmentation, Sovereignty, and the Splintering Internet

Beyond AI, Robbins described a fundamental architectural shift in how Cisco is building its products. The assumption of global cloud instances — partitioned and sold to different markets — is giving way to country-specific deployments designed to run entirely within national borders. Governments are increasingly demanding not just data residency but assurance that no foreign power, including the United States, can remotely disable their infrastructure.

Cisco's response has been to redesign cloud-oriented systems so they can operate as standalone national deployments from the outset, rather than as carved-off segments of a global architecture.

On the memory supply crunch affecting the broader semiconductor industry, Robbins said Cisco expects constraints to persist for roughly 18 months, though networking hardware carries a smaller memory footprint as a proportion of its bill of materials than compute platforms — giving Cisco some insulation from the worst margin pressure hitting consumer hardware manufacturers.

What This Means

For enterprises and infrastructure investors, Robbins's comments signal that orbital data centres have moved from science fiction to an active engineering roadmap at one of the world's largest networking companies — and that, bubble characteristics notwithstanding, the AI infrastructure build-out looks, from where Cisco sits, like sustained and fully utilised investment.