Two startups are deploying optical metamaterial technology — traditionally associated with experimental 'invisibility cloaks' — to address the bandwidth and energy constraints facing AI data centers, with commercial products expected as early as 2026.

Optical metamaterials are engineered structures smaller than the wavelengths of light they manipulate, allowing them to bend and redirect photons in ways conventional materials cannot. For roughly two decades the technology remained a laboratory curiosity, best known for cloaking devices that worked at only a single wavelength and had, as Neurophos cofounder and CEO Patrick Bowen puts it, "no market for them." That calculus is changing as AI's appetite for bandwidth forces data center operators to look beyond conventional electronic switching.

Why Data Centers Are Running Out of Bandwidth

Modern data centers increasingly rely on optical circuit switches to move data between servers, replacing electronic switches that require repeated conversion between light and electrical signals. Each conversion step consumes energy and introduces latency. The problem is that today's optical switching options carry their own trade-offs: silicon photonics-based switches face energy efficiency limitations, while switches built on microelectromechanical systems — known as MEMS — can suffer reliability problems, according to Sam Heidari, CEO of optical metasurface startup Lumotive, based in Redmond, Washington.

"Having no moving parts significantly improves reliability." —Sam Heidari, CEO, Lumotive

Lumotive's answer is a programmable metamaterial microchip that debuted on March 19. The chip is covered with copper structures fabricated using standard semiconductor manufacturing processes. Liquid crystal elements sit between those copper features — electronically programmable in the same way as a conventional LCD display — allowing the chip's optical properties to be reconfigured in real time. The result is a single component capable of steering, focusing, shaping, and splitting beams of reflected light, effectively replacing multiple discrete optical components with no moving parts.
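The beam steering described above follows from the generalized Snell's law for metasurfaces: a phase ramp programmed across the subwavelength elements redirects the reflected beam, and changing the ramp re-aims it electronically. The sketch below illustrates the relationship; the element pitch and wavelength are illustrative assumptions, not disclosed Lumotive specifications.

```python
import math

# Hypothetical parameters (not disclosed by Lumotive), chosen only to
# illustrate the physics of metasurface beam steering.
WAVELENGTH_UM = 1.55   # telecom-band light, micrometers
PITCH_UM = 0.5         # subwavelength element spacing (< wavelength)

def steering_angle_deg(n_elements_per_ramp):
    """Angle at which a phase ramp of 2*pi, spread across n elements,
    redirects the reflected beam (generalized Snell's law)."""
    sin_theta = WAVELENGTH_UM / (n_elements_per_ramp * PITCH_UM)
    if abs(sin_theta) > 1:
        raise ValueError("phase gradient too steep for this wavelength")
    return math.degrees(math.asin(sin_theta))

# A gentler ramp (more elements per 2*pi of phase) steers to a shallower
# angle, so reprogramming the per-element phases re-aims the beam.
for n in (4, 8, 16):
    print(f"{n} elements per 2*pi ramp -> {steering_angle_deg(n):.1f} deg")
```

The same reprogrammability is what lets one chip also focus, shape, or split a beam: those operations are just different spatial phase patterns written to the liquid crystal elements.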

Lumotive's Path From Lab Physics to Data Center Hardware

The commercial viability of Lumotive's approach required extensive foundry development. "We had to go through a lot of R&D at the foundries to not only make our devices functional, but also commercially viable in terms of the right cost and right reliability," Heidari says. The company reports its chips can handle the industry-standard 256-by-256 port configuration and scale to 10,000-by-10,000 ports, a figure that would represent a substantial increase in switching capacity for large-scale AI clusters. Lumotive plans to ship its first optical switches by the end of 2026, though the company has not disclosed pricing or named specific customers.
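To put those port counts in perspective, a quick back-of-envelope comparison of the number of switchable input-to-output paths each configuration supports (the port figures are from the article; the arithmetic is ours):

```python
# An N-by-N optical circuit switch can route any of N inputs to any of
# N outputs, giving N*N possible input->output pairings.
standard_ports = 256
claimed_ports = 10_000

standard_paths = standard_ports * standard_ports   # 65,536 pairings
claimed_paths = claimed_ports * claimed_ports      # 100,000,000 pairings

print(f"{claimed_paths // standard_paths}x more switchable paths")
# -> 1525x more switchable paths
```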

Neurophos is targeting a different layer of the AI infrastructure stack: compute itself. The Austin, Texas-based startup is building optical processors that perform AI calculations using light rather than electrons — an approach known as optical computing. The appeal is power efficiency: photonic computation can, in principle, perform certain matrix operations at a fraction of the energy cost of GPU-based processing.

Optical Modulators 1/10,000th the Size of Existing Designs

The core obstacle blocking optical computing from commercial relevance has been density. Existing optical processors are too physically large to match the compute density of leading electronic chips. Neurophos says its metamaterial-based optical modulators — the photonic equivalent of a transistor — are 1/10,000th the size of current designs, fabricated entirely in standard CMOS processes with no exotic materials required. "If you wanted to do that with off-the-shelf silicon photonics, your chip would be a square meter in size," Bowen says, describing a 1,000-by-1,000 array of modulators fitted into a 5-by-5-millimeter chip area.
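Bowen's square-meter comparison can be sanity-checked with simple arithmetic. The figures (a 1,000-by-1,000 modulator array in a 5-by-5-millimeter area, and modulators 1/10,000th the size of existing designs) come from the article; the unit conversions are ours.

```python
# Area available per modulator on the claimed Neurophos chip.
chip_area_mm2 = 5 * 5                    # 5 x 5 mm chip = 25 mm^2
n_modulators = 1_000 * 1_000             # 1,000 x 1,000 array = 1e6 modulators
area_per_mod_um2 = chip_area_mm2 * 1e6 / n_modulators   # 25 um^2 each

# Same array if each modulator were 10,000x larger, as with conventional
# silicon photonics per the article's claim.
conventional_area_m2 = (area_per_mod_um2 * 10_000) * n_modulators / 1e12
print(f"{conventional_area_m2:.2f} m^2")   # 0.25 m^2
```

The result, 0.25 square meters, is the same order of magnitude as the "square meter" chip Bowen describes, so the two claims are at least internally consistent.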

When a laser beam encoding data strikes a Neurophos chip, the configuration of each metamaterial element modifies the reflected beam to encode outputs from AI inference or training tasks. The company claims this architecture will deliver 50 times the compute density and 50 times the energy efficiency of Nvidia's Blackwell-generation GPUs, a benchmark comparison that, if validated independently, would represent a significant shift in the GPU-dominated AI compute market. Neurophos says hyperscalers, the world's largest cloud providers, will evaluate two proof-of-concept chips during 2025, with first full systems targeted for early 2028 and production volumes ramping in mid-2028. The company has not disclosed its funding level or valuation.

Manufacturing Credibility as the Critical Variable

Both companies lean heavily on standard chipmaking processes as a signal of commercial seriousness. Exotic fabrication methods have historically stranded photonics startups at the prototype stage, unable to reach the cost curves needed to compete with mature silicon electronics. By keeping their designs within existing foundry capabilities, Lumotive and Neurophos aim to sidestep that trap, though neither has yet shipped a product at scale.

The broader competitive landscape includes well-funded photonics players such as Ayar Labs, Lightmatter, and Celestial AI, all pursuing optical interconnect or optical compute approaches with substantial venture backing. Lightmatter, for instance, raised $400 million at a $4.4 billion valuation in late 2024. Metamaterial-specific approaches remain a smaller subset of that field, and independent validation of performance claims, particularly Neurophos's 50x efficiency figure versus Blackwell, will be closely watched when hyperscaler evaluations conclude.

What This Means

If either company's performance claims hold up under hyperscaler scrutiny, optical metamaterials could reshape the economics of AI infrastructure — reducing the energy and capital costs that currently constrain how fast AI systems can scale.