Tower Semiconductor and Scintil Photonics have produced what they describe as the world's first single-chip DWDM light engine for AI infrastructure, integrating lasers directly into a co-packaged optics design and potentially doubling GPU utilization rates in large AI clusters.
Dense wavelength division multiplexing — transmitting multiple optical signals, each on its own wavelength, over a single fiber — has been a cornerstone of telecommunications since the 1990s, but has never been economically or technically viable at data center scale. The two companies plan to detail their manufacturing road map at the OFC 2026 Conference, scheduled for 17 to 19 March in Los Angeles.
Why Lasers Were the Missing Piece in Co-Packaged Optics
AI data centers increasingly rely on optical connections rather than copper to move data between GPUs at the speeds modern workloads demand. Co-packaged optics (CPO) — integrating optical components into the same package as the processor — has already been adopted by Nvidia and Broadcom for scale-out networking, which links separate clusters within a data center. But those deployments use only a single wavelength per fiber.
The harder problem is scale-up networking: directly connecting accelerators within a single rack or cluster so that dozens of GPUs and memory units function as one entity. That demands seamless bandwidth and extremely low latency. Until now, the laser itself had no scalable path onto the CPO chip.
Matt Crowley, CEO of Scintil Photonics, frames the scale-up challenge in practical terms. "When latency spikes, the utilization rate of the GPUs drops significantly," he says. "DWDM-based multiplexing can double that utilization." Any processor running faster than the network forces every other GPU to wait — a bottleneck that compounds across hundreds of accelerators. "The data transmitted within an AI data center is the equivalent of massively scaling a supercomputer," Crowley adds.
How Scintil's SHIP Technology Works
Scintil's answer is its SHIP (Scintil Heterogeneous Integrated Photonics) process, which bonds optical gain materials onto a standard 300-millimeter silicon photonics wafer supplied by Tower Semiconductor. The wafer is flipped to expose its buried oxide layer, and small squares of unpatterned III-V semiconductor (indium phosphide) dies are bonded precisely at each laser site, minimizing use of the expensive material. Photolithography tools then etch diffraction gratings to form eight distributed feedback lasers.
"We're not reinventing the laser," Crowley says. "The precision of advanced photolithography simply delivers tighter wavelength stability and spacing than traditional manufacturing on silicon could achieve."
The resulting product — the LEAF Light photonic integrated circuit — integrates two sets of eight distributed feedback laser arrays. Each fiber port delivers 8 or 16 wavelengths with 100- or 200-gigahertz channel spacing, preventing overlap or mode hopping. A companion ASIC chip handles electronic control and monitoring of the laser array.
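To make the channel-spacing figures concrete, here is a minimal sketch that lays out an 8-wavelength grid at 100-gigahertz spacing, the configuration described above. It assumes channels sit on the standard ITU-T G.694.1 DWDM grid anchored at 193.1 terahertz; the article does not specify which grid positions LEAF Light actually uses, so the anchor is illustrative.

```python
# Sketch: an 8-channel DWDM grid at 100 GHz spacing.
# Anchor frequency (193.1 THz) follows the ITU-T G.694.1 convention;
# the actual LEAF Light channel plan is not disclosed in the article.
C = 299_792_458  # speed of light, m/s

def dwdm_grid(n_channels: int, spacing_ghz: float, anchor_thz: float = 193.1):
    """Return (frequency in THz, wavelength in nm) for each channel."""
    channels = []
    for i in range(n_channels):
        f_hz = anchor_thz * 1e12 + i * spacing_ghz * 1e9
        channels.append((f_hz / 1e12, C / f_hz * 1e9))
    return channels

for f_thz, lam_nm in dwdm_grid(8, 100.0):
    print(f"{f_thz:.1f} THz  ->  {lam_nm:.3f} nm")
```

At 100-gigahertz spacing the eight channels span less than a nanometer of wavelength around 1550 nanometers, which is why the tight wavelength stability Crowley cites matters: drift of even a fraction of a nanometer would push a laser into its neighbor's channel.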
From 400 Gb/s on One Channel to 1.6 Tb/s Across Eight
The architectural shift DWDM enables is significant. Rather than pushing 400 gigabits per second down a single channel — which raises error-correction overhead and latency risk — the LEAF Light chip spreads the same traffic across eight channels of 50 Gb/s each, increasing total capacity per fiber while easing the pressure on any individual channel. The design supports up to 1.6 terabits per second over a single fiber.
According to an Nvidia road map cited in an IEEE Spectrum report, future DWDM interconnects could eventually achieve sub-picojoule-per-bit energy consumption, a figure that would represent a step change in data center power efficiency at a time when AI infrastructure energy costs are under intense scrutiny.
Crowley identifies latency as a primary benefit. High-bandwidth single-channel links require aggressive forward error correction, which increases the odds of delayed delivery. Spreading the same aggregate throughput across multiple lower-bandwidth DWDM channels reduces that risk, keeping GPUs processing rather than waiting.
Production Timeline and Path to 2028 Deployment
Scintil and Tower plan to deliver tens of thousands of units to customers by the end of 2026, with production scaling by an order of magnitude in 2027. Customer deployment of DWDM in scale-up networks is targeted for 2028, a timeline Crowley says the supply chain will be ready to support.
No valuation or funding figures were disclosed in connection with this announcement. The companies' manufacturing partnership — combining Scintil's heterogeneous integration process with Tower's established silicon photonics wafer production — represents the commercial model: Scintil contributes the photonic design and bonding process; Tower provides the semiconductor manufacturing infrastructure.
The broader industry context matters here. As AI model training and inference workloads grow, the interconnect between GPUs has emerged as a primary bottleneck and power sink. Moving from electrical to optical links in scale-out networks was the first wave; integrating multiwavelength lasers into the processor package itself is the next.
What This Means
If Scintil and Tower deliver at scale, AI data center operators gain a credible path to doubling GPU utilization while cutting interconnect power — two metrics that directly determine the economics of training and running large AI models.
