Pre-fabricated AI data centers small enough to transport by truck are emerging as a faster, cheaper alternative to conventional facilities, with Duos Edge AI and LG CNS each deploying modular units packed with 576 Nvidia GPUs that can be operational in approximately six months.

The urgency is real. AI hardware procurement has surged, but the construction timelines for traditional data centers — typically two to three years — mean that companies are sitting on expensive GPUs with nowhere to put them. "I just came back from Nvidia's GTC, and a lot of [companies] are sitting on their deployment because their data centers aren't ready, or they can't find the space," said Doug Recker, CEO of Duos Edge AI.

The turnaround is so quick that building the pre-fabricated unit isn't always the constraint — permits are.

How the Pods Work

Duos Edge AI's compute pods measure 55 feet long and 12.5 feet wide — slightly larger than a standard shipping container and built for truck transport. Inside, racks of GPUs mirror the layout of conventional data centers, supported by liquid cooling systems capable of handling intensive AI workloads. Power modules sit alongside the compute pods on-site, and redundant fiber connections allow multiple pods to operate in unison.

Recker described the assembly process plainly: "Everything is built off-site at a factory, and we can put it together like a jigsaw puzzle." Site preparation involves pouring a concrete pad — no steel-and-concrete shell required. The company recently signed a deal with AI infrastructure firm Hydra Host to deploy four pods totaling 2,304 GPUs, with an option to double that to 4,608 GPUs.

Duos is not new to modular deployments. The company previously served rural customers, including a school district in Amarillo, Texas, though those earlier edge data centers were less powerful than the GPU-dense units now targeting AI workloads.

LG CNS Pursues Hyperscale Ambitions

In South Korea, LG CNS — the IT infrastructure subsidiary of LG — has announced its own AI Modular Data Center, also built around 576 Nvidia GPUs per unit. The company plans to deploy its first units in the port city of Busan, with ambitions to eventually place up to 50 units at a single site. At that scale, a Busan campus would house more than 28,000 GPUs in total.
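The capacity math behind those figures is simple multiplication. A quick sketch, using only the unit counts stated in the announcement (the per-site total is computed here, not quoted from LG CNS):

```python
# Illustrative capacity arithmetic from the figures in this article.
GPUS_PER_UNIT = 576       # Nvidia GPUs per LG CNS modular unit
MAX_UNITS_PER_SITE = 50   # LG CNS's stated ceiling for one site

site_capacity = GPUS_PER_UNIT * MAX_UNITS_PER_SITE
print(site_capacity)  # 28800 -- consistent with "more than 28,000 GPUs"
```

The same arithmetic applies to the Duos deal above: four pods at 576 GPUs each is 2,304, and doubling the pod count doubles the total to 4,608.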

"By adopting a modular approach, the AI Modular Data Center can be incrementally expanded through the combination of dozens of AI Boxes," said Heon Hyeock Cho, vice president and head of the datacenter business unit at LG CNS. The company is also developing an expanded single unit capable of supporting more than 4,600 GPUs, with a planned launch before the end of this year, according to Cho.

The Busan deployment reflects a strategic logic different from Duos'. Where Duos targets smaller, faster deployments that sidestep complex permitting, LG treats modularity as a path to hyperscale capacity, building large by assembling many small units incrementally.

A Market Growing Fast Enough to Attract Giants

Duos and LG CNS are not operating in isolation. Hewlett Packard Enterprise, Vertiv, and Schneider Electric all have modular data center products available or in development. According to market research firm Grand View Research, the global modular data center market could more than double in size by 2030.

The economics underpin the interest. Recker said a five-megawatt modular deployment costs approximately $25 million, with Duos' cost per megawatt running at roughly half the rate of larger conventional facilities. Smaller deployments also encounter less regulatory friction — an increasingly relevant factor as local governments in the United States and Europe grow more resistant to large data center projects, citing energy consumption, water use, and grid strain.
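Recker's figures imply a per-megawatt rate, sketched below. Note that the conventional baseline is inferred from "roughly half the rate," not stated directly in the article:

```python
# Illustrative cost comparison derived from Recker's stated figures.
modular_total_cost = 25_000_000   # five-megawatt modular deployment
modular_capacity_mw = 5

modular_per_mw = modular_total_cost / modular_capacity_mw
# Inferred, not quoted: if modular is "roughly half" the conventional
# rate, conventional lands near twice the modular figure.
conventional_per_mw = modular_per_mw * 2

print(modular_per_mw)       # 5000000.0  -> ~$5M per MW
print(conventional_per_mw)  # 10000000.0 -> ~$10M per MW (implied)
```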

For Duos, the permitting advantage is structural. Its smaller footprint and lower power draw fall below the thresholds that trigger the most intensive regulatory review, allowing the company to move faster than hyperscale developers. The constraint, Recker noted, isn't manufacturing — a pre-fabricated unit can be built in 60 to 90 days — but obtaining site permits, which can take longer even for modest installations.

What This Means

For any organization holding Nvidia GPUs without a facility to run them in, modular data centers now represent a viable, cost-competitive alternative to waiting years for a conventional build — and the entry of HPE, Vertiv, and Schneider Electric indicates this is becoming mainstream infrastructure, not a niche workaround.