If Nvidia really commits up to $100 billion to turbocharge OpenAI’s next wave of AI data centers, the headline number isn’t the only big figure. The power bill—and the grid capacity behind it—will be staggering. Recent reporting says the OpenAI infrastructure push (“Stargate” and related campuses) targets multi-gigawatt sites across the U.S., with Nvidia supplying the silicon backbone and taking a central role in the build-out. Think AI factories, not server rooms.
Below, we unpack what that means in plain numbers: megawatts, terawatt-hours, dollars—and what it will take for the grid to keep up.
First-order power math (and why it’s huge)
Let’s translate capacity into electricity:
- If the platform averages 10 GW of load (a round number for “ten nuclear reactors”):
  - Energy per year: 10 GW × 8,760 h ≈ 87.6 TWh.
  - With a modern PUE (power usage effectiveness) of ~1.2 (typical for hyperscale AI with liquid cooling), total grid draw becomes ~105 TWh/year. (PUE accounts for cooling, power conversion, etc.)
- Near-term phase at ~7 GW (as some reports outline): ~61 TWh/year of IT load, or ~73 TWh/year at PUE 1.2 (see the sketch below).
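The arithmetic above is easy to sanity-check. Here is a minimal back-of-envelope sketch in Python; the GW figures and the PUE of 1.2 are the assumptions stated in the list, not measured loads:

```python
# Back-of-envelope energy math: average load (GW) -> annual energy (TWh),
# with PUE scaling IT load up to total facility draw.

HOURS_PER_YEAR = 8_760
PUE = 1.2  # assumed typical for hyperscale AI with liquid cooling

def annual_twh(avg_load_gw: float, pue: float = 1.0) -> float:
    """Annual energy in TWh for a given average load in GW."""
    return avg_load_gw * HOURS_PER_YEAR * pue / 1_000

for gw in (10, 7):
    it_only = annual_twh(gw)           # IT load only
    total = annual_twh(gw, PUE)        # including cooling, conversion, etc.
    print(f"{gw} GW -> {it_only:.1f} TWh/yr IT, {total:.1f} TWh/yr total at PUE {PUE}")
# 10 GW -> 87.6 TWh/yr IT, 105.1 TWh/yr total at PUE 1.2
#  7 GW -> 61.3 TWh/yr IT,  73.6 TWh/yr total at PUE 1.2
```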
For context, the IEA estimates all data centers globally used ~415 TWh in 2024 and may double by 2030; a single 10 GW AI platform would already be a quarter of today’s entire sector—or roughly a tenth of the IEA’s 2030 projection—all by itself. That’s why utilities, regulators, and operators are sounding alarms.
What drives those watts?
1) Accelerators, not CPUs
AI clusters are now dominated by GPU/accelerator servers. A single Nvidia DGX H100/H200 node is an 8U box drawing on the order of 10 kW at peak; scale that to rows of 100 kW+ racks and halls in the 100–300 MW range, and you see why multi-site build-outs hit gigawatt levels.
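To see how node-level draw compounds into campus-scale load, here is a rough scaling sketch. The ~10.2 kW DGX figure and the 100 kW rack / 300 MW hall ranges come from this article; the resulting counts are purely illustrative, not a real bill of materials:

```python
# Back-of-envelope scaling from one GPU node to a multi-GW campus.
# All figures are illustrative assumptions, not a deployment plan.

NODE_KW = 10.2        # ~peak draw of one DGX H100 node (8 GPUs)
RACK_KW = 100         # a dense AI rack in the 100 kW+ class
HALL_MW = 300         # a large data hall at the top of the 100-300 MW range
CAMPUS_GW = 10        # the headline "ten reactors" target

nodes_per_rack = RACK_KW / NODE_KW               # ~10 nodes
racks_per_hall = HALL_MW * 1_000 / RACK_KW       # ~3,000 racks
halls_per_campus = CAMPUS_GW * 1_000 / HALL_MW   # ~33 halls

print(f"nodes per rack:   {nodes_per_rack:.0f}")
print(f"racks per hall:   {racks_per_hall:,.0f}")
print(f"halls per campus: {halls_per_campus:.0f}")
```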
2) Cooling moves to liquid
At these densities, air alone won’t cut it. Hyperscalers are standardizing direct-to-chip liquid cooling and even 400 VDC power distribution to feed denser racks, with roadmaps pointing at 600 kW to 1 MW per rack later this decade.
3) Utilization patterns
AI power isn’t just a steady hum. Training runs hit sustained peaks; inference can be bursty by time-of-day and product traffic. That volatility complicates grid interconnects and pricing.
What would the electricity bill look like?
Electricity cost varies wildly by site and contract. At $0.07–$0.10/kWh—not unusual for industrial-scale power in North America—annual energy spend for 10 GW at PUE 1.2 (~105 TWh/yr) lands around $7.3–$10.5 billion per year. Locking in long-tenor supply (PPAs), behind-the-meter generation, and flexible tariffs will be decisive to keep unit economics sane. (Estimate derived from the load math above; pricing ranges are typical industry bounds.)
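A minimal sketch of that cost arithmetic, assuming the ~105 TWh/yr draw and the $0.07–$0.10/kWh band above (real contracts will vary by site, tariff, and PPA structure):

```python
# Annual electricity spend for ~105 TWh/yr of total draw at the
# article's assumed price band. Real PPAs and tariffs will differ.

TOTAL_KWH = 105e9      # ~105 TWh/yr expressed in kWh

for price_usd_per_kwh in (0.07, 0.10):
    spend_billion = TOTAL_KWH * price_usd_per_kwh / 1e9
    print(f"${price_usd_per_kwh:.2f}/kWh -> ~${spend_billion:.2f}B per year")
# ~$7.35B at $0.07/kWh, ~$10.50B at $0.10/kWh
```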
Can the grid actually deliver this?
Yes—but only with years of planning. The IEA’s 2025 analysis highlights that while data-center demand is still a single-digit share of global electricity, it’s concentrated in specific nodes (think transmission constraints), making interconnections, substations, and new lines the critical path. Expect developers to pair utility-scale solar/wind + storage, long-duration batteries, heat reuse, and even nuclear PPAs/SMRs as siting expands.
Goldman Sachs projects 160% growth in data-center power demand by 2030; AI is the main driver, and local grid headroom will dictate where these AI factories land.
A quick reality check on “10 reactors worth of power”
That metaphor isn’t hyperbole. Reports around Stargate now reference 10 GW targets and multiple U.S. sites; Nvidia’s own Kyber/NVL rack roadmaps point toward 600 kW-plus per rack, which, multiplied by thousands of racks, lands squarely in multi-GW campuses. Whether OpenAI’s fleet runs at those averages every hour is another matter, but nameplate capacity plus realistic utilization still implies nation-scale electricity draws.
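For a concrete sense of “thousands of racks”: assuming the reported 600 kW-class racks and a 10 GW target, the implied rack count is simple division (an illustration, not a disclosed plan):

```python
# How many 600 kW racks would a 10 GW campus fleet imply?
RACK_KW = 600
TARGET_GW = 10

racks = TARGET_GW * 1e6 / RACK_KW   # GW -> kW, divided by per-rack draw
print(f"~{racks:,.0f} racks at {RACK_KW} kW each")   # ~16,667 racks
```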
Efficiency levers that really matter
- Model/algorithmic efficiency (sparsity, quantization, distillation) reduces FLOPs per task.
- System-level scheduling: shifting flexible training to off-peak or high-renewable windows (a toy sketch follows this list).
- Liquid cooling + heat reuse: already mainstreaming at hyperscale; it cuts PUE and can supply district heating.
- Power electronics upgrades: 400 VDC distribution trims losses and frees rack space.
- Silicon diversity: Nvidia leads, but ASICs/TPUs/custom accelerators and optical interconnects can cut joules per operation.
(IEA and EPRI outline these pathways in their 2025 guidance.)
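As a toy illustration of the scheduling lever above, here is a sketch that picks the cheapest contiguous window for a flexible training job. The hourly prices are made-up placeholders, and a real scheduler would also weigh carbon intensity, deadlines, and cluster availability:

```python
# Toy price-aware scheduling: pick the cheapest contiguous window
# for a flexible training job. Prices are placeholders, not market data.

from typing import Sequence

def best_window(hourly_price: Sequence[float], job_hours: int) -> int:
    """Return the start hour whose job_hours-long window costs least."""
    costs = [
        sum(hourly_price[start:start + job_hours])
        for start in range(len(hourly_price) - job_hours + 1)
    ]
    return min(range(len(costs)), key=costs.__getitem__)

# Example: 24 hourly prices ($/MWh), cheap overnight and midday-solar hours.
prices = [40, 35, 30, 28, 27, 30, 45, 60, 70, 65, 50, 35,
          25, 22, 24, 30, 55, 75, 80, 78, 70, 60, 50, 45]
start = best_window(prices, job_hours=6)
print(f"Run the 6-hour job starting at hour {start}")   # hour 10 here
```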
Final thoughts
Imagine building a whole new kind of factory, except instead of steel or cars, it makes intelligence. That’s what the Nvidia–OpenAI plan really is: a $100B network of AI campuses packed with chips and humming servers. These places won’t just glow; they’ll sip (and sometimes gulp) electricity by the terawatt-hour. If the 10-gigawatt goal comes true and those machines stay busy, we’re talking roughly 90–105 TWh a year once you include cooling and overhead (that “PUE ~1.2” bit just means the buildings are pretty efficient, but still need extra power beyond the computers themselves). In plain terms: it’s the kind of energy use you’d normally associate with an entire country, not a handful of buildings.
Doing this the right way means teaming up with utilities, signing long-term deals for wind and solar, and squeezing more work out of every watt. Think liquid-cooled rooms instead of fans, capturing waste heat instead of dumping it, and smarter power systems that waste less along the way. The tech is exciting, but making the energy puzzle fit is what turns a bold idea into something sustainable.
Q&A: The Power Reality Behind OpenAI–Nvidia’s “AI Factories”
1) What scale are we talking about—really?
A build targeting ~10 gigawatts (GW) of continuous load would consume ~87.6 TWh/year of IT energy (10 GW × 8,760 hours). With a modern PUE ≈ 1.2, total facility draw would be ~105 TWh/year.
2) How does that compare to the whole data-center sector?
The IEA estimates data centers used ~415 TWh in 2024 and could reach ~945 TWh by 2030. A single 10-GW platform at ~105 TWh/yr would be ~25% of 2024’s total or ~11% of 2030’s projection—by itself.
3) What’s the likely near-term phase, and what does it use?
At ~7 GW, IT energy is ~61 TWh/yr; at PUE 1.2, total draw is ~73 TWh/yr.
4) What would the annual electricity bill look like?
At $0.07–$0.10/kWh (industrial-scale power), ~105 TWh/yr implies $7.35–$10.5B/year in energy spend. (Simple math: 105 billion kWh × price.)
5) Why do AI sites pull so much power?
Because they’re built around accelerators (GPUs), not general CPUs. An Nvidia DGX H100 system (8 H100 GPUs) is commonly cited at ~10.2 kW peak; multiply by thousands of nodes and dense racks and you reach hall- and campus-scale megawatts.