The Uptime Institute’s 2026 predictions report is pretty blunt about it: power availability — not chips, not capital, not permits — is the thing that will actually constrain data centre expansion through 2030. Their words: “It is unclear how the industry will continue to deliver capacity at the rate that many projections forecast.”
Mark Zuckerberg put it even more directly. Meta signed nuclear power deals totalling more than 6 gigawatts in January 2026 — making them the largest single purchaser of nuclear power among the AI tech giants. His take: “Power will be the bottleneck that will limit AI growth.”
If your organisation is spending $50K–$500K a year on cloud, this is not a problem that belongs to the people who build data centres. It is your problem. This article gives you a concrete framework for assessing AI power risk: how to evaluate cloud regions, what to ask your cloud providers, and how to present all of this to a CEO or board with no energy background.
Why is power availability now a strategic risk, not a facilities problem?
Power is the binding constraint on AI infrastructure through 2030 and the gap is structural, not temporary. Data centres can be built in roughly three years. A nuclear plant takes at least a decade. Microsoft, Google, Amazon and Meta are signing multi-gigawatt nuclear deals because the grid simply cannot scale as fast as the server farms being built on top of it.
One term worth understanding is baseload power. Unlike solar or wind, baseload sources — nuclear, gas, geothermal — generate electricity continuously. AI workloads running 24/7 cannot be paused while waiting for the clouds to clear. That is why nuclear is getting so much attention right now, and why nuclear power’s AI renaissance is not a niche energy story — it is an infrastructure story.
For a business spending $50K–$500K a year on cloud, you are exposed to the second-order effects of decisions being made at the top of this supply chain. Constrained supply upstream becomes pricing pressure downstream. And it will show up in your cloud bills.
How do hyperscaler power constraints flow through to your cloud costs?
Here is the chain of events when a hyperscaler cannot secure enough power in a given region: data centre supply growth slows, available compute capacity shrinks, cloud pricing goes up. Simple as that.
Hyperscaler power constraints → slower data centre supply growth → regional capacity limits → cloud pricing pressure → AI inference OPEX rises → AI roadmap budget compressed.
In the PJM Interconnection — the 13-state Mid-Atlantic and Midwest electricity market that covers Northern Virginia, the highest-density AI data centre region on the planet — power costs are expected to surge 60% in 2025. Washington DC residential customers were already seeing bills increase $21 a month from June 2025, with roughly $10 of that attributable directly to data centres. That is not a projection. That is the transmission mechanism in real time.
Between April and June 2025, the zoning bottleneck constraining cloud data centre supply became undeniable: communities blocked or delayed $98 billion in projects across 11 states — two-thirds of tracked projects halted. When supply growth is constrained by local opposition, available capacity grows more slowly than demand, even if your cloud provider never mentions the word energy.
Then there is the grid interconnection queue — the backlog of projects waiting to connect new power sources to the grid. In the US, some requests are facing a seven-year wait. Understanding how hyperscalers are responding to the power crisis explains why the solutions here are measured in decades, not quarters.
What does cloud region selection look like when energy risk is a factor?
Cloud region selection used to come down to three things: latency, compliance, and cost. Energy availability is now the fourth.
The highest-risk US cluster: PJM territory. The PJM Interconnection covers 13 US Mid-Atlantic and Midwest states, including Northern Virginia — the world’s highest concentration of AI data centres. The Trump administration directed PJM to hold an emergency electricity auction because AI data centre expansion was increasing costs for residential customers. Some data centres in Virginia are waiting seven or more years for grid connections.
The clearest European example: Dublin, Ireland. Ireland’s Transmission System Operator stopped accepting applications for new data centres until 2028. That is the effect of grid saturation on cloud region availability, made concrete. No applications. Until 2028. Worth knowing before you build your architecture around that region.
Gas turbines as a risk signal. A cloud provider running primarily on on-site gas turbines has a very different risk profile to one with long-term nuclear Power Purchase Agreements in place. Gas turbines mean higher near-term cost exposure and no long-term pricing stability.
For practical evaluation: check whether a region has active capacity warnings, whether it sits in PJM territory (for AWS that is US East (N. Virginia) and US East (Ohio); for Azure it is the East US regions), and whether your provider has disclosed anything about energy sourcing for that region.
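That evaluation can be scripted as a first pass. A minimal sketch, assuming an illustrative region-to-grid mapping: the region lists and moratorium notes below are assumptions drawn from this article, not authoritative provider data, so verify them against your provider's own documentation before relying on the output.

```python
# Sketch: flag cloud regions that sit in known high-risk grid territories.
# The mappings below are illustrative assumptions, not provider data.

PJM_TERRITORY_REGIONS = {
    "aws": {"us-east-1", "us-east-2"},   # N. Virginia, Ohio
    "azure": {"eastus", "eastus2"},      # East US regions
}

MORATORIUM_REGIONS = {
    ("aws", "eu-west-1"): "Dublin: no new data centre grid applications until 2028",
}

def assess_region(provider: str, region: str) -> list[str]:
    """Return a list of energy-risk flags for a provider region."""
    flags = []
    if region in PJM_TERRITORY_REGIONS.get(provider, set()):
        flags.append("PJM territory: elevated near-term energy cost and capacity risk")
    note = MORATORIUM_REGIONS.get((provider, region))
    if note:
        flags.append(f"Grid moratorium: {note}")
    return flags

print(assess_region("aws", "us-east-1"))
```

An empty result does not mean a region is safe, only that none of the tracked flags apply; the value of the script is forcing you to maintain the mapping as capacity warnings appear.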
What questions should you ask your cloud providers about long-term power sourcing?
Cloud providers do not proactively disclose energy sourcing at the regional level. You have to ask explicitly.
First, a definition. A Power Purchase Agreement (PPA) is a long-term contract — typically 15 to 25 years — where a buyer purchases electricity directly from a generator at a negotiated price. This is how hyperscalers lock in nuclear and firm-power supply. Knowing how to evaluate power source reliability as a vendor due diligence criterion is what gives these questions their teeth.
The seven questions to ask your cloud providers:
1. What percentage of your data centres in [region] are powered by carbon-free or nuclear energy today? This separates marketing claims from infrastructure reality.
2. Do you have long-term PPAs for nuclear or firm-power energy sources in your key regions? A yes with specifics — counterparty, plant, term length — signals serious long-term energy security.
3. What is your backup power strategy for the next 3–5 years as grid capacity tightens? On-site gas turbines, or grid-connected firm power with multi-year contracts?
4. Has community opposition ever delayed or blocked a planned data centre expansion in your target regions? A provider that speaks to it honestly understands the risk landscape.
5. How does your regional capacity expansion plan account for grid interconnection queue timelines? A realistic answer acknowledges multi-year timescales.
6. Are your data centres in [region] currently connected to the grid or operating on on-site generation? This distinguishes stable long-term power from bridge solutions.
7. How would a sustained increase in regional energy costs affect your pricing commitments in that region? This tests whether your provider has hedged its energy costs — or whether rising OPEX will flow through to your invoice.
How do you frame energy risk for a CEO or board with no nuclear background?
The goal is not to explain how nuclear reactors work. Translate the risk into the language boards already use: budget predictability, AI roadmap deliverability, and business continuity.
Here is how you frame it:
- Context: The world’s largest cloud providers are signing multi-gigawatt nuclear energy deals because the grid cannot keep pace with AI data centre growth.
- Risk: If cloud capacity in key regions grows slower than demand, pricing pressure on AI compute will follow — and our AI roadmap budget assumptions may not hold.
- Exposure: We currently have [X]% of our AI workloads in [region], which sits in the highest-risk cluster for near-term energy availability and pricing volatility.
- Action: We recommend [evaluation of regional concentration / provider due diligence / multi-region hedging] to reduce concentration risk and build a contingency into the AI compute budget.
Keep the nuclear engineering out of it. Put the specific numbers in: PJM power costs expected to surge 60% in 2025; $98 billion in blocked US data centre projects in a single quarter; a 15% GPU price hike adds $3,700+ a month per instance. Then run a stress-test — model a 10–20% increase in GPU instance pricing in your primary region over 24 months alongside a diversion scenario where workloads shift to a higher-cost region. Run both against your AI roadmap budget.
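The stress-test above is simple arithmetic, and it is worth showing the board the working. A sketch, assuming a hypothetical $24,800/month GPU instance and a fleet of four (substitute figures from your own bill; the 25% diversion premium is likewise an illustrative assumption):

```python
# Sketch of the stress-test described above. All dollar figures are
# illustrative assumptions; replace them with numbers from your own bill.

monthly_instance_cost = 24_800   # hypothetical GPU instance, USD/month
instances = 4                    # hypothetical fleet size
months = 24                      # planning horizon

baseline = monthly_instance_cost * instances * months

for hike in (0.10, 0.15, 0.20):
    extra_per_instance_month = monthly_instance_cost * hike
    total_extra = baseline * hike
    print(f"{hike:.0%} hike: +${extra_per_instance_month:,.0f}/mo per instance, "
          f"+${total_extra:,.0f} over {months} months")

# Diversion scenario: workloads shift to a region priced, say, 25% higher.
diversion_premium = 0.25
print(f"Diversion: +${baseline * diversion_premium:,.0f} over {months} months")
```

At these assumed figures, a 15% hike adds roughly $3,700 a month per instance, which is the shape of the number the board needs to see next to the AI roadmap budget.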
The nuclear buildout reshaping AI infrastructure is happening because the grid cannot deliver what AI requires. That is the budget-language version of this story — which is all a board needs.
What signals suggest power risk is becoming real for your infrastructure?
Power risk does not arrive as a single announcement. It shows up through observable signals you can start monitoring right now.
Signal 1 — Cloud provider capacity warnings. Watch for notices that specific regions are “at capacity” or that new instance types are available only in limited regions.
Signal 2 — GPU and AI instance pricing changes. Regional price divergence — where the same instance type costs materially more in one region than another — indicates energy or capacity constraints are being priced in.
Signal 3 — SLA modifications. Changes to uptime commitments in specific regions signal a provider managing capacity constraints.
Signal 4 — Data centre permit news. In Indiana alone, a dozen projects lost rezoning bids last year. Permit denials in your provider’s key regions constrain future supply growth.
Signal 5 — Utility rate filings. PJM capacity prices surged by a factor of 10 relative to prior cycles. Rate hike news where your provider has heavy data centre concentration is an upstream cost signal.
Signal 6 — Hyperscaler earnings disclosures. When major cloud providers mention energy costs or capacity constraints in earnings calls or 10-K filings, treat it as forward pricing guidance.
Set up a Google Alert for “data center permit [region]” and “cloud capacity [provider]”. Takes minutes, costs nothing, gives you early warning before it reaches your invoice.
What is the honest caveat about AI bubble risk and what does it mean for this framework?
This framework’s urgency rests on one assumption: sustained AI investment through 2030.
GMO investment research has concluded that “we are in a US stock market and AI bubble.” JP Morgan finds nearly 40% of the S&P 500’s market cap is exposed to AI perceptions or realities. If enterprise AI adoption plateaus before the 2028–2032 window when new nuclear capacity comes online, the supply-demand dynamic driving this risk may relax.
What the caveat does not change: the due diligence questions are worth asking regardless. A cloud provider that cannot answer questions about their energy sourcing is carrying undisclosed risk beyond power alone. No cost to asking. Run it once, understand your exposure, revisit when signals shift. And keep an eye on nuclear power’s AI renaissance for the bigger picture as things develop.
Frequently Asked Questions
What is AI power risk and does it affect companies that don’t own data centres?
AI power risk is the downstream exposure your business faces when hyperscaler power constraints cascade through cloud pricing, regional availability, and AI inference costs. Yes — you are exposed even if you own no servers. Rising energy costs for data centre operators eventually show up as pricing pressure on cloud services. You are buying compute from providers subject to these constraints, full stop.
What is the PJM Interconnection and why does it matter for cloud infrastructure?
The PJM Interconnection is the electricity market covering 13 US Mid-Atlantic and Midwest states, including Northern Virginia — the highest-density AI data centre region in the world. PJM’s grid is approaching saturation, which means cloud regions within this territory carry elevated near-term energy and pricing risk. If your primary cloud region sits inside PJM territory, this is worth knowing about.
What is a Power Purchase Agreement (PPA) and why should a CTO care?
A PPA is a long-term contract — typically 15 to 25 years — to buy electricity directly from a generator at a set price. A cloud provider with PPAs for nuclear or firm power is more insulated from spot energy market volatility than one relying on market-rate purchases or on-site gas turbines. The hyperscalers are treating this as a multi-decade structural shift, and so should you when evaluating providers.
How does zoning opposition to data centres affect cloud pricing for my company?
Community opposition blocked or delayed $98 billion in US data centre projects across 11 states in a single quarter (Q2 2025). When supply growth is constrained, available capacity grows more slowly than demand. Indirect, but traceable and material — and it shows up in your bills eventually.
Is on-premises AI infrastructure safer from power risk than cloud?
Not necessarily. On-premises shifts energy risk to your company, requiring reliable grid access and potentially long procurement cycles. For most organisations, cloud remains the lower-risk option. The task is not avoiding cloud — it is selecting regions and providers with stronger energy security.
What does a cloud region moratorium look like in practice?
Dublin, Ireland is the clearest current example. Ireland’s Transmission System Operator stopped accepting applications for new data centres until 2028. Cloud customers depending heavily on Dublin-region infrastructure have limited options for capacity expansion until grid investment is completed. That is a moratorium in practice — and it happened with very little warning.
How should I think about gas turbines as a risk signal when evaluating cloud providers?
On-site gas turbines are a short-term bridge, not a stable long-term energy strategy. A provider relying heavily on gas turbines rather than long-term PPAs has higher near-term energy cost exposure. Ask specifically whether key regions run on grid power with PPAs or on on-site generation awaiting grid connection. The answer tells you a lot.
How much of a cloud cost increase should I model as a scenario for rising energy costs?
Model a 10–20% increase in GPU instance pricing in your primary region over 24 months, alongside a scenario where that region faces capacity constraints and workloads must divert to a higher-cost alternative. Run those against your AI roadmap budget and find the threshold at which the roadmap becomes undeliverable. That is your planning number.
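One way to operationalise "find the threshold" is a simple search over price-increase levels against a fixed budget. A minimal sketch; the budget and spend figures are hypothetical placeholders, and a real model would include inference volume growth as well:

```python
# Sketch: find the GPU price increase at which an AI roadmap budget breaks.
# All figures are hypothetical placeholders.

annual_ai_budget = 400_000    # hypothetical AI roadmap budget, USD/year
current_gpu_spend = 300_000   # hypothetical annual GPU spend, USD/year
other_fixed_costs = 60_000    # hypothetical non-GPU AI costs, USD/year

def roadmap_deliverable(price_increase: float) -> bool:
    """True if the budget still covers total AI costs after a GPU price increase."""
    stressed_spend = current_gpu_spend * (1 + price_increase) + other_fixed_costs
    return stressed_spend <= annual_ai_budget

# Search in 1% steps for the first level at which the roadmap breaks.
threshold = next(
    p / 100 for p in range(0, 101) if not roadmap_deliverable(p / 100)
)
print(f"Roadmap becomes undeliverable at a {threshold:.0%} GPU price increase")
```

With these placeholder figures the roadmap breaks at a 14% increase, comfortably inside the 10–20% band this article suggests modelling, which is exactly the situation where the exercise earns its keep.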
What is the grid interconnection queue and why does it explain why cloud capacity takes so long to expand?
The grid interconnection queue is the backlog of projects waiting to connect new power sources to the electrical grid. In the US, some requests face a seven-year wait. Even when a cloud provider commits to new capacity in a constrained region, it may not come online for years — regardless of how much capital they deploy. Capital is not the constraint. Grid access is.
What’s the earliest sign that power risk is affecting my current cloud costs?
Watch for unexplained pricing changes on GPU or AI-intensive instance types in your primary cloud region. Regional price divergence — where the same instance type costs materially more in one region than another — is a leading indicator that energy or capacity constraints are already being priced in.