We’re looking at an electricity demand crisis that’s being driven by AI data centres, and it’s threatening to overwhelm the power infrastructure we have right now. Electricity consumption from data centres is projected to grow from 4.4% to potentially 12% of total US electricity by 2028-2030. Why? GPU clusters in AI facilities are consuming 10-20x more power per rack than traditional servers.
This article is part of our comprehensive guide on Big Tech’s nuclear pivot, where we explore how Microsoft, Amazon, and Google are investing billions of dollars in nuclear power solutions. If you’re planning AI infrastructure investments, you need to understand what’s driving this crisis. This article explains what makes AI infrastructure fundamentally different, puts numbers on the demand projections, and clarifies why nuclear power is emerging as the solution.
What Is the Difference Between an AI Data Centre and a Traditional Data Centre?
AI data centres are facilities purpose-built for training and running large language models and neural networks. They’re not for general-purpose computing.
Traditional data centres are packed with server racks that are typically air-cooled and draw 5 kW to 10 kW of electrical power per rack. AI racks require more than 10 times as much power, and that forces fundamental redesigns. You can’t just swap them in.
The hardware difference is all about dense GPU clusters for parallel processing versus traditional CPU servers built for sequential tasks. In practice? A generative AI training cluster might consume seven or eight times more energy than a typical computing workload.
Air cooling is fine for traditional centres. But AI facilities generate extreme heat concentration, so liquid cooling often becomes mandatory. As Noman Bashir from the MIT Climate and Sustainability Consortium puts it, “what is different about generative AI is the power density it requires.”
The workload characteristics are different too. Traditional computing handles intermittent request-response patterns; AI training runs continuously for weeks at a time. And in an AI facility, around 70% of the footprint now sits outside the IT room, occupied by the equipment that powers the chips and keeps them cool, compared with conventional facilities where servers took up most of the floorspace.
How Much Electricity Do AI Data Centres Consume Compared to Traditional Centres?
Here’s the scale we’re talking about. A typical AI-focused hyperscale data centre consumes as much electricity each year as 100,000 households, while the larger facilities currently under construction are expected to use 20 times as much.
U.S. data centres consumed 183 terawatt-hours (TWh) of electricity in 2024, which works out to more than 4% of the country’s total electricity consumption. But projections show U.S. data centre electricity consumption could reach 325 to 580 TWh annually by 2030, up from 176 TWh in 2023.
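As a quick sanity check on those shares, here is a rough back-of-envelope calculation. The total US consumption figure (around 4,150 TWh per year) is an assumption, not a number from this article, and holding it flat to 2030 is a simplification:

```python
# Back-of-envelope check on the data centre share of US electricity.
# Assumption (not from the article): total US consumption of ~4,150 TWh/year,
# held flat to 2030 for simplicity even though overall demand is also growing.
US_TOTAL_TWH = 4_150

dc_2024_twh = 183        # reported 2024 data centre consumption
dc_2030_low_twh = 325    # low end of the 2030 projection
dc_2030_high_twh = 580   # high end of the 2030 projection

print(f"2024 share:        {dc_2024_twh / US_TOTAL_TWH:.1%}")       # ~4.4%
print(f"2030 share (low):  {dc_2030_low_twh / US_TOTAL_TWH:.1%}")   # ~7.8%
print(f"2030 share (high): {dc_2030_high_twh / US_TOTAL_TWH:.1%}")  # ~14%
```

The high end overshoots the 12% headline figure because total US demand is also expected to grow by 2030; the point is the order of magnitude, not the precise share.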
And it’s creating regional bottlenecks. In 2023, data centres consumed about 26% of the total electricity supply in Virginia—creating grid constraints as AI facilities concentrate demand in specific regions.
At the workload level, an AI inference query for something like ChatGPT uses approximately 10 times more electricity than a traditional Google search. Training a GPT-3 class model requires weeks of continuous operation. A single large training run can cost millions of dollars in electricity alone.
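To see how that per-query gap adds up, here is an illustrative sketch. The per-query energy figures and the daily query volume are assumptions chosen for illustration, not figures from this article:

```python
# Illustrative aggregate inference load from the ~10x per-query gap.
# Per-query energy and daily volume are assumptions, not article figures.
SEARCH_WH_PER_QUERY = 0.3                    # assumed traditional search query
AI_WH_PER_QUERY = 10 * SEARCH_WH_PER_QUERY   # the ~10x figure from above

daily_ai_queries = 1_000_000_000             # hypothetical: 1 billion queries/day

daily_mwh = daily_ai_queries * AI_WH_PER_QUERY / 1e6   # Wh -> MWh
annual_twh = daily_mwh * 365 / 1e6                     # MWh -> TWh

print(f"Daily inference load:  {daily_mwh:,.0f} MWh")  # ~3,000 MWh/day
print(f"Annual inference load: {annual_twh:.2f} TWh")  # ~1.1 TWh/year
```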
Despite hardware improvements, model size and complexity are growing faster than efficiency gains. As Bashir notes, “the demand for new data centres cannot be met in a sustainable way.”
Why Are Electricity Demands from AI Expected to Triple by 2030?
Microsoft with OpenAI, Google with Gemini, Meta, Amazon—they’re all racing to build bigger models and deploy them at scale.
In the US, data centres used around 4% of the nation’s electricity in 2023, and this is set to rise to 7-12% by 2028. At the top of that range, that’s a tripling in five years.
Each new model generation requires exponentially more compute for training. Parameters are increasing from billions to trillions. GPT-3 had 175 billion parameters. GPT-4 is estimated to have over a trillion. Future models will be larger still.
As AI becomes embedded in search, productivity tools, and consumer applications, inference queries are exploding from millions to billions daily. AI has been responsible for around 5-15% of data centre power use in recent years, but this could increase to 35-50% by 2030.
Hyperscalers are building hundreds of new data centres globally. The U.S. remains the dominant market for AI-driven data centre expansion, with 40 GW of new projects under development.
Efficiency improvements can’t keep pace. DeepSeek demonstrated roughly 10x inference efficiency improvements, but growth in model scale and usage is overwhelming those gains.
What Causes the Dramatic Power Consumption in GPU Clusters?
GPUs perform thousands of parallel calculations simultaneously versus CPUs handling sequential tasks. This is what makes them perfect for AI workloads and also what makes them power hungry.
AI model training involves thousands of graphics processing units (GPUs) running continuously for months, leading to high electricity consumption. These aren’t occasional bursts of activity. The GPUs run at full throttle the entire time.
Modern AI GPUs like NVIDIA’s H100 pack tens of billions of transistors (roughly 80 billion in the H100’s case) into a small die, generating extreme heat. A large data centre might have well over 10,000 of these chips connected together.
AI training keeps GPUs at 90-100% utilisation for weeks continuously. Typical CPU workloads have idle periods when power consumption drops. AI workloads don’t have idle time.
Constant data transfer between GPU memory and compute cores consumes additional power. High-speed interconnects between GPUs add overhead.
Removing heat from dense GPU clusters can account for up to 40% of total facility power, consumed on top of the compute load. You’re not just powering the computation; you’re powering the infrastructure to keep it from melting.
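Here is a minimal sketch of how that cooling overhead compounds the compute load. The cluster size and per-GPU draw are illustrative assumptions; the 40% figure is the upper bound quoted above, read as a share of total facility power:

```python
# Facility power budget for a hypothetical GPU cluster.
# Cluster size and per-GPU draw are illustrative assumptions; the 40% cooling
# share is the upper bound quoted above (cooling as a share of facility power).
gpus = 10_000
watts_per_gpu = 700                      # assumed draw for an H100-class GPU at load

it_load_mw = gpus * watts_per_gpu / 1e6  # compute load in megawatts
cooling_share = 0.40                     # cooling as a fraction of facility power

# If cooling takes 40% of facility power, the IT load is the remaining 60%.
facility_mw = it_load_mw / (1 - cooling_share)

print(f"IT load:        {it_load_mw:.1f} MW")   # 7.0 MW
print(f"Facility total: {facility_mw:.1f} MW")  # ~11.7 MW
```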
How Does Power Density Create Infrastructure Challenges?
Traditional data centre design assumes 5-10 kW per rack across the facility. AI racks draw 50-100 kW or more, breaking standard layouts completely.
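To make that density gap concrete, here is a quick sketch of how many racks a fixed electrical feed supports under each design; the 2 MW feed is a hypothetical figure:

```python
# Rack count supported by a fixed electrical feed at each power density.
# The 2 MW feed is hypothetical; rack figures are mid-range values from above.
feed_kw = 2_000

traditional_rack_kw = 8   # mid-range of 5-10 kW per rack
ai_rack_kw = 75           # mid-range of 50-100 kW per rack

print(f"Traditional racks: {feed_kw // traditional_rack_kw}")  # 250
print(f"AI racks:          {feed_kw // ai_rack_kw}")           # 26
```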
Existing power feeds, transformers, and circuit breakers can’t support AI density without major upgrades. Data centres are becoming significantly more expensive to build, with the cost per square foot averaging $987 in October 2025, a 50% increase from a year prior.
You can’t simply add more AI racks to existing facilities without a complete electrical infrastructure overhaul. Meta tore down a data centre that was still under development and redesigned it for higher-powered chips. That’s the scale of the problem.
Standard air cooling can’t dissipate heat from 50-100 kW racks. Liquid cooling retrofits become mandatory.
Local power grids can’t supply the necessary megawatts in many regions. Electric transmission constraints are forcing some data centres to wait up to seven years or more to secure grid connections.
Purpose-built AI data centres take 3-5 years from planning to operation. That’s if you can get grid capacity allocated.
Current data centre power constraints often arise due to transmission and distribution limitations rather than a lack of power generation capabilities. The power exists in the grid overall, but getting it to where you need it, when you need it, in the quantities you need it—that’s the problem.
What Role Does AI Model Training Play in Electricity Consumption?
Training is the most power-intensive AI workload—weeks or months of continuous operation at multi-megawatt scale. The model learns by processing datasets billions of times, each iteration requiring full GPU cluster utilisation.
Training GPT-3 consumed 1,287 megawatt-hours of electricity, enough to power about 120 average U.S. homes for a year.
That’s for one training run of one model. And training isn’t a one-time event. Models require periodic retraining with updated data and continuous experimentation.
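The household comparison is easy to reproduce. A minimal check, assuming an average US household uses roughly 10.7 MWh of electricity per year (an assumption, not an article figure):

```python
# Reproducing the 'about 120 homes for a year' comparison.
# Assumption: an average US household uses ~10.7 MWh of electricity per year.
training_mwh = 1_287
household_mwh_per_year = 10.7

print(f"Equivalent households: {training_mwh / household_mwh_per_year:.0f}")  # ~120
```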
Larger models demonstrate better capabilities, and the market rewards capability. This creates competitive pressure to train ever-bigger models despite the power costs.
Training cost—electricity plus hardware—is now significant enough to influence model architecture decisions. Only a handful of organisations, such as Google, Microsoft, and Amazon, can afford to train large-scale models due to the immense costs associated with hardware, electricity, cooling, and maintenance.
Why Is Nuclear Power Emerging as the Primary Solution?
AI training can’t tolerate interruptions. It requires 24/7 baseload power that intermittent renewables can’t provide alone. That’s the core reason nuclear is emerging as the primary solution.
Nuclear plants generate hundreds of megawatts continuously, matching hyperscaler data centre scale. Microsoft and Constellation Energy committed $1.6 billion to restart Three Mile Island Unit 1, targeting a 2028 reopening; the reactor can generate 835 MW.
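To put 835 MW in context, here is a rough comparison of annual reactor output against the round-the-clock draw of a hypothetical AI campus; the 90% capacity factor and the 300 MW campus load are assumptions:

```python
# Annual output of an 835 MW reactor vs a hypothetical 300 MW AI campus.
# The 90% capacity factor and the 300 MW campus load are assumptions.
HOURS_PER_YEAR = 8_760

reactor_twh = 835 * 0.90 * HOURS_PER_YEAR / 1e6   # MWh -> TWh
campus_twh = 300 * 1.00 * HOURS_PER_YEAR / 1e6    # continuous 300 MW draw

print(f"Reactor output: {reactor_twh:.1f} TWh/year")  # ~6.6 TWh
print(f"Campus demand:  {campus_twh:.1f} TWh/year")   # ~2.6 TWh
```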
Tech companies have sustainability commitments requiring zero-carbon electricity. This rules out natural gas despite its reliability.
On-site or nearby nuclear generation bypasses grid capacity constraints that block data centre expansion in many regions. The 24/7 baseload generation eliminates the intermittency challenges of renewables, providing the continuous power that AI workloads require.
Nuclear power purchase agreements lock in predictable pricing over 20-40 year horizons. When you’re planning infrastructure that will operate for decades, price certainty matters.
Small modular reactors offer incremental scalability, allowing precise matching to data centre growth: starting with a single 77 MW module and expanding as computational needs increase.
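A rough sizing sketch using the 77 MW module size mentioned above; the campus load figures are hypothetical:

```python
import math

# How many 77 MW SMR modules a growing data centre campus would need.
# Campus load figures are hypothetical; the module size is from the text above.
MODULE_MW = 77

for campus_mw in (60, 150, 300, 600):
    modules = math.ceil(campus_mw / MODULE_MW)
    print(f"{campus_mw:>4} MW campus -> {modules} module(s), "
          f"{modules * MODULE_MW} MW installed")
```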
Amazon Web Services committed to deploy 5 gigawatts of SMR capacity by 2039 through a $500 million investment in X-energy. Amazon also signed agreements to invest in four SMRs to be constructed by Energy Northwest to power data centres in eastern Oregon.
Tech giants are committing over $10 billion to nuclear partnerships. The first commercial SMR-powered data centres are expected to come online around 2030.
FAQ Section
How much power does a single GPU rack consume in an AI data centre?
A typical AI-optimised GPU rack consumes 50-100 kilowatts continuously, compared to 5-10 kW for traditional server racks. High-density configurations with NVIDIA H100 or similar GPUs can exceed 100 kW per rack. This requires liquid cooling and specialised electrical infrastructure that standard data centres simply can’t support.
What percentage of US electricity do data centres currently use?
Data centres currently consume approximately 4.4% of total US electricity, roughly 180-200 terawatt-hours annually. With AI expansion, projections suggest this could grow to as much as 12% by 2028-2030. That’s a near-tripling of consumption, driven primarily by GPU-intensive AI workloads rather than traditional computing.
Can renewable energy solve the AI power problem?
Renewable energy is part of the solution but it’s not enough on its own. AI training requires 24/7 reliable power that solar and wind can’t provide due to intermittency. Nuclear power—particularly SMRs—offers the combination of carbon-free generation, continuous availability, and scale that matches hyperscaler requirements. That’s why Big Tech is investing billions in nuclear alongside renewables.
How long does it take to train a large AI model like GPT-3?
Training GPT-3 required weeks of continuous operation using thousands of GPUs running at near-100% capacity. The process consumed an estimated 1,287 megawatt-hours of electricity, equivalent to the annual consumption of 120 average US homes. Larger modern models require even more time and power.
Why do AI models require so much electricity to train?
AI training involves processing datasets billions of times through neural networks with billions or trillions of parameters. This requires continuous parallel calculations across thousands of GPUs, each consuming hundreds of watts while running at 90-100% capacity for weeks or months. The sheer volume of mathematical operations, combined with data transfer and cooling overhead, creates unprecedented electricity demand.
What is power density and how has AI changed it?
Power density measures electricity consumption per physical space, typically in kilowatts per rack. Traditional data centres operate at 5-10 kW per rack. AI facilities require 50-100+ kW per rack due to dense GPU configurations. This 10-20x increase breaks standard data centre designs, requiring complete electrical and cooling infrastructure redesigns.
How does AI inference energy use compare to regular web searches?
A single AI inference query—something like ChatGPT—consumes approximately 10 times more electricity than a traditional Google search. While individual queries seem negligible, billions of daily AI interactions create substantial aggregate consumption. As AI becomes embedded in search engines, productivity tools, and consumer applications, inference loads are projected to rival or exceed training consumption.
Are AI data centres really causing power grid failures?
AI data centres are creating grid capacity constraints in concentrated markets where utilities can’t supply sufficient power for planned facilities. While they’re not yet causing widespread failures, the demand concentration forces grid operators to delay new data centre interconnections and drives hyperscalers to seek alternative solutions like on-site nuclear generation.
What happens when data centres run out of power?
Power constraints force hyperscalers to delay or relocate AI infrastructure buildouts, potentially creating competitive disadvantages. Companies respond by pursuing long-term power purchase agreements—including nuclear—building in regions with available capacity, or implementing energy efficiency measures. The crisis is driving the billions being invested in nuclear power solutions.
How can companies reduce AI data centre electricity consumption?
Efficiency improvements include model optimisation, hardware upgrades to newer GPUs with better performance-per-watt, workload scheduling during low-demand periods, advanced cooling systems reducing overhead, and selecting smaller models that meet requirements without unnecessary scale. However, efficiency gains are often outpaced by model size growth and increased usage.
Is nuclear power the only solution for AI data centres?
Nuclear power is the primary solution for baseload, carbon-free, grid-independent electricity at the scale hyperscalers require. But it’s complemented by renewable energy, efficiency improvements, grid upgrades in some regions, and workload optimisation. The combination of reliability requirements, sustainability commitments, and scale makes nuclear—particularly SMRs—uniquely suited to the challenge.
What are the business implications for SMB tech companies adopting AI?
SMB companies face rising cloud AI costs as hyperscalers pass through electricity expenses, potential service constraints if power shortages limit capacity expansion, and competitive pressure to adopt AI despite costs. Your strategic planning should include long-term AI cost projections, evaluation of efficiency versus capability trade-offs, and monitoring of power availability in your preferred cloud regions.
Conclusion
The electricity demand crisis driven by AI data centres is reshaping the entire energy landscape for technology infrastructure. With consumption projected to triple by 2030, the power requirements of GPU-intensive AI workloads are forcing fundamental changes in how hyperscalers approach energy strategy. For a complete overview of how Big Tech is responding to this challenge and what it means for the future of cloud computing, see our comprehensive guide on nuclear-powered data centres.