The Three Trillion Dollar AI Infrastructure Bet – Capex Concentration and Circular Investment Risk

Feb 12, 2026

AUTHOR

James A. Wondrasek

The AI bubble debate comes down to one simple question: is $3 trillion in infrastructure spending through 2030 smart investment or speculative overbuilding? The infrastructure layer gives us hard numbers to work with—actual capital expenditure, actual equipment orders, actual data centre construction.

Microsoft is throwing $80 billion at AI infrastructure in 2025. Amazon committed over $100 billion. Google planned $75 billion. Meta increased spending to $60-65 billion. These four companies alone are deploying $320 billion in 2025.

This dwarfs anything we’ve seen before. 90% of S&P 500 capital expenditure growth flows to AI, 75% of market returns come from AI-related stocks, and there’s a web of circular investments connecting chip makers, cloud providers, and AI labs that creates both growth and potential contagion.

How Much Are Companies Investing in AI Infrastructure Through 2030?

Companies are pumping approximately $3 trillion into AI infrastructure through 2030. Wall Street consensus estimates put annual spending at $527 billion for 2026. Goldman Sachs Research suggests the actual number could hit $700 billion if spending accelerates to match the late 1990s telecom cycle.

McKinsey projects AI infrastructure spending will reach nearly $7 trillion by 2030. Data centre construction alone is projected to exceed $400 billion in 2025.

This spending concentrates on data centres, GPU chips, and networking equipment required for AI model training and deployment. It’s the largest technology infrastructure buildout in history.

This scale reflects the broader AI investment paradox where massive capital commitment coexists with enterprise implementation failures.

For context: AI capex currently equals 0.8% of GDP, compared with peaks reaching 1.5% of GDP during previous technology booms. To match the 1990s telecom peak, AI spending would need to reach $700 billion in 2026.

Third-quarter earnings for hyperscalers showed capital spending of $106 billion—year-over-year growth of 75%. Consensus estimates have proven too low for two years running. At the start of both 2024 and 2025, estimates implied 20% growth but actual growth exceeded 50% in both years.

Andy Jassy, Amazon CEO, makes the case this way: [“When AWS is expanding its capex, particularly for a once-in-a-lifetime type of business opportunity like AI, I think it’s actually quite a good sign, medium to long term”](https://www.businessinsider.com/big-tech-ai-capex-spend-meta-google-amazon-microsoft-earnings-2025-2).

This buildout reflects bubble dynamics identified through historical pattern analysis. The question is whether $3 trillion proves prescient or reckless.

Who Are the Magnificent Seven and Why Do They Dominate AI Spending?

Understanding who drives this spending tells you why concentration creates systemic risk. The Magnificent Seven—Microsoft, Google, Amazon, Meta, Apple, Nvidia, and Tesla—have the financial resources and strategic imperatives to deploy AI infrastructure at scale. Only these companies have balance sheets enabling $50-100 billion annual AI infrastructure spending.

Apollo Global Management’s chart book documents AI concentration within the S&P 500’s market cap, returns, earnings, and capex. Hyperscalers’ capital expenditure share of US private domestic investment has doubled since 2023.

Microsoft combines Azure cloud dominance with its 20% OpenAI stake. Google leverages search and Cloud Platform. Amazon leads through AWS. Meta pursues open-source infrastructure with its Llama strategy. Nvidia supplies the GPU foundation enabling all AI training and inference workloads.

Goldman Sachs Research reports the average stock in their basket of AI infrastructure companies returned 44% year-to-date, compared with a 9% increase in consensus two-year forward earnings-per-share estimate. That gap signals either market prescience or speculation, depending on whether AI demand materialises.

Since June 2025, average stock price correlation across large public AI hyperscalers has dropped from 80% to just 20%. The market is differentiating between companies showing genuine revenue growth from AI and those funding capex via debt without demonstrable returns.

Investors have rotated away from AI infrastructure companies where capex is being funded via debt without demonstrable returns. They’ve rewarded companies demonstrating a clear link between capex and revenues.

Apollo’s research shows that capital expenditure share of GDP is much higher for hyperscalers today than it was for telecom companies during the dot-com bubble. Earnings growth is concentrated in the Magnificent Seven and slowing.

This level of market concentration parallels historical bubble conditions whilst also reflecting genuine technological leadership. The Magnificent Seven are both investors in and customers of AI-native companies.

What Are Circular Investment Patterns and Why Do They Create Systemic Risk?

These circular investment patterns raise a fundamental question: is the infrastructure being built creating genuine value or amplifying financial risk?

Circular investment patterns occur when companies along the AI supply chain invest in each other whilst simultaneously maintaining customer-vendor relationships. This creates interconnected equity stakes and revenue dependencies.

CoreWeave is the perfect example. First, consider the business fundamentals: the former cryptocurrency mining firm turned AI data centre operator has zero profits and billions in debt. CoreWeave’s IPO in March 2025 was the largest by any tech start-up since 2021, with its share price more than doubling afterward.

After going public, CoreWeave announced a $22 billion partnership with OpenAI, $14 billion deal with Meta, and $6 billion arrangement with Nvidia. CoreWeave expects to bring in $5 billion in revenue in 2025 whilst spending roughly $20 billion.

The company has taken on $14 billion in debt, nearly a third of it coming due in the next year. It also faces $34 billion in scheduled lease payments due between now and 2028.

Second, customer concentration creates vulnerability. A single customer, Microsoft, is responsible for as much as 70% of CoreWeave’s revenue. CoreWeave’s next biggest customers, Nvidia and OpenAI, might make up another 20% of revenue.

Third, the circular investment web tightens. Nvidia is CoreWeave’s chip supplier and one of its major investors, meaning CoreWeave is using Nvidia’s money to buy Nvidia’s chips and then renting them right back to Nvidia. OpenAI is a major CoreWeave investor with close financial partnerships with both Nvidia and Microsoft.

Nvidia has struck more than 50 circular deals in 2025, including a $100 billion investment in OpenAI and (with Microsoft) a $15 billion investment in Anthropic.

OpenAI has made agreements to purchase $300 billion of computing power from Oracle, $38 billion from Amazon, and $22 billion from CoreWeave. OpenAI is projected to generate only $10 billion in revenue in 2025—less than a fifth of what it needs annually just to fund its deal with Oracle. OpenAI is on track to lose at least $15 billion in 2025 and doesn’t expect to be profitable until at least 2029.
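The arithmetic behind "less than a fifth" can be sketched in a few lines. This is illustrative only, using the article's figures; the five-year amortisation period for the Oracle deal is an assumption made for the sake of the sketch, not a reported term.

```python
# Illustrative arithmetic using the article's figures. The five-year
# amortisation period for the Oracle deal is an assumption.
ORACLE_COMMITMENT_BN = 300   # total committed Oracle compute purchase, $bn
ASSUMED_TERM_YEARS = 5       # hypothetical amortisation period
ANNUAL_REVENUE_BN = 10       # OpenAI's projected 2025 revenue, $bn

annual_obligation = ORACLE_COMMITMENT_BN / ASSUMED_TERM_YEARS  # $60bn/yr
coverage = ANNUAL_REVENUE_BN / annual_obligation

print(f"Annual Oracle obligation: ${annual_obligation:.0f}bn")
print(f"Revenue coverage of that obligation: {coverage:.1%}")  # under 20%
```

Even under a generous five-year spread, projected revenue covers roughly a sixth of the annual Oracle obligation alone, before the Amazon and CoreWeave commitments are counted.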

Understanding these circular patterns is essential to evaluating whether AI investment represents bubble speculation or strategic positioning.

By one estimate, AI companies collectively will generate $60 billion in revenue against $400 billion in spending in 2025. The one company making money from the AI boom, Nvidia, is doing so only because everyone else is buying its chips in hopes of obtaining future profits.

There’s a legitimate explanation. Nvidia might be using its low cost of capital to support capital-constrained customers, similar to GM Financial providing loans to car buyers. Vendor financing is normal business practice.

The concerning interpretation draws parallels to dot-com circular advertising deals that artificially inflated revenues. Paul Kedrosky, managing partner at SK Ventures and MIT research fellow, warns: “When I see arrangements like this, it’s a huge red flag. It sends the signal that these companies really don’t want the credit-rating agencies to look too closely at their spending.”

Mark Zandi, chief economist at Moody’s Analytics, has changed his assessment: “A few months ago I would have told you that this was building toward a repeat of the dot-com crash. But all of this debt and financial engineering is making me increasingly worried about a 2008-like scenario.”

To finance their investments, AI companies have taken on hundreds of billions of dollars in debt, with Morgan Stanley expecting this to rise to $1.5 trillion by 2028.

Despite this massive infrastructure investment interconnection, 95% of enterprise AI implementations fail to show ROI.

What Is Dark Fiber 2.0 and Could AI Infrastructure Be Overbuilt?

Dark fiber refers to unused fibre-optic cables deployed during the 1990s telecom bubble. Companies including Level 3, WorldCom, and Global Crossing deployed massive networks expecting exponential demand. The infrastructure sat unused from 2001 to 2005. It ultimately enabled cloud computing from 2006 onwards.

The term “dark fiber 2.0” describes potential AI infrastructure overbuilding. Companies are committing $3 trillion through 2030 whilst AI-generated revenue remains below $100 billion annually. AI companies are investing $400-500 billion annually in infrastructure, creating a 4-5x investment-to-revenue gap.

Data centre construction races ahead of proven revenue generation. Infrastructure capacity could significantly exceed near-term utilisation if enterprise AI adoption doesn’t accelerate beyond current 95% failure rates.

The historical precedent suggests two outcomes. Either demand eventually catches up and infrastructure proves prescient, or overbuilding triggers asset value collapse when anticipated growth disappoints.

Once data centres are built, they represent stranded assets if demand disappoints. Sunk cost dynamics mean the infrastructure exists regardless of utilisation rates. The question is whether enterprise AI adoption accelerates to justify the buildout, or whether the 95% implementation failure rate persists.

Darrell M. West at Brookings Institution notes: “Based on press reports, Amazon says it is devoting $100 billion to data centres this year, whilst Meta has said it will spend over $600 billion in the coming three years.”

Understanding historical bubble patterns helps contextualise current infrastructure spending. The dot-com infrastructure overbuilding provides both warning and precedent—90% of companies failed yet the internet transformed everything.

This infrastructure versus utilisation gap contributes to the AI productivity paradox. The investment is visible and quantifiable. The returns remain invisible in aggregate economic data.

Direct revenues from AI services have increased nearly ninefold over the past two years. That growth trajectory needs to continue for years to justify current infrastructure spending.
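How long revenue would take to close the gap depends entirely on the growth rate assumed. A rough sketch, using the article's figures of roughly $60 billion in AI revenue against $400-500 billion in annual spending; the growth rates below are assumptions for illustration, not forecasts.

```python
import math

# Rough sketch with the article's figures: ~$60bn annual AI revenue vs
# ~$450bn annual infrastructure spend (midpoint of the $400-500bn range).
# The growth rates are illustrative assumptions, not forecasts.
revenue_bn = 60
target_bn = 450

for annual_growth in (3.0, 1.5, 1.25):  # 3x/yr matches "ninefold in two years"
    years = math.log(target_bn / revenue_bn) / math.log(annual_growth)
    print(f"At {annual_growth:.2f}x per year: {years:.1f} years to close the gap")
```

At the recent ninefold-in-two-years pace the gap closes in under two years; at a still-healthy 25% annual growth it takes roughly a decade, which is the difference between prescient buildout and dark fiber 2.0.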

The debate centres on whether AI follows exponential improvement curves that justify exponential infrastructure investment, or whether current spending reflects bubble dynamics where infrastructure deployment races ahead of sustainable demand.

How Do Public Cloud AI and On-Premise Infrastructure Costs Compare?

Understanding infrastructure costs helps you work out whether cloud or on-premise deployment makes sense for your organisation.

Public cloud AI services offer immediate access without capital expenditure. AWS, Azure, Google Cloud, and CoreWeave charge approximately $2-5 per hour for GPU instances, varying by chip generation and configuration.

On-premise infrastructure requires $50,000-150,000 per GPU unit upfront, plus ongoing operational costs for power, cooling, and maintenance. Hyperscale data centres can contain up to 10,000 file servers and cost one billion dollars each.

The total cost of ownership comparison depends on utilisation rates and time horizon. Organisations with consistent, predictable AI workloads and multi-year planning horizons may achieve 50-70% cost savings with on-premise infrastructure. Those with variable demand, experimentation phases, or short-term projects benefit from cloud flexibility despite 2-3x higher per-hour costs.

If you’re running GPU workloads 40-50% of the time over 3+ years, on-premise infrastructure economics become favourable. Below that threshold, cloud rental makes more financial sense.
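That threshold can be sanity-checked with a minimal break-even sketch. The specific dollar figures below are illustrative assumptions drawn from the ranges above; real quotes vary widely by chip generation, region, and contract terms.

```python
# Minimal cloud-vs-on-premise break-even sketch. Dollar figures are
# illustrative assumptions within the ranges quoted in the article.
HOURS_PER_YEAR = 8760

def break_even_utilisation(gpu_capex, annual_opex, cloud_rate_per_hr, years):
    """Fraction of the time a GPU must be busy for on-premise to win."""
    on_prem_total = gpu_capex + annual_opex * years
    break_even_hours = on_prem_total / cloud_rate_per_hr
    return break_even_hours / (HOURS_PER_YEAR * years)

# Example: $50k per GPU, $4k/yr power and cooling, $5/hr cloud rate,
# three-year horizon.
util = break_even_utilisation(50_000, 4_000, 5.0, 3)
print(f"Break-even utilisation: {util:.0%}")  # ~47% with these assumptions
```

With these assumed inputs the break-even lands around 47%, consistent with the 40-50% rule of thumb above; plugging in your own quotes is the point of the exercise.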

Nvidia releases new architectures every 12-18 months, making on-premise hardware obsolete whilst cloud providers absorb the obsolescence risk. Power consumption, cooling requirements, and maintenance staffing represent significant operational costs that accumulate over the infrastructure lifetime.

Strategic considerations extend beyond cost. Data sovereignty matters for organisations handling sensitive information. Model training IP protection becomes relevant if you’re developing proprietary AI capabilities. Vendor lock-in risk increases when you’re deeply integrated with a single cloud provider’s AI services.

Hybrid strategies combine both approaches. Use cloud infrastructure for peaks and experimentation. Deploy on-premise capacity for steady-state workloads.

AI-native companies like Cursor and OpenAI, as infrastructure beneficiaries, represent the success path this buildout enables. Understanding OpenAI and Cursor infrastructure dependencies reveals who these investments are designed to support. The build versus buy decision parallels MIT’s finding that vendor solutions succeed 67% of the time versus 33% for internal builds.

Why Are Nvidia GPUs the Central Bottleneck in AI Infrastructure?

The infrastructure buildout depends fundamentally on GPU supply, creating a bottleneck that affects every organisation pursuing AI deployment.

Nvidia dominates AI infrastructure with approximately 95% market share in accelerator chips for model training and inference. This creates a supply bottleneck where data centre expansion, model development timelines, and AI service scaling all depend on Nvidia’s manufacturing capacity and allocation decisions.

The CUDA software ecosystem, developed over 15+ years, creates switching costs that entrench dominance. AI researchers train on Nvidia architectures. Frameworks including PyTorch and TensorFlow optimise for CUDA. Production systems assume Nvidia hardware.

This technical lock-in compounds business concentration. Microsoft alone accounts for 20% of Nvidia’s revenue. You face vendor lock-in, lead times of 6-12 months for H100 chips, and exposure to Nvidia’s pricing power and product roadmap decisions.

AMD is attempting to gain market share. OpenAI holds a 10% equity stake in AMD and has committed to purchase tens of billions in AMD chips.

Hyperscalers are developing custom silicon for inference workloads. Google deploys TPUs. Amazon developed Trainium. Microsoft created Maia.

Nvidia’s chip portfolio centres on the H100 as the current-generation workhorse. Blackwell is shipping in 2025. Rubin is planned for 2026.

Switching costs from CUDA to alternatives including AMD’s ROCm or custom silicon require significant engineering investment. You’re rewriting software, retraining models, and rebuilding production systems. That migration cost keeps most organisations locked to Nvidia even when alternatives exist.

Single vendor dominance creates supply vulnerability for the entire AI ecosystem. Pricing exposure affects everyone. Product roadmap decisions impact which AI applications become economically viable.

What Metrics Indicate AI Infrastructure Concentration Risk?

Analysts and investors monitor concentration risk through four quantitative metrics tracked quarterly using publicly available research.

Metric 1: Market Return Concentration

Since November 2022, 80% of U.S. stock gains came from AI companies. Apollo Global Management’s research shows the top 10 companies represent 41% of S&P 500 total market capitalisation.

This approaches 2000 tech bubble concentration levels. Analysts consider concentration above 80% of gains or 45% of market cap as historically high.

Metric 2: Capital Expenditure Concentration

Apollo’s chart book documents that 90% of capex growth since November 2022 has gone to the AI ecosystem.

Current AI spending equals 0.8% of GDP versus historical peak of 1.5%. Sustained capex exceeding 1.5% of GDP without corresponding revenue growth would indicate concentration levels historically associated with overinvestment.

Metric 3: Revenue Dependency Analysis

Microsoft provides 70% of CoreWeave’s revenue and 20% of Nvidia’s revenue. OpenAI has committed $360 billion across its Oracle, Amazon, and CoreWeave deals whilst generating only $10 billion annually.

When a single customer exceeds 30% of major vendor revenue, or when circular deal aggregate exceeds $500 billion, concentration risk becomes elevated by historical standards.
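The thresholds above can be expressed as a simple screen. The revenue shares and deal values are the article's figures; the function names, structure, and threshold constants are ours, for illustration only.

```python
# Simple concentration screen using the thresholds above. Revenue shares
# and deal sizes come from the article; names and structure are illustrative.
SINGLE_CUSTOMER_THRESHOLD = 0.30  # share of a single vendor's revenue
CIRCULAR_DEAL_THRESHOLD_BN = 500  # aggregate circular-deal value, $bn

customer_shares = {
    ("Microsoft", "CoreWeave"): 0.70,
    ("Microsoft", "Nvidia"): 0.20,
}
circular_deals_bn = 300 + 38 + 22  # OpenAI's Oracle, Amazon, CoreWeave deals

flags = [pair for pair, share in customer_shares.items()
         if share > SINGLE_CUSTOMER_THRESHOLD]
elevated = bool(flags) or circular_deals_bn > CIRCULAR_DEAL_THRESHOLD_BN

print("Customers above 30% of a vendor's revenue:", flags)
print("Elevated concentration risk:", elevated)
```

On these numbers the Microsoft-CoreWeave relationship alone trips the single-customer threshold, which is why that dependency recurs in every contagion scenario below.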

Metric 4: Valuation Dispersion Assessment

Apollo states the AI bubble today is bigger than the IT bubble in the 1990s based on concentration metrics. Analysts track AI stock valuations versus long-term trend using GMO’s 2-sigma methodology.

All 300+ historical 2-sigma events eventually returned to trend.

This concentration risk framework helps answer the central question in our broader analysis: is AI investment justified transformation or dangerous speculation?

These concentration metrics align with GMO’s bubble identification framework showing 2+ sigma deviation from historical trends.

What Happens If Circular Investment Patterns Break?

If circular investment patterns break, contagion could cascade through interconnected equity stakes and revenue dependencies. This would create a 2008-style systemic crisis where isolated failures amplify across the financial system.

Scenario 1: OpenAI Revenue Disappointment

OpenAI needs to justify over $300 billion in commitments against $10 billion annual revenue. If revenue disappoints, those commitments become unsustainable. Nvidia’s $100 billion investment becomes impaired. CoreWeave loses one of its largest customers. Debt covenants breach. Creditor losses propagate.

Scenario 2: CoreWeave Debt Default

CoreWeave carries $14 billion in debt, nearly a third coming due within a year, plus $34 billion in lease payments. Default triggers GPU collateral liquidation. Used GPU market floods. Nvidia pricing power weakens. Data centre valuations decline. Banking system exposure reveals itself.

Scenario 3: Nvidia Demand Decline

If AI infrastructure demand disappoints, Nvidia reduces revenue guidance. Stock declines. Wealth effect reduces hyperscaler capex. CoreWeave and Oracle lose Nvidia-driven customers. The circular pattern unwinds in reverse.

Private-equity firms have lent about $450 billion in private credit to the tech sector. Federal Reserve studies estimate that up to a quarter of bank loans to nonbank financial institutions are now made to private-credit firms, up from just 1% in 2013.

Financial engineering amplifies the risk. Meta structured a $27 billion Louisiana data centre deal through Blue Owl Capital using a special-purpose vehicle to keep debt off balance sheet. Enron used SPVs to mask shady accounting practices before its 2001 collapse.

GPU-backed loans create specific vulnerability. Several data-centre builders including CoreWeave have obtained multibillion-dollar loans by posting existing chips as collateral. When new chip models are released, the value of older models tends to fall, potentially creating a vicious cycle.
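The vicious cycle can be sketched with a toy model: as each new chip generation depresses resale values, the effective loan-to-value ratio on a GPU-backed loan climbs. The loan size, depreciation rate, and covenant threshold below are all illustrative assumptions, not figures from any actual loan.

```python
# Toy sketch of GPU-backed lending risk: collateral values fall as new
# chip generations ship, pushing loan-to-value (LTV) past covenant levels.
# Loan size, depreciation rate, and covenant are illustrative assumptions.
loan_bn = 5.0                 # hypothetical loan against GPU collateral, $bn
collateral_bn = 10.0          # initial chip value (starting LTV = 50%)
depreciation_per_gen = 0.30   # assumed value drop per new chip generation
ltv_covenant = 0.85           # hypothetical breach threshold

for generation in range(1, 4):
    collateral_bn *= (1 - depreciation_per_gen)
    ltv = loan_bn / collateral_bn
    status = "BREACH" if ltv > ltv_covenant else "ok"
    print(f"After generation {generation}: LTV {ltv:.0%} ({status})")
```

Even a conservative 50% starting LTV breaches a typical covenant level within two chip generations under a 30% per-generation depreciation assumption, which is the mechanism the "vicious cycle" warning describes.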

Paul Kedrosky warns: “Investors see these complex financial products and they say, I don’t care what’s happening inside—I just care that it’s highly rated and promises a big return. That’s what happened in ’08.”

These contagion scenarios mirror 2008 financial crisis patterns where interconnected exposures amplified isolated failures into systemic collapse.

For you, mitigation strategies include vendor diversification, contract negotiation that includes financial health monitoring requirements, and scenario planning for what happens if key suppliers face distress.

FAQ Section

What are circular investment patterns in AI?

Circular investment patterns occur when companies along the AI supply chain invest in each other whilst maintaining customer-vendor relationships. OpenAI holds AMD equity whilst purchasing AMD chips, Nvidia invests in CoreWeave whilst also buying CoreWeave cloud services, and Microsoft owns 20% of OpenAI whilst providing 70% of CoreWeave’s revenue. These create interconnected dependencies.

Who is CoreWeave and why does it matter?

CoreWeave is a former cryptocurrency mining firm turned AI data centre operator with zero profits, $14 billion in debt, and revenue concentrated in three interconnected customers. It’s the perfect example of circular investment risk: Nvidia invested $350 million whilst also being CoreWeave’s chip supplier and cloud customer.

What is dark fiber and how does it relate to AI infrastructure?

Dark fiber refers to unused fibre-optic cables deployed during the 1990s telecom bubble that lay dormant for years before ultimately enabling cloud computing. “Dark fiber 2.0” describes potential AI infrastructure overbuilding where data centres are constructed ahead of proven demand—either prescient investment if adoption accelerates, or stranded assets if the 95% enterprise failure rate persists.

Why is Nvidia so dominant in AI chips?

Nvidia maintains approximately 95% market share through its CUDA software ecosystem, developed over 15+ years, which creates switching costs. AI researchers train on Nvidia architectures, frameworks optimise for CUDA, and production systems assume Nvidia hardware. Migrating to alternatives requires significant engineering investment to rewrite software and retrain models.

Conclusion

The $3 trillion AI infrastructure bet through 2030 presents a fundamental paradox: unprecedented capital concentration creating both transformative opportunity and systemic risk. Annual spending is heading toward $527 billion in 2026, dominated by the Magnificent Seven, whilst circular investment patterns connect chip makers, cloud providers, and AI labs through equity stakes and revenue dependencies that mirror both 2008 financial engineering and dot-com infrastructure overbuilding.

For technical leaders evaluating whether this represents smart investment or speculative excess, the evidence points both ways. Infrastructure spending exceeds revenue generation by 4-5x ratios reminiscent of bubble conditions. Yet AI-native companies demonstrate genuine growth trajectories suggesting this capacity will eventually prove prescient. The question centres on timing—whether demand growth accelerates fast enough to prevent dark fiber 2.0 outcomes where stranded assets sit unused for years.

Understanding these infrastructure dynamics within the broader AI bubble debate requires examining both market concentration metrics and enterprise implementation reality. The buildout is real, quantifiable, and historically unprecedented. Whether it represents transformation or speculation depends on whether enterprise implementation failures persist at 95% rates or give way to widespread adoption that justifies the investment.
