Horizon Fund Update: We Are Building Pyramids Again
For more analysis on datacentre electricity consumption, see On the Horizon #7: AI & Energy
Humans aren't really wired to think in exponentials. In our AI & Energy essay, we referenced Leopold Aschenbrenner, who wrote a piece last year extrapolating datacentre scaling trends out to 100GW clusters by 2030 (equivalent to 20% of US power generation). We wrote: "We take a more measured view, especially in light of recent research into scaling the post-training and inference stages of AI models. These new directions, along with architectural breakthroughs that may allow things like memory and context to be scaled sub-quadratically, may spare big tech from embarking on a 10 or 100GW cluster to scale pre-training". This comment proved timely: just a week later, DeepSeek's algorithmic efficiencies rocked financial markets and had investors wondering if we even needed the datacentres we'd built so far. That fear was compounded by rumours of Microsoft cancelling multiple gigawatts of datacentre leases.
A lot has happened since then, everyone is better informed, and there is increasing conviction that the global AI infrastructure megaproject is locked in for the next few years. The $500BN Stargate announcement around President Trump's inauguration seemed fanciful at the time (and was not priced into forward earnings for Oracle, NVIDIA etc.), but the project is now well underway in Abilene, Texas, and will ultimately house $40 billion of NVIDIA GPUs. We have had numerous other gigawatt-scale announcements since then, from Meta's 2GW campus in Louisiana, a site that would cover a significant portion of Manhattan Island, to xAI's plans to increase the size of their Memphis datacentre tenfold to 1.2GW/1 million GPUs. Then there is 'Sovereign AI', a concept seemingly wished into existence by NVIDIA CEO Jensen Huang, who started talking about it on earnings calls about a year ago. Various announcements from the UK, Europe and the Middle East sum to trillions of dollars in investment commitments, adding to the massive capital investment plans of the hyperscalers, currently forecast to run at $330-$380 billion per annum over the next three years.
xAI's Colossus datacentre in Memphis, Tennessee came online last year and was the first datacentre to house 100k GPUs, with an initial power draw of 150MW. Elon Musk plans to expand the site tenfold to 1 million GPUs with a power draw of 1.2GW
Source: xAI
Meta's 2GW datacentre in Richland, Louisiana, a 4 million sqft campus set on 2,200 acres of land
Source: Meta
The first two buildings of the OpenAI/Oracle 'Stargate' datacentre in Abilene, Texas, which will eventually comprise eight buildings containing $40BN of NVIDIA chips, with a max power draw of 1.2GW
Saudi Arabia has partnered with Datavolt to build the Oxagon datacentre as part of their NEOM project, which will eventually reach a capacity of 1.5GW
Source: NEOM
Scala Data Centers (owned by DigitalBridge) have ambitious plans to build a 4.7GW datacentre campus on 1,760 acres in Rio Grande, Brazil
Source: Scala
Observation 1 is that there's an interesting fractal quality to these pyramids of the modern age, which, at a macro scale, look quite similar to the miniature circuit boards and microprocessors they house.
Observation 2 is that it's quite difficult to precisely model the total investment that will be committed to AI infrastructure over the next few years, and therefore the implications for AI infrastructure stocks. You can follow the money, but many of the Stargate-like announcements are not yet funded, and will seek capital from private equity or even involve partnerships with hyperscalers, meaning some of the sums are double counted within hyperscaler capex. You can follow electricity demand forecasts too, but these have similar problems, which we'll come to later.
Bottom-up forecasts for hyperscaler capex haven't changed much since the large revisions higher we saw in Q1, after every hyperscaler bar Microsoft raised their FY25e capex guide. But there have been several estimate upgrades from the sell side on a top-down basis, with increasing conviction that 2026 will be another strong growth year. JPM's estimate was recently upgraded to +20% growth (double the current consensus average), and Bank of America are forecasting +51% growth in datacentre systems TAM next year, and a +26% CAGR through 2030. Their model is interesting as key assumptions within it tie in with other forecasts, such as AMD CEO Lisa Su's $500BN AI accelerator TAM forecast by 2027, and Dell'Oro's estimate of $1 trillion in annual AI datacentre investment by 2029-30 (a figure that has been adopted by NVIDIA). Another piece that maps is the assumed rise in AI hardware spending as a % of IT budget - this rises from 3% to 8% over the next three years in BAML's model, which produces a 3Yr CAGR of +49% for AI accelerators, while a JPM model based on CIO surveys has similar share gains (on a different base) of 5.9% rising to 15.9% in three years, producing a 3Yr CAGR of +43%. The difference in the assumptions is +8% annual IT budget growth in the former and +4% in the latter.
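The arithmetic connecting budget-share assumptions to these CAGRs can be sketched simply. This is our own illustrative reconstruction, not the banks' actual models; the `implied_cagr` helper is ours, and the inputs are the figures quoted above.

```python
# Illustrative sketch (not BAML's or JPM's actual models): the implied 3-year
# CAGR for AI hardware spend, given a starting IT budget share, an ending
# share, and annual IT budget growth. Inputs are the figures quoted above.

def implied_cagr(share_start, share_end, budget_growth, years=3):
    """AI spend grows with both its share of the IT budget and the budget itself."""
    annual_share_multiple = (share_end / share_start) ** (1 / years)
    return annual_share_multiple * (1 + budget_growth) - 1

# BAML-style assumptions: 3% -> 8% share, +8% annual IT budget growth
baml = implied_cagr(0.03, 0.08, 0.08)   # ~+50%, close to the quoted +49%

# JPM-style assumptions: 5.9% -> 15.9% share, +4% annual IT budget growth
jpm = implied_cagr(0.059, 0.159, 0.04)  # ~+45%, near the quoted +43%

print(f"BAML-style implied CAGR: {baml:.1%}")
print(f"JPM-style implied CAGR:  {jpm:.1%}")
```

The small gaps to the published figures presumably reflect rounding and model details we can't see from the outside.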
Apollo's chief economist estimates datacentre capex added one percentage point to US GDP growth in 1Q25
Source: Apollo; Green Ash Partners
The US hosts 75% of global AI compute capacity, and China is in second place with 15%. The EU has just 5% of global capacity
Source: Epoch AI
The average of current bottom-up sell side forecasts expects hyperscaler capex to flatten out over the next two years
Source: Bloomberg; Green Ash Partners
The BAML model forecasts much stronger growth, and is interesting in that key assumptions tie in with other forecasts from industry and independent research
Source: Bank of America; Green Ash Partners
Another interesting thing is that BAML assume IT as a share of global GDP remains more or less flat in the mid-single digits, with most of the AI spending coming at the expense of non-AI IT spend
Source: World Bank, Bank of America; Green Ash Partners
This chart actually makes AI spending look on the low side
Source: World Bank, Bank of America; Green Ash Partners
You can also track power, but as we highlighted in our April monthly letter, Independent System Operators are probably overstating electricity demand from datacentres over the next few years. PJM (Eastern Seaboard), ERCOT (Texas) and MISO (Midwest) collectively serve 46% of US electricity demand, but their combined 2030 datacentre demand forecasts are twice 3rd party estimates for the whole of the US. This is because datacentre builders scout multiple locations per project before making a final decision, so the same project can be counted in several ISOs' forecasts.
Even the 3rd party estimates are huge numbers, though perhaps manageable from a power generation perspective in the context of ~62GW in new nameplate capacity that was added to the US grid in 2024 (total US nameplate capacity is 1,300GW).
Source: PJM, ERCOT, MISO, BCG, McKinsey, S&P and BNEF; Constellation Energy. Adjusted for capacity factor
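One reason nameplate capacity is only a rough yardstick is that datacentres draw near-constant load, while much of the new generation runs well below its rated output. A minimal sketch, with capacity and utilisation factors that are our own illustrative assumptions rather than figures from this letter:

```python
# Rough sketch of why nameplate capacity overstates usable supply for
# datacentres, which run at near-constant load. The capacity factors below
# are illustrative assumptions, not figures from the letter.

def average_output_gw(nameplate_gw, capacity_factor):
    """Average power actually delivered by a fleet of the given nameplate size."""
    return nameplate_gw * capacity_factor

# If the ~62GW added to the US grid in 2024 had a blended capacity factor of
# ~30% (assumption; much of it was solar and storage):
avg_new_supply = average_output_gw(62, 0.30)   # average output, not 62GW

# A 1.2GW campus like the planned Colossus expansion draws close to its rated
# load around the clock (assume 90% utilisation):
dc_demand = average_output_gw(1.2, 0.90)

print(f"Average output of new 2024 capacity: {avg_new_supply:.1f}GW")
print(f"Average demand of one 1.2GW campus:  {dc_demand:.2f}GW")
```

On these (assumed) numbers, a single year's capacity additions would cover the round-the-clock load of many gigawatt-scale campuses, which is why we describe the 3rd party estimates as large but perhaps manageable.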
Deloitte came out with a 2035 forecast this month, which actually lines up somewhat with the 3rd party forecasts for 2030, but expects the high growth rate to continue beyond that
Source: Deloitte analysis of data from DC Byte, Wood Mackenzie, S&P Global, Lawrence Berkeley National Laboratory, Center for Strategic and International Studies, and Wells Fargo; Green Ash Partners
What Does All This Mean for AI Infrastructure Stocks?
All of the usual caveats remain in force:
- There is a risk AI progress hits a wall, leading to great disillusionment and an AI winter (least likely in our view)
- Another 'DeepSeek moment' - a technical breakthrough of far greater magnitude than DeepSeek's efficiency improvements, which changes the paradigm for AI infrastructure requirements
- AI may take far longer to diffuse through the 95% of the economy that is non-tech, resulting in delayed ROI for the massive AI infrastructure investments (possible, but not how things are trending)
- AI moves too quickly, causing severe disruption to the economy and society, which would be bad for stocks as well as everything else
With that out of the way, we will take the techno-optimist path, and try to frame what the high level TAM forecasts imply for a few bellwether AI stocks.
Using the BAML model, we infer the following AI system share this year, based on bottom-up AI revenue estimates for the companies
Source: BAML, Bloomberg; Green Ash Partners
Now we map it on to bottom-up forecasts for AI revenues through 2030, with small adjustments based on expected changes in market share
Source: BAML, Bloomberg; Green Ash Partners
The first thing to note here is that $551 billion for NVIDIA is a huge number - 3.1x this year's expected datacentre revenues and 63% above street forecasts for FY30e. If we mapped today's NTM EV/revenue multiple to that figure, it produces an enterprise value of nearly $10 trillion, 3.5x higher than it is today. If we use power estimates and apply $40 billion of revenues per GW, adjusted for share, we get even larger numbers - over $2 trillion in cumulative revenues for NVIDIA through 2030 and $3.5 trillion through 2035 in the US alone.
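The back-of-envelope behind those NVIDIA numbers can be reconstructed from the text's own figures. The EV/revenue multiple below is inferred from the implied EV rather than taken from market data, so treat it as illustrative:

```python
# Reconstruction of the NVIDIA arithmetic from the text's own figures. The
# EV/revenue multiple is inferred (implied EV / mapped revenue), not a quoted
# market multiple.

fy30_dc_revenue = 551          # $BN, mapped FY30e datacentre revenue
implied_ev = 10_000            # $BN, the "nearly $10 trillion" implied EV

multiple = implied_ev / fy30_dc_revenue     # inferred NTM EV/revenue multiple
todays_ev = implied_ev / 3.5                # per "3.5x higher than it is today"
this_year_revenue = fy30_dc_revenue / 3.1   # per "3.1x this year's expected"

print(f"Inferred EV/revenue multiple:    {multiple:.1f}x")
print(f"Implied current EV:              ${todays_ev / 1000:.1f}TN")
print(f"Implied current-year DC revenue: ${this_year_revenue:.0f}BN")
```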
The upside is similarly large for the other stocks. Marvell's FY30e AI revenues of $21BN are 4x higher than total revenues last year. Broadcom's FY30e AI revenues of $74BN are 2.5x its semiconductor business last year.
There is a general consensus that AI workloads will shift from training to inference in the coming years, which might suggest ASIC designers like Broadcom and Marvell may take far greater share at the expense of NVIDIA. There is plenty of room to accommodate this in NVIDIA's $213BN FY30e estimate gap, further turbo-charging these smaller players.
Things are moving so quickly in AI that there is only one thing we can be sure of: any and all predictions looking out five years will be wrong. Our aim with this exercise was just to illustrate the implications of current trendlines and forecasts for the AI infrastructure stocks which have led the market for so long.
We are surprised ourselves to see the momentum that has gathered this year - we were prepared for a gentle tailing off of capex growth, albeit at historically high levels. But there has been a steady cadence of datapoints from the semiconductor industry, frontier labs and now governments that have fortified our conviction that this generational capex cycle will ramp for several more years. We are building pyramids again.