Green Ash Horizon Fund Monthly Factsheet - September 2025
The Horizon Fund’s USD IA shareclass rose +14.75% in September (GBP IA +14.63% and AUD IA +14.39%), versus +3.21% for the MSCI World (M1WO).
- It was the best monthly return for the strategy since November 2020, and the greatest outperformance versus the MSCI World since inception in November 2017
- There is a growing realisation in the market that nearly all of US GDP growth is being driven by the AI infrastructure build-out, which continues to grow ever larger in ambition and scale
- From an investment perspective, the trend should be supportive of the fund's themes into year end, propelled by strong earnings momentum and potentially further easing from the Fed
Please click below for monthly factsheet and commentary:
Source: Bloomberg; Green Ash Partners. The Green Ash Horizon Strategy track record runs from 30/11/17 to 08/07/21. Fund performance is reported from the 09/07/21 launch onwards (USD IA: LU2344660977; performance of other share classes on page 3). The strategy track record is based on a managed account held at Interactive Brokers Group Inc. Performance calculated using Broadridge Paladyne Risk Management software. Performance has not been independently audited and is for illustrative purposes only. Past performance is no guarantee of current or future returns and you may consequently get back less than you invested. Benchmark used is the M1WO Index.
It's been a busy month, even by AI standards:
- We can scarcely keep up with the gigantic AI infrastructure projects and partnerships being announced by hyperscalers and frontier labs. Most recently, NVIDIA announced plans to take a $100 billion stake in OpenAI, and form a strategic partnership to build 10GW of AI datacentre capacity. For some context, a 1GW datacentre represents about a $40 billion revenue opportunity for NVIDIA, and 10GW would require ~5-6 million GPUs - about equal to NVIDIA's total GPU production in 2025
- This is incremental to the OpenAI/Oracle/SoftBank Stargate project targeting 7GW of capacity. At the site of their first >1GW facility in Abilene, Texas, OpenAI revealed an ambition to spend $1 trillion on 20GW of datacentre capacity in the coming years - one OpenAI executive remarked that, ultimately, meeting demand for AI could require 100GW of capacity (or $5 trillion in investment), a sum larger than the annual GDP of Japan or Germany
- Of course, if it looks like OpenAI can achieve this, Google DeepMind, Anthropic/Amazon, Meta and xAI all have to pursue it too, if they want to stay in the race
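The datacentre arithmetic in the bullets above can be sketched in a few lines. All inputs are the approximations quoted in the text (revenue per GW, GPUs per GW, investment per GW), not company guidance:

```python
# Back-of-envelope datacentre arithmetic using the per-GW figures quoted above.
REV_PER_GW_USD_BN = 40      # ~$40bn NVIDIA revenue opportunity per 1GW
GPUS_PER_GW_MN = 0.55       # ~5-6mn GPUs per 10GW -> ~0.55mn per GW (midpoint)
CAPEX_PER_GW_USD_BN = 50    # $1tn/20GW and $5tn/100GW both imply ~$50bn per GW

def datacentre_maths(gw: float) -> dict:
    """Scale the per-GW approximations to a project of `gw` gigawatts."""
    return {
        "nvidia_revenue_usd_bn": gw * REV_PER_GW_USD_BN,
        "gpus_mn": gw * GPUS_PER_GW_MN,
        "total_capex_usd_bn": gw * CAPEX_PER_GW_USD_BN,
    }

# The 10GW NVIDIA/OpenAI partnership: ~$400bn revenue, ~5.5mn GPUs
print(datacentre_maths(10))
# The mooted 100GW end-state: ~$5tn of total investment
print(datacentre_maths(100))
```

These are rough midpoints, so outputs should be read as orders of magnitude rather than forecasts.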
OpenAI's revenue projections keep going higher, but they will still need to raise hundreds of billions in capital to realise their capex ambitions
Source: The Information
- In early 2024, our #1 prediction was that memory would be the next hot sub-sector in semis. At the time we wrote, "The oligopoly of three [memory manufacturers] have made major capacity cutbacks, which should accelerate the pace of price gains as we enter the next upcycle. Generative AI's voracious demand for high bandwidth memory (HBM) will further tighten the market as these higher priced chips take up more wafer capacity than other products"
- It has taken some time to play out, but things have really taken off in the last couple of months - traditional markets for DRAM and NAND are coming back to life at the same time as HBM estimates are going higher (NVIDIA's Rubin Ultra, due in 2H27e, has 4x the HBM content of the current Blackwell chips)
We use Micron here as a proxy for memory semis - SK Hynix has near-identical returns YTD
Source: Bloomberg; Green Ash Partners
Micron's share price has closely tracked street EPS estimates, which have doubled since we made our prediction
Source: Bloomberg; Green Ash Partners
AI is steadily consuming DRAM capacity, tightening traditional end markets like PCs and smartphones
Source: Company data, JPMorgan estimates; Green Ash Partners
- The impact of Stargate finally appeared in Oracle's quarterly earnings report - fuelled by massive AI demand, management issued a bold multi-year forecast for Oracle Cloud Infrastructure (OCI) revenue, projecting +77% growth to $18 billion in FY26e and an eventual rise to $144 billion over the next five years
Needless to say, a +$305BN/+205% beat to street RPO estimates by a 40-year-old boomer tech company is completely unprecedented
Source: Company reports, Bloomberg; Green Ash Partners
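The scale of the beat can be reverse-engineered from the two figures quoted in the caption. These implied numbers are derived purely from the stated +$305BN/+205% beat (with rounded inputs), not from Oracle's reported figures:

```python
# Implied RPO figures, derived only from the beat quoted above.
beat_usd_bn = 305   # beat in dollars
beat_pct = 2.05     # +205% beat versus street estimates

implied_estimate = beat_usd_bn / beat_pct         # street RPO estimate, ~$149bn
implied_actual = implied_estimate + beat_usd_bn   # reported RPO, ~$454bn

print(f"street estimate ~${implied_estimate:.0f}bn, actual ~${implied_actual:.0f}bn")
```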
- Following Genie 3 and Nano-banana in August, Google DeepMind continues to advance the state of the art in world models on a number of fronts:
- In Gemini Robotics ER 1.5, GDM has released a vision/language/action (VLA) model that can generalise movement across different robotic embodiments, uses test-time compute to reason (as with today's advanced LLMs), and incorporates new embodied forms of reasoning such as visual/spatial understanding, task planning and progress estimation
- Taking another angle on world models, Google trained a highly efficient model called Dreamer 4, which learned enough about game mechanics from offline Minecraft videos to obtain diamonds - a task that requires a sequence of over 20,000 mouse and keyboard actions
- Google also released a paper on their video generation model, Veo 3, entitled "Video Models are Zero-Shot Learners and Reasoners" (a nod to the 2022 paper "Large Language Models are Zero-Shot Reasoners", which introduced zero-shot chain-of-thought prompting and contributed to the breakthrough in reasoning LLMs and test-time compute scaling). In the paper, the authors suggest "sparks of visual intelligence" can be discerned in the scaling up from Veo 2 to Veo 3, and introduce the concept of chain-of-frame. We are very early in video generation models, so this will be an area of active R&D
This video explains what is meant by generalisation across embodiments
Source: Google DeepMind; YouTube
On the topic of video models, OpenAI just released Sora 2, which is fairly amazing - here are some examples of what it can generate (with sound)
Source: OpenAI; YouTube
- Anthropic released Claude Sonnet 4.5 which, as expected, concentrates most of its performance gains on coding, though it has also made significant progress on agentic tool use and computer use. Anthropic say that, given the task of re-creating an app like Slack or Teams, Sonnet 4.5 was able to run autonomously on the project for 30 hours, generating 11,000 lines of code
METR haven't yet published Claude Sonnet 4.5's performance on their long-horizon benchmark, but a 30-hour time horizon would be a major acceleration of the (already exponential) rate of improvement (note the log scale on the y-axis)
Source: METR; Green Ash Partners
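To put the 30-hour figure in context, we can compare it against METR's trend, under which the task time horizon models can handle has been doubling roughly every seven months. The ~2-hour frontier baseline below is an illustrative assumption, not a METR measurement:

```python
import math

# How many "doublings" of METR's time-horizon trend would a 30-hour
# autonomous run represent? Baseline horizon is an illustrative assumption.
DOUBLING_MONTHS = 7.0    # METR's reported approximate doubling time
baseline_hours = 2.0     # assumed current-frontier horizon (illustrative)
claimed_hours = 30.0     # Anthropic's reported autonomous run

doublings = math.log2(claimed_hours / baseline_hours)
months_of_trend = doublings * DOUBLING_MONTHS

print(f"{doublings:.1f} doublings ~= {months_of_trend:.0f} months of trend improvement")
```

Under these assumptions, a genuine 30-hour horizon would be equivalent to over two years of progress along the existing trend line, which is why the benchmark result will be worth watching.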
- Another one of our January 2024 predictions was that self-driving cars would take off. There is a strange dichotomy here, whereby in parts of California, robotaxis have become so ubiquitous they are barely worth remarking on, while everywhere else they are basically still science fiction
Monthly robotaxi passenger miles have increased 16x over the last 20 months, and are up +624% YoY
Source: California Public Utilities Commission, Our World in Data; Green Ash Partners
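The two growth figures in the caption can be cross-checked against each other. A 16x rise over 20 months implies a compound monthly growth rate, which annualises to less than the reported +624% YoY, i.e. growth has accelerated in the most recent 12 months:

```python
# Cross-check of the robotaxi growth stats quoted above.
growth_multiple = 16.0   # 16x over the period
months = 20

monthly = growth_multiple ** (1 / months) - 1   # compound monthly growth, ~14.9%
implied_yoy = (1 + monthly) ** 12 - 1           # ~+428% if growth were steady

print(f"monthly: {monthly:.1%}, implied YoY: {implied_yoy:.0%}")
# Reported YoY of +624% exceeds the ~+428% a steady trend would imply.
```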
- We are starting to see the impact of datacentre power consumption on electricity prices. What's more concerning is that hardly any of the large AI datacentres planned since ChatGPT's launch have come online yet - of the 4.1GW of new datacentre capacity added globally in 1H25 (+9% YoY), 70% related to projects that broke ground pre-ChatGPT
Datacentres are starting to cause a rise in electricity prices
Source: Bloomberg; Green Ash Partners
Datacentres as a share of electricity consumption are growing rapidly across many States
Source: Bloomberg News analysis of data from DC Byte and the US Energy Information Administration
Note: States shown are those where data centers accounted for 5% or more of total electricity consumption in 2024, the most recent year with full data available.