|
Green Ash Horizon Fund Monthly Factsheet - November 2025
|
|
The Horizon Fund’s USD IA share class fell -4.90% in November (GBP IA -5.26% and AUD IA -5.07%), versus +0.28% for the MSCI World (M1WO).
- Autumnal seasonality finally appeared in November, with many of the YTD winning themes seeing significant intra-month corrections and the VIX shooting up +9pts to 26.
- The shifting narrative that drove the Mag 7 dispersion in November is emblematic of chronic zero-sum thinking in the markets. GPUs, TPUs, ASICs – all are supply constrained. Demand is insatiable, and AI progress is, if anything, accelerating.
- We see a positive set-up for risk assets in the coming weeks: the Fed has turned dovish again, making a December rate cut more likely; the US government has re-opened; QT is coming to an end; earnings estimates continue to track higher; and the US consumer remains robust.
Please click below for monthly factsheet and commentary:
|
|
|
|
Source: Bloomberg; Green Ash Partners. The Green Ash Horizon Strategy track record runs from 30/11/17 to 08/07/21. Fund performance is reported from the 09/07/21 launch onwards (USD IA: LU2344660977; performance of other share classes on page 3). Strategy track record based on a managed account held at Interactive Brokers Group Inc. Performance calculated using Broadridge Paladyne Risk Management software. Performance has not been independently audited and is for illustrative purposes only. Past performance is no guarantee of current or future returns and you may consequently get back less than you invested. Benchmark used is the M1WO Index.
|
|
|
Here are some tidbits on the themes:
|
|
- Meta announced plans to diversify their compute suppliers by renting TPUs from Google in 2026 for inference, and buying TPUs in 2027 to run in their own datacentres. Coming hot on the heels of the October announcement that Anthropic plan to rent up to 1 million TPUs from Google, this woke the market up to the realisation that Google is the one hyperscaler that is fully vertically integrated through the entire AI tech stack (something we wrote about in early 2023).
- Google released high-level specs of their latest-generation TPU (Ironwood) at Hot Chips back in August, showing performance close to NVIDIA's Blackwell systems, albeit a few months behind in terms of release date.
- There are compelling cost incentives to diversify into TPUs, which by some estimates cost $20BN/GW, versus $34BN/GW for NVIDIA systems - this more than makes up for the slightly worse performance in total cost of ownership (TCO) terms (see the rough sketch below).
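As a rough illustration of the break-even arithmetic, here is our own back-of-envelope sketch in Python, using only the capex-per-GW estimates above; a full TCO comparison would also include power, networking and software costs:

    # Capex-per-compute sketch using the estimates cited above (illustrative only)
    tpu_capex_per_gw = 20e9         # ~$20BN/GW for TPU systems (estimate cited above)
    nvidia_capex_per_gw = 34e9      # ~$34BN/GW for NVIDIA systems (estimate cited above)
    capex_ratio = nvidia_capex_per_gw / tpu_capex_per_gw       # 1.7x
    breakeven_performance = 1 / capex_ratio                    # ~0.59
    # TPUs only need ~59% of GPU performance per GW to match compute per capex dollar
    print(f"break-even at ~{breakeven_performance:.0%} of GPU performance per GW")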
|
|
|
Google's TPU v7 is only a few months behind NVIDIA's Blackwell in terms of release date, and has similar performance by several metrics
|
|
|
|
Source: SemiAnalysis; Green Ash Partners
|
|
- What does this mean for Google? By opening up their TPUs to external customers, they are positioning themselves as the #2 supplier in the AI compute space (bad news for AMD). Morgan Stanley projects Google will sell about 5MM TPUs in 2027 and 7MM in 2028, totalling 12MM units over two years, compared to 7.9MM produced for internal use over the previous four years. Each 500k TPUs sold could add $13BN in revenue and $0.40 to EPS - roughly a +4% uplift to EPS per 500k units, using FY25e as a baseline (see the back-of-envelope sketch after this list).
- What does it mean for NVIDIA? Nothing too bad. While TPUs are somewhat flexible for training and inference, GPUs are far more programmable, broadly performant, and less vulnerable to obsolescence from model architecture changes. For example, a change in number formats (e.g. from FP8 to FP4) makes the TCO of TPU v7s 35-40% worse than Blackwell GPUs. Next year, NVIDIA's Vera Rubin generation will raise the bar across the board, with more GPUs per rack, higher energy density and much better performance per $ per watt, while Google's v8 TPU is expected to be more of an iterative update, on the same process node as the v7.
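A minimal sketch of the Morgan Stanley arithmetic above (the FY25e EPS baseline is an illustrative assumption on our part, not a figure from the report):

    # Back-of-envelope on the TPU unit, revenue and EPS figures cited above
    units_2027, units_2028 = 5e6, 7e6
    total_units = units_2027 + units_2028                   # 12MM units over two years
    revenue_per_500k = 13e9                                 # $13BN of revenue per 500k TPUs sold
    implied_revenue_per_unit = revenue_per_500k / 500e3     # ~$26,000 of system revenue per TPU
    eps_per_500k = 0.40                                     # $0.40 of EPS per 500k units
    assumed_fy25e_eps = 10.0                                # illustrative FY25e baseline (our assumption)
    eps_uplift = eps_per_500k / assumed_fy25e_eps           # ~+4% per 500k units
    print(f"{total_units/1e6:.0f}MM units; ~${implied_revenue_per_unit:,.0f}/unit; "
          f"+{eps_uplift:.0%} EPS per 500k units")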
|
|
|
We generated this infographic with Gemini to illustrate GPUs' flexibility advantage versus TPUs' efficiency advantage
|
|
|
|
Source: Green Ash Partners
|
|
- The market has been highly susceptible to zero-sum thinking throughout the AI bull market. Google started as a winner, only to be redesignated a loser, and is now back to being a winner again, while Microsoft and Oracle have experienced the reverse, and Amazon's narrative has been all over the place. The outlook for memory was bearish until, suddenly, we found ourselves in the most severe supply crunch of all time, and smaller networking/optical semis shoot higher or lower on any whisper out of Taiwan about a design win or loss, or a one-liner delivered at a hyperscaler conference. Ultimately we are in the early innings of a massive AI infrastructure build-out that is likely to grow at a +30-50% CAGR through 2030, depending on whose estimate you choose. A few may fall short of capitalising on this extraordinary period of investment, but generally speaking the rising tide can lift many boats.
|
|
|
Zero-sum thinking in November created a bifurcation between the Googleverse (GOOGL/AVGO) and the OpenAIverse (MSFT, NVDA, ORCL)
|
|
|
|
Source: Bloomberg; Green Ash Partners
|
|
- November was a huge month for new model releases, led by Gemini 3.0 from Google. This new foundation model advanced the frontier on all fronts, from verifiable domains that are amenable to reinforcement learning (like maths, code, and science) to agentic benchmarks, and the multimodal domains of computer vision, video understanding, physics and spatial awareness that are essential to realise the promise of embodied AI and humanoid robots.
|
|
|
Gemini excels in agentic benchmarks like Vending Bench 2, which simulates the operation of a vending machine, requiring the model to handle adversarial suppliers, negotiations, delivery delays, and customer complaints over long time horizons
|
|
|
|
Source: Company reports, Bloomberg; Green Ash Partners
|
|
- Even more surprising is Gemini's grasp of geospatial and temporal reasoning - for example, the following prompt generates an aerial image of the attack on Pearl Harbor:
|
|
|
"Show the full environment map for the scene at 21°21′54″N 157°57′00″W on Dec. 7, 1941 from the point of view of a person in the air."
|
|
|
|
Source: Gemini; Green Ash Partners
|
|
- Broad-based advancements across all benchmarks and modalities are usually the hallmark of larger models, and this was seemingly confirmed by DeepMind's VP of Research Oriol Vinyals: "Contra the popular belief that scaling is over—the team delivered a drastic jump. The delta between 2.5 and 3.0 is as big as we've ever seen. No walls in sight!"
- At the mid-point of parameter estimates, and using the 20x training tokens per parameter typical of MoEs, Gemini 3.0's pre-training run could have been as high as 6.75e+27 FLOPs - over 300x the compute used for GPT-4 (a rough sketch of the arithmetic follows below)
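For transparency, here is a minimal sketch of that estimate using the standard 6 x parameters x tokens approximation. The parameter count is back-solved from the 6.75e+27 figure rather than disclosed, the GPT-4 number is a widely cited external estimate, and for MoEs active rather than total parameters may be the more relevant count - so treat this as illustrative:

    # Pre-training compute sketch: FLOPs ~= 6 * params * tokens, with tokens = 20 * params
    target_flops = 6.75e27
    implied_params = (target_flops / (6 * 20)) ** 0.5       # ~7.5e12 under these assumptions
    gpt4_flops_estimate = 2.1e25                            # widely cited external estimate for GPT-4
    multiple_vs_gpt4 = target_flops / gpt4_flops_estimate   # ~320x, i.e. 'over 300x'
    print(f"implied params: {implied_params:.1e}; multiple vs GPT-4: {multiple_vs_gpt4:.0f}x")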
|
|
|
Gemini 3.0's pre-training run may have been more than 300x larger than GPT-4's
|
|
|
|
Source: Green Ash Partners
|
|
|
Scaling pre-training is back in vogue, as we look forward to several 1GW training clusters coming online over the course of the next year
|
|
|
|
Source: EpochAI
|
|
|
Even DeepSeek acknowledges the return of scaling in the paper published with the release of their latest model
|
|
- Anthropic released an update to their largest model, Claude Opus 4.5. While Gemini 3.0 remains the best broad model, Anthropic have solidified their lead in coding once again. In fact, on Anthropic's notoriously difficult take-home exam for software engineering applicants, Opus 4.5 scored higher than any human candidate ever has
- This may seem relevant only to coders, but this latest release has some important implications for everyone:
- Opus 4.5 appears to have broken the smooth performance improvement trendline, showing a discontinuous jump in capability on novel machine learning problems (the WeirdML benchmark), while simultaneously improving token efficiency by 3x (meaning better reasoning and higher intelligence density). This could be an early indicator of AI starting to accelerate the development of AI - Dario Amodei recently commented that some of Anthropic's researchers and engineers have stopped manually writing code entirely
- With strong coding capability comes the capability to conduct cyberattacks - Anthropic's red team reports: "in just one year, AI agents have gone from exploiting 2% of vulnerabilities in the post-March 2025 portion of our benchmark to 56% - a leap from $5,000 to $4.6 million in total exploit revenue. More than half of the blockchain exploits carried out in 2025 - presumably by skilled human attackers - could have been executed autonomously by current AI agents"
|
|
|
The Sonnet model family demonstrated a smooth, asymptotic curve of improvement in solving novel machine learning problems, whereas Opus 4.5 showed a discontinuous leap and a simultaneous 3x increase in token efficiency
|
|
|
|
Source: WeirdML; Peter Gostev
|
|
- All of this is to say that November was a month of multiple 'ChatGPT moments' for close watchers of the field. One notable absentee from the pitch was OpenAI, which has reportedly triggered a 'Code Red' internally. There are a couple of models rumoured to be in the pipeline for a 1Q26 release that may return OpenAI to the top of the leaderboard, though we would note that Grok 5 and Gemini 3.5 will probably arrive around then also
|
|
- Last month we cited GS economists' estimates for a 15% uplift in US labour productivity from AI over the next ten years. Anthropic got surprisingly close to this figure after analysing 100,000 chat interactions and cross-referencing against O*NET occupations and BLS wage data
- The study concludes that AI augmentation reduces task completion times by around 80% on average, and that full adoption over the next ten years could add 1.8% per year to labour productivity (a rough compounding cross-check follows this list)
- One interesting thing is that the data extraction, data analysis and the writing of the report itself seem to have been largely undertaken by Claude, Anthropic's language model. Perhaps we have crossed the Rubicon and will now routinely see unedited language model outputs in professional materials (especially following the recent breakthroughs in infographics and slide presentations)
- Diffusion of AI into the enterprise has seemed to lag the pace of capability expansion, but perhaps part of this is due to most of the frontier labs' attention being focused on coding. Now that other areas of knowledge work have risen up the priority list, we may see AI adoption accelerate more broadly
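As a quick cross-check of how the two productivity figures above fit together - this is our own compounding arithmetic, not taken from either study:

    # Compounding the Anthropic estimate over the same ten-year horizon as the GS figure
    annual_uplift = 0.018                                   # +1.8% per year to labour productivity
    years = 10
    cumulative_uplift = (1 + annual_uplift) ** years - 1
    print(f"cumulative ten-year uplift: {cumulative_uplift:.1%}")   # ~19.5%, same ballpark as GS's ~15%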
|
|
|
Claude estimating Claude's impact on human productivity
|
|
|
|
Source: Anthropic
|
|
|
|
Source: Anthropic
|
|
- Power efficiency often comes up in bear arguments on the useful life of GPUs - yes, maybe you can get paid back on your capex after 2-2.5 years, but why would you continue to operate older chips if newer ones are far more power efficient, and therefore far cheaper on a tokens per $ per watt basis?
- As a counter, we highlight that A100s from 2020 are still seeing high utilisation rates, and are renting for about $1/hr. At the average US electricity price of $0.17/kWh, these chips can generate 92% gross profit margins, and US electricity prices would need to double for gross margins to turn negative (a rough sketch of the arithmetic follows below)
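A rough sketch of the margin arithmetic - the A100 power draw and datacentre overhead are our assumptions rather than sourced figures, and only electricity is counted as a cost of running the chip:

    # A100 rental gross margin at current US electricity prices (illustrative)
    rental_rate = 1.00            # ~$1/hr A100 rental price cited above
    electricity_price = 0.17      # $/kWh, average US price cited above
    gpu_power_kw = 0.40           # ~400W board power (assumption)
    pue = 1.2                     # datacentre overhead multiplier (assumption)
    electricity_cost_per_hr = gpu_power_kw * pue * electricity_price   # ~$0.08/hr
    gross_margin = 1 - electricity_cost_per_hr / rental_rate           # ~92%
    print(f"electricity cost ~${electricity_cost_per_hr:.3f}/hr; gross margin ~{gross_margin:.0%}")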
|
|
|
US electricity prices would have to double for A100 gross margins to turn negative
|
|
|
|
Source: Green Ash Partners
|
|
- Power consulting firm Grid Strategies put out a report detailing the scale of forecast electricity demand growth in the US. Some of the key points:
- 5-year utility forecasts jumped from +24GW (2022) to +166GW (2025) - equivalent to adding 15x NYC's peak load
- Datacentres are driving 55% of total demand growth through 2030 (90GW of 166GW), with electricity usage growing +5.7% per year (the largest increase since the air-conditioning build-out of the 1960s) versus +3.7% peak demand growth
- Total electricity consumption is forecast to rise +32% by 2030, reflecting high datacentre load factors plus an industrial/manufacturing renaissance (a quick consistency check follows this list)
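A quick consistency check on the Grid Strategies figures - our own arithmetic, assuming the +5.7% annual usage growth compounds over roughly five years to 2030:

    # Does +5.7% per year square with +32% total growth by 2030?
    annual_growth = 0.057
    years = 5
    total_growth = (1 + annual_growth) ** years - 1
    print(f"implied total consumption growth by 2030: {total_growth:.0%}")   # ~32%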
|
|
|
Total electricity consumption in the US is forecast to rise +32% over the next 5 years
|
|
|
|
Source: Grid Strategies, SPP; Green Ash Partners
|
|
- As we've mentioned before, there may be an element of double-counting in the 90GW figure for datacentre demand, though it is in the same ballpark as Goldman (82GW) and BNEF (77GW). BNEF released a comprehensive database of datacentres under construction in early December, raising their 2030 and 2035 estimates by +19% and +36% respectively versus their forecasts of just 8 months ago (the capex implication is sketched below)
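The capex figure in the chart below follows from a simple multiplication; the $50BN/GW assumption is the one stated in the caption:

    # Capex implication of BNEF's forecast hike (illustrative)
    capex_per_gw = 50e9              # $50BN per GW of datacentre capacity (caption assumption)
    forecast_hike_gw = 28            # uplift to the 2035 forecast versus April 2025
    additional_capex = capex_per_gw * forecast_hike_gw
    print(f"additional implied capex through 2035: ${additional_capex/1e12:.1f}TN")   # $1.4TN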
|
|
|
Using $50BN/GW, a +28GW forecast hike versus April 2025 numbers equates to an additional $1.4TN of capex investment through 2035
|
|
|
|
Source: BNEF; Green Ash Partners
|
|
|
|