Green Ash Horizon Fund Monthly Factsheet - December 2025
The Horizon Fund’s USD IA shareclass rose +0.51% in December (GBP IA +0.47% and AUD IA +0.44%), versus +0.81% for the MSCI World (M1WO). This brought the full year return to +31.77% (GBP IA +31.47% and AUD IA +29.46%), versus +21.09% for the MSCI World.
- Despite some tumultuous moments, it was overall a very solid year for equity indices around the globe. Much is made of equity valuations, but nearly all of the S&P 500's price return in 2025 was driven by earnings growth
- Looking ahead to 2026, the US remains the main source of growth (and actually appears to be accelerating), S&P 500 earnings are expected to grow another +15%, and the AI infrastructure build-out appears locked in for another year at least
Please click below for monthly factsheet and commentary:
Source: Bloomberg; Green Ash Partners. The Green Ash Horizon Strategy track record runs from 30/11/17 to 08/07/21. Fund performance is reported from the 09/07/21 launch onwards (USD IA: LU2344660977; performance of other share classes on page 3). The strategy track record is based on a managed account held at Interactive Brokers Group Inc. Performance calculated using Broadridge Paladyne Risk Management software. Performance has not been independently audited and is for illustrative purposes only. Past performance is no guarantee of current or future returns and you may consequently get back less than you invested. Benchmark used is the M1WO Index
Here are some tidbits on the themes:
- After the close on Christmas Eve, NVIDIA announced a $20BN deal to non-exclusively license the technology of AI inference chip start-up Groq. In practice this is an acqui-hire structured to skirt antitrust scrutiny: reportedly 90% of Groq's employees will move to NVIDIA, including founder Jonathan Ross, who will take the role of NVIDIA's Chief Software Architect. Before founding Groq, Ross was involved in the development of Google's first Tensor Processing Unit (TPU)
- Groq's chips offer extremely low latency by using on-chip SRAM rather than HBM. The trade-off is that only a small amount of memory fits on each chip, so many chips need to be networked together - at native FP8 precision you could run the full-size DeepSeek model on 4 of NVIDIA's B200s, but you would need >3,000 Groq LPUs (see the sketch after this list), and even then you would be restricted on context length, which adds significant memory overhead as it scales
- Groq's technology works best for small model inference at the edge, in use cases like voice (e.g. customer service), or multi-agent architectures involving lots of small sub-agents with shorter context requirements
- The deal fits with NVIDIA's plan to disaggregate prefill/decode workloads starting with the Rubin generation this year, though Groq's contribution may not appear until the Feynman generation, expected around 2028
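The chip counts above fall out of simple memory arithmetic. A minimal sketch (the ~192GB of HBM per B200 and ~230MB of SRAM per Groq LPU are public specs; the calculation counts model weights only and ignores KV cache, activations, and networking overhead):

```python
import math

# DeepSeek 671B at native FP8: 1 byte per parameter, weights only
weight_bytes = 671e9

B200_HBM = 192e9   # ~192 GB of HBM3e per NVIDIA B200
GROQ_SRAM = 230e6  # ~230 MB of on-chip SRAM per Groq LPU

print(math.ceil(weight_bytes / B200_HBM))   # 4 B200s
print(math.ceil(weight_bytes / GROQ_SRAM))  # 2,918 LPUs -> ">3,000" once KV cache and overhead are included
```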
Groq's LPUs don't scale well to larger models, requiring an infeasible number of server racks to serve just one instance of DeepSeek 671B
Source: Green Ash Partners
However, by bringing Groq's LPUs into the CUDA ecosystem, NVIDIA can dominate both large model/high throughput and small model/low latency use cases
Source: Green Ash Partners
- NVIDIA unveiled their next system, the Vera Rubin, at CES 2026. The new generation goes well beyond a normal GPU iteration, leaning into full system design across all aspects of the rack. A few highlights:
- To train a 10T-parameter mixture-of-experts model in one month, Rubin would use ~75% fewer GPUs than Blackwell
- For inference at the same power draw, Rubin can serve about 10x more tokens per second (so roughly 10x lower cost per token)
- The Vera Rubin NVL72 integrates six new chips so that compute, networking, and control act as one system - each is newly designed and state-of-the-art: Rubin GPU, Vera CPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet
- The system can use warm water (45°C) for liquid cooling, cutting overall power consumption by ~6% thanks to a reduced need for chillers
- Also unveiled was a new architecture for context memory, which enables SSDs to be used as KV cache overflow - this seeks to address inference bottlenecks encountered when serving very large models with long contexts (>1 trillion parameters + >1 million tokens); the sketch below shows why contexts of that length overwhelm GPU memory
Source: NVIDIA developer blog; Green Ash Partners
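To see why million-token contexts create the bottleneck this architecture targets, here is a back-of-envelope KV cache sizing, using an assumed model configuration (the layer count, KV heads, and precision below are illustrative, not NVIDIA's figures):

```python
# KV cache per token = 2 (K and V) x layers x KV heads x head dim x bytes per element
layers, kv_heads, head_dim = 90, 8, 128  # assumed GQA-style config for a very large model
bytes_per_elem = 2                       # FP16 cache

kv_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(f"{kv_per_token / 1e6:.2f} MB per token")                      # ~0.37 MB

context_tokens = 1_000_000
print(f"{kv_per_token * context_tokens / 1e9:.0f} GB per sequence")  # ~369 GB
```

On these assumptions a single million-token sequence needs more cache than any current GPU's HBM can hold, which is why spilling overflow to SSD tiers is attractive.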
- The volatility in AI datacentre stocks towards the end of the year was largely driven by concerns over capital markets funding. These fears have since been partially allayed by news that OpenAI is raising another $100BN (at a $750BN valuation), xAI has raised $20BN (at a $220BN valuation), and Anthropic is looking for another $10BN (at a $350BN valuation). All three may IPO this year, potentially raising tens of billions in further funding. This supports our view that the capex cycle in AI datacentres is locked in through 2026, with increasingly clear visibility into 2027
- We reiterate that the significant new capacity due to come online over the course of the year is not adequately reflected in street estimates for hyperscale cloud revenue growth
- Earnings call transcripts suggest we are still very early in the AI adoption curve. Only a quarter of the S&P 500 falls into the AI adopters category, as defined by Morgan Stanley (and 70 of the 500 S&P companies, or 14%, are in the information technology sector, suggesting very low penetration in non-tech sectors)
The diffusion rate of AI through large enterprises is rapid relative to previous technologies, yet still very early, with only 15% of the S&P 500 discussing quantifiable benefits on earnings calls
Companies experiencing quantifiable AI benefits
Source: Morgan Stanley Research
Labour productivity and cost savings are by far the most cited benefits on company earnings calls
Mentions of AI benefits in earnings/conference transcripts
Source: Morgan Stanley Research. n=13,500 transcripts
This chart from Goldman Sachs is striking - perhaps this is a canary for other areas of knowledge work as AI adoption spreads
Source: Tony P, Goldman Sachs
- Progress in robotics is accelerating, with humanoid form-factors starting to perform useful work in factories. For a well-reasoned and detailed essay on the topic, we recommend The Final Offshoring, by Jacob Rintamaki, which includes a dynamic financial model allowing readers to plug in their own assumptions
Amazon is on track to have more robots than employees by the end of the decade
Source: ARK Invest; Green Ash Partners
Roughly 7 billion prescriptions are filled in the US per annum, at a cost of $13-15 per prescription. Furthermore, medication non-adherence (caused in part by friction in getting a prescription) is estimated to cost the healthcare system $300BN per year
Source: Green Ash Partners
- >5% of all ChatGPT messages globally are about health-related issues, and 230 million users ask a question about healthcare every week. OpenAI has responded to this data by announcing ChatGPT Health, which allows users to connect their medical records and health and fitness apps to a separate ChatGPT environment to help with things like understanding recent test results, preparing for doctor's appointments, getting advice on diet and workout routines, or understanding the trade-offs of different insurance options based on personal healthcare data
- We expect battery storage (BESS) to become ubiquitous in GW-scale datacentre power architecture for load-balancing, especially where grid connections have been supplemented by on-site gas turbines and solar
- xAI is using 420 Megapacks at their 1GW Colossus datacentre in Memphis. This is purely for load balancing rather than back-up power (the 1.64GWh capacity would last less than 2hrs at full load; see the sketch below). If the anticipated ~100GW of AI datacentres being built globally through 2030 adopted batteries for this purpose, it would only add ~1% to the total cost per GW (~$400-500MM), but it would be a big deal for Tesla (~$40-50BN spread over 5 years would double energy generation & storage segment revenues)
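A quick sanity check on those figures (a sketch: the ~3.9MWh per Megapack is a public spec, but the installed $/kWh below is our illustrative assumption rather than a quoted price):

```python
megapacks = 420
mwh_each = 3.9                        # ~3.9 MWh per Megapack 2 XL
capacity_gwh = megapacks * mwh_each / 1000

print(f"{capacity_gwh:.2f} GWh total")                               # ~1.64 GWh
print(f"{capacity_gwh / 1.0:.1f} hours at 1GW full load")            # <2 hrs: load balancing, not back-up

usd_per_kwh = 275                     # assumed installed cost per kWh (illustrative)
bess_cost = capacity_gwh * 1e6 * usd_per_kwh                         # GWh -> kWh, then x $/kWh
print(f"${bess_cost / 1e6:.0f}MM of batteries per GW")               # ~$450MM
print(f"{bess_cost / 45e9:.1%} of ~$45BN per-GW datacentre capex")   # ~1%
print(f"${bess_cost * 100 / 1e9:.0f}BN if ~100GW adopt by 2030")     # ~$45BN
```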
Tesla's battery deployments have risen nearly 12x in five years and now represent nearly a quarter of Tesla's gross profit
Source: Tesla; Green Ash Partners
Megapacks on-site at xAI's Colossus datacentre
Source: xAI; Green Ash Partners