Source: SDXL, FusionArt
Green Ash Horizon Fund Update: AI - The Next Leg
Executive Summary
We are currently working through a period of 'peak' uncertainty when it comes to interest rates and the outlook for economic growth. This will remain a source of anxiety for investors, though we remain confident that inflation was largely a supply-side phenomenon, brought about by the pandemic and, to a lesser extent, the Russian invasion of Ukraine.
We view the boom in AI infrastructure investment YTD as the new normal - a durable, secular growth theme with an expanding TAM, that will lift AI-exposed semi stocks in particular and the semiconductor industry in general.
We expect to see an inflection higher in top line cloud growth earlier than is being priced by the market, as the tailwind from GenAI activity becomes apparent, and builds on a recovery in cloud activity after the post-pandemic normalisation. This could happen as soon as next quarter.
We expect Google to release Gemini in Q4, which has been trained from the ground up to be multimodal (speculated to be text, code, image, audio and possibly video). It is also expected to have improved memory and planning capabilities. OpenAI is rumoured to be training a multi-modal 'everything-to-everything' model of their own. These next generation models may demonstrate new emergent capabilities with implications for robotics and autonomous agents.
Having already seen a revenue inflection in AI hardware this year, we think that enterprise software is the next big revenue opportunity in the near term, as copilots are rolled out across software platforms with hundreds of millions of users. Microsoft, Alphabet, Adobe, Salesforce and Intuit are already well into their launches, and we will see a material uplift in revenues from these new products in the coming quarters. We expect new innovations to follow as capabilities expand, and new behemoths to emerge, just as smartphones and the app store unleashed a wave of disruptive business models and services.
The change ahead is unprecedented - larger in scope and faster to market than previous tech trends. We are still just in the foothills, with a steep climb ahead and a peak that is far out of sight. We think AI is the theme of this decade, something with implications for every company in every sector. Also unusual is that investors can get exposure to much of the opportunity and value in public markets. The main focus of the Horizon Fund is to capitalise on this in the years ahead.
We are available for calls if investors want to discuss further - please do get in touch.
It has been less than 10 months since ChatGPT captured the public imagination, bringing AI to the forefront of the cultural zeitgeist. Since then we have had Senate hearings on AI safety, a broad-based scramble for AI strategies amongst large corporates, and millions upon millions of words written by journalists, economists and analysts attempting to predict the real-world impact of such a powerful general purpose technology. Analogies for AI range from electricity and the internet all the way through to nuclear weapons.
This excitement helped drive a massive rally in perceived 'AI stocks', with the Nasdaq 100 up almost +50% on the year at one point over the summer. Some of this has unwound in August and September, and there is a sense of ChatGPT fatigue amongst investors and in the press. This has coincided with waning confidence in the disinflation narrative - hard economic data has been far stronger than expected at the start of the year, when most forecasters were predicting an imminent recession, labour markets have been resilient, and there are even signs of life in survey data, pointing to a possible pick-up in economic activity ahead. A solid US economy might sound like good news, but it has resulted in a 'higher for longer' base case for interest rates being priced into equity valuations and a general sense of gloom in the markets.
Equity returns this year have been largely driven by AI momentum - without the Top 7 MegaCap tech stocks, the S&P 500 would only be up +4% YTD
Source: Bloomberg; Green Ash Partners
AI momentum has also enabled tech stocks to overcome their inverse correlation to government bond yields, though the recent move higher in yields has started to re-assert this relationship
Source: Bloomberg; Green Ash Partners
Having set the scene, we would like to make the case that the Age of AI has only just begun, and we see numerous catalysts ahead, which, individually or as a whole, are sufficiently impactful to wrest the spotlight away from the 'higher for longer' narrative as we head into year end.
Source: adapted from a16z Enterprise; Green Ash Partners
Compute Hardware - AI Accelerators
The most immediate financial impact of the AI roll out has been felt in the semiconductor sector, most notably by NVIDIA, which has rallied +180% YTD and joined the $1 trillion market cap club.
NVIDIA's rapid rise has been underpinned by EPS growth, with calendar 2023/fiscal 2024 adj. EPS estimates +145% higher than the start of the year
Source: Bloomberg; Green Ash Partners
Despite the huge rally YTD, at $415, NVIDIA's NTM P/E of 28x is the same as it was in Oct-22 (stock price $108) and Mar-20 (stock price $54)
Source: Bloomberg; Green Ash Partners
"AI computing is the future of computing. So long as we continue to make our platform the best platform for AI computing, I think we're going to have a good shot of winning lots of business. GPUs will be all over companies." - Jensen Huang, November 2016
NVIDIA's ~70-80% share of the AI accelerator market isn't an accident, but the culmination of 10-15 years of investment by Jensen Huang, who recognised the deep learning revolution early on. Part of their moat is in software, starting with their CUDA toolkit which was released in 2006, and abstracted away a lot of the painful, low-level machine code required to program GPUs.
NVIDIA's push into full 'AI factory' DGX systems - via InfiniBand connectivity (through their Mellanox acquisition) and their Grace Hopper CPU - has helped datacentre sales rise to 73% of total revenues, versus just 5% 10 years ago...
Source: Bloomberg; Green Ash Partners
...and enabled industry-leading gross margins
Source: Bloomberg; Green Ash Partners
But as per recent comments from AMD's Lisa Su, the AI accelerator market is in its early stages of growth - likely to finish the year at around $30BN by her estimates, but expected to grow to $150BN over the next 4 years (+50% CAGR).
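As a quick sanity check on those figures (our arithmetic, not AMD's), the implied growth rate can be computed directly:

```python
# Implied CAGR from AMD's AI accelerator market estimates:
# ~$30BN this year, growing to ~$150BN over the next 4 years
start, end, years = 30e9, 150e9, 4

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~49.5%, consistent with the ~+50% figure
```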
For hyperscalers with the resources to build their own datacentre architectures, there is a compelling incentive to develop chips in-house to avoid paying NVIDIA's >70% gross margins. This is typically achieved through partnerships with semiconductor companies on application-specific integrated circuit (ASIC) designs.
Broadcom has been Google's partner for several generations of TPUs, and this year has seen a major ramp in AI revenues - $1BN last quarter alone, and guided to ramp by +50% QoQ/+100% YoY next fiscal quarter. Management expect AI-derived revenues to rise from 14% to 25% of semiconductor revenues over the next five quarters (implying an $8 billion annual run rate). Amazon are reportedly in talks with Marvell, despite having their own in-house solutions.
A recent deep dive from SemiAnalysis puts Broadcom's FY24e/25e P/E at 15.2x/13.5x in an 'AI-acceleration' scenario
Source: Bloomberg; Green Ash Partners
State of the art AI accelerator chips all rely on next generation packaging called CoWoS (Chip on Wafer on Substrate), which is solely supplied by TSMC. NVIDIA, Broadcom, AMD and Amazon are all placing large orders, and consequently watching industry comments on TSMC's CoWoS capacity ramp is a good way of monitoring forward demand. TSMC has already massively upgraded their capacity plans for next year to 15-20,000 wafers per month (12,000 currently), and the latest rumour is that they may be raising this again to 25-30,000 by the first half of next year.
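Taking the capacity figures above at face value, the implied ramp is substantial (our arithmetic, based on the reported and rumoured numbers):

```python
# Implied growth in TSMC CoWoS packaging capacity (wafers per month)
current = 12_000
planned_range = (15_000, 20_000)    # upgraded plan for next year
rumoured_range = (25_000, 30_000)   # latest rumour, for 1H next year

for label, (lo, hi) in [("planned", planned_range), ("rumoured", rumoured_range)]:
    # Growth vs today's run rate, at both ends of the range
    print(f"{label}: +{lo / current - 1:.0%} to +{hi / current - 1:.0%} vs current")
```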
While we have so far focused on the compute side of things, AI-optimised datacentre architecture requires cutting edge connectivity solutions (also provided by Broadcom and Marvell) and large amounts of high bandwidth memory (HBM, provided by SK Hynix, Samsung, and, soon, Micron). The last few years have seen a run rate of ~$200BN in cloud investment on top of the datacentre boom underpinning digitalisation, but the world's estimated $3TN installed base of compute infrastructure is largely based on CPUs and not optimised for AI workloads. Total cost of ownership (TCO) and energy usage considerations make a strong case to upgrade for the new era of computing.
We view the boom in AI infrastructure investment YTD as the new normal - a durable, secular growth theme with an expanding TAM, that will lift AI-exposed semi stocks in particular and the semiconductor industry in general.
The shift to Public Cloud is still far from complete, but growth took a pause in 2022, as corporate belt tightening and some post-COVID normalisation met the hyperscalers' consumption-based business models. This was referred to euphemistically as 'optimisation', but it manifested as a material slowdown in top line growth rates for the Big 3 public cloud providers.
The Big 3 Hyperscalers were growing revenues at an average of +38% YoY during the pandemic, stepping down to +32% YoY during the last six quarters of 'optimisation'. Street forecasts expect stabilisation at a lower +23% YoY growth rate over the next six quarters, with no inflection built in for GenAI
Source: Bloomberg; Green Ash Partners
Annual cloud revenues from the Big 3 have hit a run rate of >$180BN, so the law of large numbers may hold some sway over the top line growth percentages ahead, though we would argue that the secular shift of on-prem datacentre workloads to public clouds is still sufficiently early (still <50% penetrated) to maintain this level of growth without any contribution from generative AI.
And we believe the contribution from generative AI will be large. While data is scarce on exact training costs for leading-edge foundation models, comments from prominent industry figures suggest log scaling is still underway. Anthropic, for example, is budgeting $1 billion to train the next version of Claude. But ultimately the majority of compute spend will come from inferencing, as models are integrated into every software product.
The exact extent of the tailwind GenAI might deliver to cloud providers remains an open question. Microsoft has been the first to include some impact in their guidance (management estimate GenAI is contributing +200bps to growth currently), but we are in the earliest stages of the roll out. Full deployment of GenAI could increase inferencing volumes by several orders of magnitude (though we expect inferencing costs to decline by orders of magnitude too).
We expect to see an inflection higher in top line cloud growth earlier than is being priced by the market, as the tailwind from GenAI activity becomes apparent, and builds on a recovery in cloud activity after the post-pandemic normalisation. This could happen as soon as next quarter.
The Cambrian explosion is an over-used analogy in tech, but it is worth remembering that the speed of iteration and development in AI remains rapid, and extends well beyond ChatGPT.
When ChatGPT was released at the end of last year, open-source language models were at least a year behind in capabilities and the exponentially expanding compute requirements to stay at the frontier greatly limited participation. Meta has since shaken things up by open-sourcing pre-trained language models Llama and Llama-2, the latter approaching the original ChatGPT in performance.
Competent open-source models are proliferating, aided by commercial license LLM releases from Meta
In On the Horizon #5, we wrote: "As developers now build on top of pre-trained foundation models, the pace of progress is defined less by compute or model parameters and more by capability, which is advancing across multiple domains on a weekly basis. Mass adoption of LLMs across the industries mentioned in this essay would require huge amounts of compute resources, however over time Moore’s Law (or Huang’s Law) and better algorithmic efficiency should bring down costs significantly"
Since then, we have seen significant progress in optimising LLMs and extracting useful levels of performance from much smaller models. Not only does this reduce costs, but it also allows inferencing on edge devices, opening up a whole new set of potential use cases and features.
Ever smaller transistors (from 28nm down to 5nm) have only accounted for 2.5x of the 1000x increase in NVIDIA's single-chip inference performance over the last 10 years. The bulk of the gain has been through optimisation (better number representation 16x, complex instructions 12.5x, sparsity 2x)
Source: NVIDIA chief scientist Bill Dally at IEEE HotChips 2023, NVIDIA; Green Ash Partners
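The decomposition above multiplies out as stated - a quick check, using the factors as quoted in Bill Dally's HotChips presentation:

```python
# Contributions to NVIDIA's ~1000x single-chip inference gain over 10 years
# (per NVIDIA chief scientist Bill Dally, IEEE HotChips 2023)
factors = {
    "smaller transistors (28nm -> 5nm)": 2.5,
    "better number representation": 16,
    "complex instructions": 12.5,
    "structured sparsity": 2,
}

total = 1.0
for name, gain in factors.items():
    total *= gain
print(f"Combined gain: {total:.0f}x")  # 1000x
```

The striking point is that process shrinks account for only 2.5x of the total - the remaining ~400x came from architectural and numerical optimisation.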
Clever optimisations relating to data preparation, model architectures and fine-tuning pipelines help models large and small, open or closed source alike, but, at the frontier, the next breakthroughs will come from multi-modal foundation models and scale.
Back in March, we pegged the market opportunity for foundation models at a similar level to AI accelerators and the cloud platforms, however since then the lines between these layers of the stack have become increasingly blurred through vertical integration. NVIDIA (as expected) is verticalising into Cloud (through their Oracle partnership) and training foundation models of their own. Microsoft (with OpenAI) and Google are offering pre-trained models via their clouds, as well as making their own finished products available directly.
Multi-modal foundation models have not hit the mainstream yet - there has been some experimentation with Google Lens' integration into Bard, and OpenAI have just announced the release of GPT-4's vision capabilities, along with audio transcription and voice generation, but these models are not natively multi-modal, and were trained over a year ago.
We expect Google to release Gemini in Q4, which has been trained from the ground up to be multimodal (speculated to be text, code, image, audio and possibly video). It is also expected to have improved memory and planning capabilities. OpenAI is rumoured to be training a multi-modal 'everything-to-everything' model of their own. These next generation models may demonstrate new emergent capabilities with implications for robotics and autonomous agents. This could provide the next major catalyst for AI stocks, and also set off another cascade of public debate as the potential disruptive impact of generally capable AI broadens even further.
ChatGPT was intended as a tech demo, not a product, and everyone, including OpenAI, was astonished by its reception - it was the first tech release to reach 100 million MAUs in just a few weeks, and it is currently the 28th most visited website in the world. Since then, OpenAI has steadily added features to their paid offering ($20/month), and is reportedly on track to generate $1BN in revenues this year and 'many billions' next year. ChatGPT Plus is guesstimated at just 1% penetrated, suggesting about three quarters of OpenAI's 2023 revenues will come from enterprise users via their API - in Microsoft's last earnings call they noted there are now 11,000 enterprise customers on the Azure OpenAI platform, up +144% QoQ.
The point here is that the next wave of AI revenue uplift will not come from the consumer, as in previous platform shifts like the smartphone, but from enterprises. Every major provider of productivity software has leapt on the idea of AI assistants, with Microsoft's Copilot leading the charge, and these well-established platforms have the distribution to drive rapid adoption in the workplace.
Mentions of AI in earnings calls have exploded since the launch of ChatGPT, and not just in the tech sector
Source: Bloomberg; Green Ash Partners. BICS Sectors
Early studies, as well as internal reports from large enterprises, peg the productivity gains from GenAI at +30-50% - early use cases range from coding, to marketing, sales and customer service (customer service alone is a $400BN business in the US). JPM estimate the value of a knowledge worker in the US at $150,000-200,000 per annum - it is easy to imagine very high uptake of a tool that can increase productivity amongst this group by ~+30% at a cost of a few hundred dollars a year.
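A rough back-of-envelope on those numbers makes the point (illustrative assumptions only: the midpoint of JPM's worker-value estimate, the low end of the cited productivity range, and a copilot priced at $30/user/month):

```python
# Back-of-envelope ROI on an AI copilot for a US knowledge worker
worker_value = 175_000      # midpoint of JPM's $150,000-200,000 p.a. estimate
productivity_gain = 0.30    # low end of the +30-50% range from early studies
tool_cost = 30 * 12         # $30/user/month - a few hundred dollars a year

value_created = worker_value * productivity_gain
roi = value_created / tool_cost
print(f"Value created: ${value_created:,.0f}/yr, ROI: {roi:.0f}x on tool cost")
```

Even if the realised productivity gain were an order of magnitude smaller than these early studies suggest, the payback would still be compelling - which is why we expect rapid enterprise uptake.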
Having already seen a revenue inflection in AI infrastructure hardware this year, we think that enterprise software is the next big revenue opportunity in the near term, as copilots are rolled out across software platforms with hundreds of millions of users. Microsoft, Alphabet, Adobe, Salesforce and Intuit are already well into their launches, and we will see a material uplift in revenues from these new products in the coming quarters. We expect new innovations to follow as capabilities expand, and new behemoths to emerge, just as smartphones and the app store unleashed a wave of disruptive business models and services.
We are currently working through a period of 'peak' uncertainty when it comes to interest rates and the outlook for economic growth. This will remain a source of anxiety for investors, though we remain confident that inflation was largely a supply-side phenomenon, brought about by the pandemic and, to a lesser extent, the Russian invasion of Ukraine.
We have previously highlighted papers from researchers and analysts attempting to quantify the impact of AI on the economy and the labour market (e.g. Goldman Sachs estimate a +1.5% rise in productivity over the next ten years and $7 trillion added to global GDP; or, Accenture estimate that 40% of working hours across industries may be impacted by LLMs). Studies continue to emerge showing significant productivity gains across various industries and tasks, and we have started to notice productivity gains highlighted in corporate earnings calls, especially in areas like software engineering. We expect to see this expand across industries in the coming quarters, and be sufficiently impactful to start showing up in labour market data as soon as next year, especially in companies with high G&A expenses and in many areas of knowledge work.
A recent real-world study, conducted by BCG on their own staff, showed the gap between above average and below average consultants narrows from 22% to 4% with access to an AI assistant, as measured by a set of 18 business-related tasks
It is important to note that these analyses are based on models which were trained over a year ago. It's impossible to predict the impact of future models if they continue to improve exponentially with scale, multimodal capabilities are added, and advancements in planning, reasoning and memory enable some level of autonomy.
This time last year, no one would have believed that an AI chatbot would appear almost overnight and drive 1.5 billion website visits per month, or that tech behemoths like Alphabet and Microsoft would radically transform their crown jewels - Search and Office 365 - infusing generative AI into every aspect of the user experience. Who would have believed that we were months away from the release of a language model that could pass the Bar exam, or achieve expert-level medical diagnosis? Or that weeks after that, a healthcare-specific model like Med-PaLM M could add other modalities like genomic data or medical images like x-rays and mammograms to provide holistic answers to medical questions? Meanwhile voices can be cloned from a sample a few seconds long, podcasts are available in different languages in the podcaster's own voice (video lip-syncing to follow), and image, video and music generators improve every month. Tesla has replaced >300,000 lines of code in their self-driving program with just 3,000, relying entirely on the compute/data scaling approach that has been so successful in these various modalities, and success here will have implications well beyond cars.
The change ahead is unprecedented - larger in scope and faster to market than previous tech trends. We are still just in the foothills, with a steep climb ahead and a peak that is far out of sight. We think AI is the theme of this decade, something with implications for every company in every sector. Also unusual is that investors can get exposure to much of the opportunity and value in public markets. The main focus of the Horizon Fund is to capitalise on this in the years ahead.