Tech Thoughts Newsletter – 2 June 2023.
Market: Tech continues to outperform the broader market this year, and strong demand for AI solutions is adding to that trend. Price moves within tech have broadened out somewhat but are still very much driven by the mega caps.
Portfolio: we have not made any major adjustments to our portfolio this week, as we feel our exposure to growing trends like AI is the right one, both in size and in the companies we hold.
The excitement around Nvidia’s (owned) results last week was followed by Jensen Huang’s appearance at the Computex trade show in Taiwan, alongside a slew of new announcements from businesses collaborating with Nvidia on new AI capabilities:
- Ad agency WPP will work with Nvidia to develop a generative AI-based content engine that will enable creative teams to produce “high-quality commercial content faster, more efficiently and at scale while staying fully aligned with a client’s brand”.
- Mediatek announced it would adopt Nvidia’s AI accelerators for its future range of auto semi-SoCs, with plans to release the first product in late 2025 on TSMC’s 3nm node.
- And on the manufacturing side, Nvidia announced models for simulating and testing robots, and for automated optical inspection and quality control, with partners including ADLINK, Aetina, Siemens and Deloitte, and with Pegatron creating virtual factories to try out new processes in a simulated environment.
Again, the message is that this shift to accelerated computing is happening, and that Nvidia clearly has the best chips (and, importantly, the best ecosystem) to build on.
In his keynote, Jensen outlined the benefits of shifting workloads to GPUs (recalling last week’s quote from the earnings call of 1 trillion dollars of infrastructure which needs to shift) and, very specifically, what training a large language model with CPUs would cost vs GPUs, demonstrating how much better GPUs are from a total cost perspective.
- It costs $10m for 960 CPU servers, consuming 11 GWh, to train 1 large language model.
- For the same cost ($10m), you can buy 48 GPU servers, consuming 3.2 GWh to train 44 large language models.
- Or you can buy 2 GPU servers for $400k, consuming 0.13 GWh to train the same 1 large language model – 25x less expensive.
It’s important because the pushback against moving workloads to GPUs is that GPU servers are expensive. And that is true in absolute unit terms. But if you want to build the most cost-effective data centre (not the most cost-effective single server), GPUs are much better for parallelisable workloads. That part of the keynote ended with Jensen saying, “The more (Nvidia chips) you buy, the more you save”(!!), which is correct.
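The arithmetic behind the keynote comparison can be sanity-checked directly. A quick sketch (the dollar, energy and model-count figures are as quoted above; this is just arithmetic, not Nvidia’s own model):

```python
# Sanity-check of the CPU-vs-GPU training economics quoted from the Computex
# keynote. All figures are as stated in the keynote slides.

cpu          = {"cost_usd": 10_000_000, "energy_gwh": 11.00, "llms_trained": 1}
gpu_iso_cost = {"cost_usd": 10_000_000, "energy_gwh": 3.20,  "llms_trained": 44}
gpu_iso_work = {"cost_usd": 400_000,    "energy_gwh": 0.13,  "llms_trained": 1}

def cost_per_model(s):
    return s["cost_usd"] / s["llms_trained"]

# Same $10m spend: GPU servers train 44 models where CPU servers train 1.
print(cost_per_model(cpu) / cost_per_model(gpu_iso_cost))  # 44.0

# Same single model: $400k of GPU servers vs $10m of CPU servers = 25x cheaper.
print(cpu["cost_usd"] / gpu_iso_work["cost_usd"])          # 25.0

# Energy for one model: 11 GWh on CPUs vs 0.13 GWh on GPUs (~85x less).
print(round(cpu["energy_gwh"] / gpu_iso_work["energy_gwh"], 1))  # 84.6
```

The “25x less expensive” claim is the iso-work comparison ($10m vs $400k); the energy saving on the same figures is even larger than the cost saving.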
We’ve written before about Nvidia’s biggest long-term competitive advantage, CUDA – the software ecosystem that sits on top of Nvidia’s hardware.
One of the arguments Jensen made was that “Accelerated computing is full stack”. It’s different from CPUs, which are relatively more flexible and easier to program. Writing parallelisable software is much more difficult – you need to re-engineer everything from the chip to systems to the system software, which is where CUDA comes in. CUDA removes much of the complexity of writing parallelised software and makes it much easier to implement programs that run on Nvidia GPUs. It’s a large part of what has allowed Nvidia to integrate itself into most of the world’s AI frameworks.
There is little doubt that Nvidia will dominate the large language model market for the foreseeable future. There are 4 million developers, 3,000 applications, 40 million CUDA downloads, and 15,000 startups built on Nvidia. Nvidia has become the de facto standard for software developers around GPUs and accelerated computing – that software and developer ecosystem is a very effective moat (you can draw a parallel with Apple’s business model).
There are two broader questions: (1) how many of the workloads that run on CPUs today make sense to shift to parallelised workloads; and (2) how much bigger the inference market will be, and to what extent Nvidia can sustain its competitive advantages and dominate it as fully as we assume it will dominate training. We think inference is more of an open market: the GPU and networking/interconnect requirements (where Nvidia has a key advantage) are lower, and we expect AMD can be competitive.
What is clear, though, is that Nvidia’s chips are currently the best at performing these generative AI tasks, and its competitive moat – CUDA and the developer ecosystem built around it – is very hard to supplant. That makes it the clear winner in the initial AI build-out.
Other news/results:
AI as an enterprise software opportunity is the key debate.
- Following on from Workday last week, we saw more software companies reporting this week.
- We’ve said before that AI as an enterprise opportunity makes sense for established enterprise software businesses that can (1) leverage very large installed bases to upsell AI features; (2) draw on large cash balances to invest in AI; and (3) match AI’s increased cost with the right business model – i.e. subscription, not pay-per-click (like search).
- And for enterprise software customers, it makes much more sense to get AI capabilities via their existing platform software providers (Microsoft, Workday, ServiceNow, and Salesforce can also benefit from upselling AI features) than to try to implement it themselves.
- Salesforce (owned) reported this week and that was certainly the message CEO Marc Benioff wanted to give:
- “The coming wave of generative AI will be more revolutionary than any technology innovation that’s come before in our lifetime or maybe any lifetime. Like Netscape Navigator, which opened the door to a greater Internet, a new door has opened with generative AI, and it is reshaping our world in ways that we’ve never imagined. Every CEO realizes they’re going to have to invest in AI aggressively to remain competitive, and Salesforce is going to be their trusted partner to get them to do just that. Every CEO I’ve spoken with sees AI as a revolution beginning and ending with the customer, and every CIO I’ve spoken with wants more productivity, more automation, and more intelligence through using AI.”
- While we’d acknowledge a good degree of positive PR on the call from a pretty promotional CEO, there was also a decent amount of substance: Their Einstein AI product generates more than 1 trillion predictions a week around sales conversion and suggested follow up actions for sales leads.
- And it’s not hard to imagine, given Salesforce’s product set across Data Cloud, marketing and commerce clouds, and Slack, that the company is well set up to utilise AI. In just the same way as Microsoft deploying AI across its suite of apps is compelling, Salesforce should be able to do the same.
- The million-dollar question, though, is when and how Salesforce can start to monetise this.
- The results themselves were solid – top-line KPIs (revenue and cRPO) were ahead of guidance and consensus, though there was no upgrade to the full-year revenue guide. That perhaps speaks to the macro uncertainty all software businesses will face into the second half (Salesforce particularly noted a slowdown in its professional services business), with no one escaping elongated deal cycles and greater deal scrutiny, even with an AI tailwind.
- Margins delivered again, and the company raised its FY24 operating margin guidance by ~100bps to 28% (which would be +550bps yr/yr).
Portfolio view: We have to acknowledge that, so far in the fundamental results we’re seeing, AI is a cost for software companies, and aside from Microsoft’s cloud business, which we think is seeing share gains largely as a result of its partnership with OpenAI, there has been little evidence of ARPU upside coming from AI.
The question is if and when AI starts to be a meaningful revenue generator for software companies rather than the current cost of doing business. We’re optimistic that platform players like Microsoft, ServiceNow, Workday, and Salesforce have the ability to invest and will continue to see spend (including AI spend) consolidate around them.
Datacentre/server – AI demand is real but still in the early innings; general-purpose compute still seeing weakness in enterprise spend
- HPE (not owned) missed on sales and trimmed its full-year top-line growth guidance to 4-6% from an earlier 5-7%. Sales cycles have elongated, with customers more reluctant to commit quickly to large projects and some seeking additional internal approvals at the time of the order.
- The weakness was most pronounced in North America, among financial and manufacturing customers, while Asian and European demand held up well.
- The weakness was mainly focused on general-purpose compute, servers and storage.
- On the AI side, though, the company is seeing strength – AI, Intelligent Edge and the hybrid cloud solution GreenLake enjoyed strong demand.
Portfolio view: We think the company’s comments around the results are as expected: weakness in general compute while AI and Edge demand is strong. That’s reflected in Trendforce’s new forecast for AI server growth: 38% in 2023 and 25-27% annual growth in 2024-2026.
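Taken at face value, those growth rates compound to a substantial multiple of today’s AI server volumes. A quick illustration (our own arithmetic, indexed to a 2022 base of 1.0 and assuming the quoted rates apply year by year):

```python
# What Trendforce's AI server growth forecast compounds to by 2026,
# indexed to a 2022 base of 1.0: 38% growth in 2023, then 25-27% p.a. 2024-2026.

base = 1.0
low  = base * 1.38 * 1.25**3   # lower bound: 25% p.a. for 2024-2026
high = base * 1.38 * 1.27**3   # upper bound: 27% p.a. for 2024-2026

print(round(low, 2), round(high, 2))  # roughly 2.7x to 2.83x 2022 volumes
```

In other words, the forecast implies AI server shipments close to tripling over four years, which is consistent with our “early innings” framing.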
We don’t own server (box) businesses, which are exposed to inventory cycles and pricing pressure, though we follow them as a good read to the broader spend environment.
Intel (not owned) fighting for relevance in AI… but accepting defeat in training..
- Intel was out speaking at an investor conference this week. More broadly, they seem to be seeing signs of an end to the datacentre inventory correction (which we cover with HPE above), but on the AI opportunity they sound accepting of Nvidia’s dominance of the training GPU space (the “hero systems”), and will instead try to find a place in inference at the edge (closer to the end device, which is more cost-effective than sending every inference query back to the cloud each time):
“But we see as things progress in AI, yes, at the beginning, probably a lot of investment in what Pat likes to call these hero systems, the big systems that require a lot of compute where cost probably is the bigger factor there. But as things proliferate into the network, into the enterprise data center, into the PC space into the edge compute, a lot of that will be handled more off of CPUs.”
- More broadly for their datacentre business, a bit like HPE, the general purpose compute data centre market has clearly been undergoing an inventory correction in enterprise (though signs that that looks to be closer to the end, given Intel reaffirming guidance at the upper end of their prior forecast range).
- The strength they are seeing is in their own AI accelerator business, Gaudi, where the deal pipeline (albeit small) is up 2.5x over the last 90 days. As above, Intel is not positioning itself as competing head-on with Nvidia, but AI demand is clearly real beyond the immediate Nvidia effect.
Portfolio view: We still back AMD to be the winner in CPU workloads within AI, where they have a significant performance leadership over Intel.
Networking and AI
- Broadcom’s (owned, small position) sales and earnings came in broadly in line with expectations.
- The outlook was slightly above expectations driven by the networking semis business, mainly the Tomahawk switching chips. The growth is mainly coming from connecting AI clusters of GPUs and CPUs.
- Demand for standard x86 CPU servers is still quite weak and is going through a major inventory adjustment – as we’ve heard elsewhere from Intel and HPE (although Broadcom carries very limited inventories).
- Generative AI is still largely a hyperscale business so edge and corporate AI spending is just picking up.
Portfolio view: Broadcom’s statements support our view that we are very early in the AI roll-out, and it’s still mainly hyperscale data centres that are deploying today. We expect this to be a multi-decade tailwind for infrastructure plays – not only because of the initial build of infrastructure, but also because we expect a continued refresh cycle as chips and networking become more efficient (and therefore cheaper to run) in the years ahead.
PC market still tough
- HP Inc (not owned) reported weak revenue numbers, and while there’s some hope the inventory correction in this market might be coming to an end (PC units were still down 28% – the same as Q1, but at least not getting worse), there are increasing signs of ASP weakness.
- Dell (not owned) beat on both the top and bottom lines. The CSG business performed slightly better than expected as the company saw some early signs of demand stabilisation in commercial PCs in its small and medium business segment. Demand is, however, still under pressure – as this quote from the CEO shows:
- “Consistent with our commentary in recent quarters, the demand environment remains challenged, and customers are staying cautious and deliberate in their IT spending. We continued to see demand softness across our major lines of business, all regions, all customer sizes, and most verticals.”
- As with HPE, Dell also started to see strong demand for their AI servers but highlighted that it’s still very early days.
Portfolio view: The PC market is still not one we want to be exposed to, with signs of ASP pressure compounding weakness in demand.