{"id":14460,"date":"2023-11-17T14:38:56","date_gmt":"2023-11-17T13:38:56","guid":{"rendered":"https:\/\/www.gpbullhound.com\/?post_type=article&p=14460"},"modified":"2023-11-17T15:57:46","modified_gmt":"2023-11-17T14:57:46","slug":"tech-thoughts-newsletter-17-november-2023","status":"publish","type":"article","link":"https:\/\/www.gpbullhound.com\/articles\/tech-thoughts-newsletter-17-november-2023\/","title":{"rendered":"Tech Thoughts Newsletter \u2013 17 November 2023."},"content":{"rendered":"\n

Market: It was all about inflation again this week, with better CPI numbers driving the market higher early in the week. Worth noting Walmart’s comments on its earnings call, as the biggest retailer in the world: “In the US, we may be managing through a period of deflation in the months to come. And while that would put more unit pressure on us, we welcome it because it’s better for our customers.” Might it be true? Was the Fed listening in? Good for tech sentiment if so.

Portfolio: We made no major changes to the portfolio this week.

First up, Nvidia news (ahead of its results next week): In October, we noted that Nvidia’s latest investor presentation included a slide detailing its roadmap and a move to a one-year “cadence”, with its H200 chip (the successor to its current, sold-out H100 AI chip) coming in early 2024 and the subsequent B100 following later in the year.

\"\"
Source: Nvidia<\/figcaption><\/figure>\n\n\n\n

This week, Nvidia officially launched its H200, the first time we’d seen the chip’s full specs. The highlight is the inclusion of HBM3e (high-bandwidth memory). We’ve spoken before about the importance of memory in the AI world, given the need to store and retrieve large amounts of data. The H200 is further evidence of that (see the performance and efficiency benefits of the extra memory below).

As a bit of background, AMD (together with SK Hynix) originally developed HBM for gaming, after finding that scaling memory to match rising GPU performance diverted more and more power away from the GPU, hurting performance. HBM was designed as a new class of memory chip with low power consumption. What was a problem for gaming has become a much bigger issue in datacentres and AI, given the volume of data that must be stored and retrieved at speed in both training and inference.

That’s also what makes advanced packaging (CoWoS – chip-on-wafer-on-substrate) so important. HBM and CoWoS go hand in hand: CoWoS enables short, dense connections between the logic die and the HBM stacks (far denser than is possible on a PCB), in turn driving up GPU utilisation and performance.

The H100 includes 96GB of HBM3 memory; the H200 carries 141GB of faster HBM3e. Beyond that, the chips are very similar (both are built on TSMC’s 5nm-class node), which makes the performance improvements – driven by the extra memory – all the more impressive.
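A quick back-of-envelope on the capacity uplift quoted above (capacity figures only; the full Nvidia spec sheet isn’t reproduced here):

```python
# Memory capacity comparison, using the figures quoted in the text.
h100_hbm_gb = 96   # H100: HBM3 capacity
h200_hbm_gb = 141  # H200: HBM3e capacity

uplift = h200_hbm_gb / h100_hbm_gb - 1
print(f"H200 memory capacity uplift vs H100: {uplift:.0%}")  # ~47%
```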

\"\"
Source: Nvidia<\/figcaption><\/figure>\n\n\n\n

It’s important for three reasons:

1. Nvidia has $11.15bn of purchase commitments (and AMD $5bn), relating primarily to CoWoS and HBM, which have become the main bottlenecks in the AI supply chain. That speaks to a significant barrier to entry for both Nvidia and AMD versus the competition – effectively securing the bulk of TSMC’s CoWoS capacity. We own both Nvidia and AMD – and TSMC (which has effectively had its CoWoS capacity expansion underwritten).

2. It means that AMD’s strategy of stuffing its MI300X with HBM3 (192GB in total) looks credible – though we haven’t yet seen proper performance comparisons. While we expect Nvidia to remain the dominant GPU supplier in training, we’ve said before that there are many reasons why AMD’s MI300X can be a credible alternative, and we expect sales to be materially above their guidance next year.

3. Capacity expansion for HBM at the memory players (SK Hynix, Samsung, Micron) becomes even more important – a significant driver of memory capacity expansion and capex spend on semicap equipment. Relatedly, there were reports late last week that SK Hynix plans to increase its FY24 capex by 50% yr/yr – we think DRAM capex has to be up in 2024, given such low spending levels in 2023. In September, Micron also revised its capex to be up yr/yr (August 2024 year-end), driven by a doubling of HBM capacity. While we don’t own any memory players in the portfolio, we do own the semicap equipment players whose tools will be bought to build out this capacity (see also Applied Materials’ results this week below).

Onto more newsflow and results:

Microsoft’s infrastructure build-out in focus at Ignite

• It’s worth following the Nvidia H200 commentary with the main news from Microsoft’s Ignite conference this week. Nvidia CEO Jensen Huang was on stage alongside Microsoft CEO Satya Nadella (I’ll come to your investor day if you come to mine).

• We’ve commented before on the vast amounts Microsoft is spending on its infrastructure build-out – its data centre capex will be >$30bn this year (fun fact: the Apollo space programme cost approximately $25bn in nominal dollars).

• The most significant slice of this is going to Nvidia H100 GPUs, but Microsoft also announced that it will build its own AI chips, joining Google (TPU) and Amazon (Trainium and Inferentia), both hyperscalers already offering their own ASICs to customers for AI workloads.

• There are two chips: Cobalt, a CPU, and Maia, an ASIC. ASICs (Application-Specific Integrated Circuits) are different from GPUs: they are processors designed for a particular use rather than for general-purpose workloads. We’ve discussed before that GPUs are designed for large-scale parallel processing across different workloads (effectively performing the same calculation over and over again) and are very flexible (which is what makes CUDA on top of them so important, because that flexibility also makes parallelised software quite complex to write). ASICs are built for a specific application – typically higher performance like-for-like, but with a more limited scope of workloads that can run on them.

• It makes sense for the hyperscalers to move to ASICs for specific workloads, where they can run those workloads more economically and also reduce their dependence on Nvidia chips.

• Is it bad news for Nvidia? No. We’ve said that we expect more chip winners in AI, given that no player wants to be tied to one dominant supplier. Given their high utilisation, large installed bases, and specific use cases, the hyperscalers are motivated to build their own chips to handle specific workloads.

• It is also worth pointing out that, in addition to the partnership with Nvidia, Microsoft announced an expanded partnership with AMD and is deploying the MI300X chip – we think Microsoft could account for over half of AMD’s MI300X revenue next year.

Portfolio view: We own Microsoft – outside of the chip companies benefiting directly downstream of capex, it will be the first company to see meaningful revenue directly from AI, thanks to Copilot. Its competitive moat – already high thanks to sticky B2B customers in its Office software business – is sustained in the move to AI. We commented last week on Microsoft’s advantage in optimising GPU utilisation across its full stack of software products. That scale advantage also means it can optimise its chip stack, including building its own chips to further reduce the cost of compute. It’s a flywheel: scale enables it to offer AI compute at lower prices, driving more innovation on top of its infrastructure and more AI use cases (à la OpenAI).

But there is no change to our view on Nvidia and AMD (both owned) – as we said at the start, they dominate a constrained supply chain (and it’s worth noting that Microsoft’s chips will also be built at TSMC on 5nm).

China and Nvidia chips

• Following on from our letter last week, which covered Nvidia’s H20 chip release specifically for the Chinese market, Tencent was very explicit on its earnings call about its stockpiling of Nvidia chips:

• “In terms of the chip situation, right now, we actually have one of the largest inventory of AI chips in China among all the players. And one of the key things that we have done was we were the first to put an order for H800, and that allow us to have a pretty good inventory of H800 chips. So we have enough chips to continue our development of Hunyuan for at least a couple more generations. And the ban does not really affect the development of Hunyuan and our AI capability in the near future.”

• Alibaba, meanwhile, announced that it would no longer spin off its cloud business given “uncertainties created by recent US export restrictions on advanced computing chips”. To be clear, that doesn’t mean it hasn’t stockpiled lots of chips too, just that the IPO filing’s “risks” section was likely perceived as too long for any investor to accept.

• Lenovo spoke to using the new Nvidia China chips within its products.

• Biden and Xi met this week at the APEC summit, though semis didn’t feature in any of the official scripts. TSMC founder Morris Chang, attending as Taiwan’s representative, is meeting Biden too. It could be quite an interesting semiconductor discussion if Chang and Xi meet in the corridor.

Portfolio view: There is a potential risk that there has been a large pull-forward in demand from China that isn’t sustainable – Nvidia knew more restrictions were coming, and we suspect it put a lot of China demand at the top of the queue and front-loaded those orders to get them through. What makes us relatively comfortable is that (1) we know – from Dell/Super Micro etc. – that there is still a very significant backlog of Western demand, and (2) the H20 chip likely sustains that pattern of behaviour (order lots in case it gets restricted too). It’s something we need to keep watching in the commentary, but for now we think Western demand (for the H100 and now H200) takes us to Q3/Q4 next year, which then takes us to the B100 refresh cycle. Given the political tail risk – which makes the downside (to both the multiple and earnings) difficult to frame – we don’t own any Chinese companies.

EV shift and semiconductor content increases playing out – all about China

• Infineon (owned) issued an overall “OK” set of results. Still, investor focus is all about its auto business, which continued to show resilience and is guided for double-digit growth again next year (after 26% growth this year).

• There has been a (we think unwarranted) bear case around Infineon’s auto business this year, with some investors arguing that demand would fall and pricing with it. This set of results is another data point showing that it is not happening.

• We think one of the issues in the market is too much focus on the global OEMs (US/European/Japanese), who are all reporting higher EV inventory and struggling sales (ex-Tesla). What the market needs to look at more closely is China. We spoke last week about record-high October EV sales and the higher-end models from Li Auto likely to challenge BMW et al.

• More China EV news this week as Xiaomi’s SU7 EV specs and pictures were leaked. Again, this is very much targeting the high end (it compares with the Tesla Model S and Porsche Taycan on performance) – we said last week that the share shift away from international OEMs to Chinese ones – not just domestically in China – is likely to accelerate.

• As Infineon’s CEO said on the conference call: “We grow with the winners and also don’t forget the Chinese OEMs want to go into export and especially in the export situation, quality and reliability are key for them.” We don’t know yet, but there is a reasonable probability that Infineon might be supplying Xiaomi with SiC components.

• An interesting data point: Infineon’s content value in a high-end Chinese auto is more than €800 per car. Do the maths on that: across 3 million BYD autos, that is €2.4bn, versus current China auto revenue of what we think is ~€2bn. As the premiumisation story plays out in EV, the revenue growth potential for Infineon is significant. Auto OEMs are competing on content and features like the old smartphone world – a multi-year driver for the semiconductor companies supplying them (and unlike smartphones, these designs are for seven-year cycles).

• Infineon expects double-digit growth in its auto business next year, assuming roughly flat (~1%) car production. We speak below about the difficulty of investing in semis markets without a structural content buffer, which leaves you much more sensitive to volatile unit numbers. For Infineon, 8% unit growth in autos this year translated into 26% top-line growth.
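The content-per-car maths above can be sketched as a quick sanity check (all inputs are the newsletter’s estimates, not company-reported figures):

```python
# Back-of-envelope on Infineon's China EV content opportunity.
# All inputs are estimates from the text above, not company guidance.
content_per_car_eur = 800           # Infineon content in a high-end Chinese EV (>€800)
byd_units = 3_000_000               # ~3 million BYD autos
current_china_auto_rev_eur = 2.0e9  # ~€2bn estimated current China auto revenue

implied_revenue_eur = content_per_car_eur * byd_units
print(f"Implied revenue: €{implied_revenue_eur / 1e9:.1f}bn")  # €2.4bn
print(f"vs current China auto revenue: "
      f"{implied_revenue_eur / current_china_auto_rev_eur:.1f}x")  # 1.2x
```

The point of the comparison: BYD alone, at high-end content levels, would imply more revenue than Infineon’s entire current China auto business.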

Portfolio view: Auto, along with AI, is a bright spot in semis end demand, with the structural increase in power semis in the move to EV. We continue to think the competitive environment for the global auto manufacturers is challenging – Tesla’s price reductions speak to that – and we don’t own any auto OEMs (manufacturers). But there are only a handful of auto semiconductor suppliers; they are designed in over long cycles and can maintain pricing power, which makes it an attractive place to be in the value chain.

“Cost of money” and billings vs revenue

• Palo Alto (owned) beat the quarter on top line and profitability but disappointed with its billings guidance.

• On the call, management spoke to a higher cost of money resulting in more customers asking for deferred payment terms, discounting, and financing on longer (~3-year) deals. Effectively, customers that previously signed three-year contracts are now asking for discounts on those; instead of agreeing, Palo Alto is asking them to move to annual payments. That impacts billings (because deferred revenue is lower).

• The company argues this is only about giving customers flexibility: “You want to pay me, want me to do a three-year deal, you got to go finance it in benefits. I can do that, but I can say ‘just pay me on an annual basis, I’m okay.’ I’ll collect my money every year. If I go in that direction, my billing will change. It does not change anything in my pipeline, close rates or demand function. Those are my points. I think we’re going to keep having this debate, where you keep calling it guiding down on billings, I’m going to keep calling it flexibility, you want to keep calling it guideline downward billing, so I’ll keep telling it doesn’t change my numbers. So we agree that we will be saying that because I don’t – nothing has changed the prospects of Palo Alto of three months ago.”

• The big question is: do we believe them? From a demand perspective, Palo Alto has stood out in cyber security, benefiting from spend consolidation and outperforming the industry on growth (and market share) and profitability. The billings/revenue divergence is a dynamic we’ve seen in prior cycles (SAP had customers ask for terms around its maintenance revenue), and is something we’ve seen from peers – whom Palo Alto is still outperforming (Palo Alto guided for billings growth of 15-18% next quarter, while Fortinet’s billings will decline).

• Importantly, for a business fast becoming an incumbent, it continues to innovate and capture industry growth, building completely new products and markets. Its next-gen security (NGS) business – a little over three years old – is at ~$3bn in ARR and still growing 50% yr/yr. That speaks to demand being better than billings imply (noting too that NGS contracts are shorter-duration in nature).

Portfolio view: Strip out the billings guide (which doesn’t impact P&L or cash) and the company is still the standout cyber-security stock on growth and profitability, and still a stock we want to own. For now, we believe them, though this shows they’re not immune to some of the broader spending trends we’ve seen. Indeed, this is something to watch across the broader software space (where the same billings dynamic exists).

AI lifting network requirements, but shorter-term enterprise spend caution?

• Cisco (owned) reported a fine set of results, but forward guidance was much lower than expected, with orders declining significantly: down 20% yr/yr.

• Cisco has been shifting its focus to software, and its software business has shown decent growth: +13% yr/yr, with $24.5bn in ARR.

• The issue is in its product business: a better supply environment helped deliver orders to clients in the quarter just gone, but those orders are now being digested, with Cisco estimating that clients are carrying one to two quarters of inventory (effectively, clients are installing equipment and not currently ordering more).

• While Cisco is moving towards more of a software business model (something its Splunk acquisition will help with), the bulk of its business is still turns/orders. This means that when customers change their ordering patterns, Cisco feels it immediately.

• In short, the weaker macro combined with the normalisation of networking spend is impacting Cisco more than we expected.

• Longer term, we expect Cisco to benefit from increased demand for AI networking.

• The big debate this year has been the extent to which Nvidia’s InfiniBand (acquired as part of Mellanox) will become the de facto industry standard for AI workloads, or whether Ethernet (which Cisco sells) will be adopted for AI. The back-end network is important because that’s where much of the performance/power equation rests – more compute requires more network bandwidth and switching intensity, and so more demand for networking products: GPU clusters used to run AI chat workloads need about 3x more bandwidth than a traditional compute network today. Arista and Cisco are pushing Ethernet and related tech; Nvidia is pushing InfiniBand. Currently, we think InfiniBand is ~90% of the AI market. Still, we expect Ethernet to take an increasing share, which is part of our thesis for both Arista and Cisco (relatedly, we think the server racks for Microsoft’s Maia 100 dual-source Arista and Cisco Ethernet).

• Cisco management updated its AI networking commentary, with $1bn of AI orders (up from last quarter’s $0.5bn) that could be realised in FY25.

Portfolio view: We own Cisco and Arista and see AI increasing back-end networking requirements. In the same way that we expect hyperscalers not to want to be tied entirely to Nvidia chips, they will also want an alternative to InfiniBand. We therefore expect Ethernet to be adopted for some AI workloads, benefiting both Cisco and Arista.

Semicap resilience, China export restrictions a headache

• Applied Materials (owned) beat on top and bottom line, and guidance was better than consensus.

• The more important news, which sent shares down 6% last night, was a Reuters report (coincident with results) that the company is under investigation for evading export controls by shipping equipment to SMIC, apparently via a subsidiary in South Korea. It received a subpoena from the DoC last year, disclosed in its 10-K in October 2022 – it is not yet clear whether there is anything new in this.

• On results, management expects 2024 growth to be driven more by leading-edge logic (we’ve said before that we think TSMC needs to increase 5nm capacity given demand – that goes back to the H200 news at the start – and to build out 3nm and the coming gate-all-around (GAA) nodes).

• DRAM utilisation and pricing are improving. As we said above, AI requires more DRAM (HBM) capacity; DRAM capex could go up a lot next year.

• There is weakness in industrial and softness in auto. Applied says it is seeing lower utilisation here, which tallies with some of the industrial semis commentary and push-outs – again speaking to companies like TI/Microchip keeping utilisation low.

• The long-term growth equation remains: semis grow faster than GDP, and semicap faster than semis.

• Interestingly, in the context of the Reuters report, Applied says it does not expect the updated export rules to have any impact – it expects Chinese demand to remain strong, given China’s domestic manufacturing capacity is structurally below its share of semis consumption. We’ve said before that it makes a lot of strategic sense for China to build out meaningful capacity in trailing-edge chips.

• Relatedly, China import data showed Q3 semicap equipment imports rose 93% yr/yr.

Portfolio view: A stellar set of results, but the Reuters article is undoubtedly unhelpful (note that this isn’t future revenue at risk, given SMIC is well out of the numbers). We suspect it relates to shipments back when SMIC was first added to the restricted list in 2020.

Semicap equipment (we own Applied Materials, Lam Research, ASML and KLA) remains a key exposure in the portfolio for us. There are multiple growth drivers that will support revenue growth out to 2030 – technology transitions, geopolitics, AI – and their business models and strong market positions in each of their process areas allow them to sustain strong returns and cash flows even in a relative downturn: Applied’s FCF doubled yr/yr in the quarter.

PC, smartphone and Android