{"id":11788,"date":"2023-08-25T08:23:13","date_gmt":"2023-08-25T07:23:13","guid":{"rendered":"https:\/\/www.gpbullhound.com\/?post_type=article&p=11788"},"modified":"2023-08-25T13:10:18","modified_gmt":"2023-08-25T12:10:18","slug":"tech-thoughts-newsletter-25-august-2023","status":"publish","type":"article","link":"https:\/\/www.gpbullhound.com\/articles\/tech-thoughts-newsletter-25-august-2023\/","title":{"rendered":"Tech Thoughts Newsletter \u2013 25 August 2023."},"content":{"rendered":"\n

Market: Rising rates hurt markets and tech again – particularly towards the end of the week, with the market in wait-and-see mode ahead of Jerome Powell's speech on Friday.

Portfolio: We made no major changes to the portfolio this week.

The biggest thing to talk about in tech this week is Nvidia's (owned) results – another blowout quarter exceeding market expectations, and guiding significantly ahead, with management calling out "tremendous demand" for Nvidia's GPUs for AI platforms.

Of course, the news on demand wasn't unexpected. We've known Nvidia is currently supply-gated – we heard from many companies through Q2 earnings calls that they were waiting on supply of Nvidia GPUs, which are currently sold out. The cloud hyperscalers have all announced huge AI-driven capex plans, enterprise software businesses have all added AI capabilities to their offerings, and every CEO in every industry is trying in some way to either build or gain access to Nvidia GPU capacity.

Our above-consensus view into the quarter (please ask us for our June research note) was focused on what we believed was supply upside coming from node shifts and freed-up capacity at TSMC. We think this, as well as better pricing, largely played out, and we see further upside as TSMC continues to build out advanced packaging (CoWoS) capacity.

The earnings call was overall very bullish on supply, demand visibility, and the view that we're still in the very early innings of this infrastructure shift.

Capacity will increase sequentially each quarter into next year – we think mainly from TSMC adding CoWoS capacity, but note as well that Nvidia recently announced its L40S GPU, which doesn't require advanced packaging and can likely address some unfulfilled demand and accelerate supply in the short term.

Margins also stood out. Pricing was a significant lever (we think particularly in China), with gross margin beating expectations. Strong top-line growth came with very little opex increase – in Q2, $6.8bn of incremental revenue year-on-year came with less than $100m of incremental opex – the beauty of Nvidia's fabless business model.
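As a rough back-of-the-envelope on that operating leverage (our own illustration, using only the two figures quoted above):

$$\frac{\text{incremental opex}}{\text{incremental revenue}} \approx \frac{\$0.1\text{bn}}{\$6.8\text{bn}} \approx 1.5\%$$

In other words, operating costs grew by less than 1.5 cents for every incremental dollar of revenue, so almost all of the incremental gross profit falls straight through to operating income.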

The longer-term question is: "Is this level of demand sustainable?"

CEO Jensen Huang framed the opportunity as $1trn of infrastructure spend ($250bn a year) that needs to shift to accelerated compute (to GPUs, to faster CPUs, to lower-latency, higher-throughput networking equipment). Set that against Nvidia's current revenue (its data centre business will be ~$13bn in revenue next quarter) and it's clear that we're still in the very early innings of this shift.
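A rough sense of scale, again using only the figures in that framing (our own arithmetic, not management's):

$$4 \times \$13\text{bn} \approx \$52\text{bn annualised data centre revenue} \quad \text{vs.} \quad \$250\text{bn a year of spend expected to shift}$$

Even at the newly guided run-rate, Nvidia's data centre revenue is a fraction of the annual spend pool Huang expects to move to accelerated compute – which is the sense in which "early innings" is meant.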

On the competitive moat, there is little doubt that Nvidia has the best chips to build on and the best ecosystem to support them. Nvidia has the advantage of its architecture and of CUDA – the software ecosystem that sits on top of Nvidia's hardware. Writing parallelisable software is much more difficult than writing for a CPU – you need to re-engineer everything from the chip to the systems to the system software. CUDA strips out a lot of the complexity of writing parallelised software and makes it much easier to implement programs that run on Nvidia GPUs. It's a large part of what has allowed Nvidia to integrate itself into most of the world's AI frameworks.
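For a flavour of what that abstraction looks like in practice, here is a minimal, illustrative CUDA C++ sketch (our own example, not from Nvidia's call): the developer writes one kernel describing the work for a single element, and the CUDA runtime handles spreading those threads across thousands of GPU cores.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// One thread per element: the parallelism is expressed once, in the kernel,
// and the CUDA runtime schedules it across the GPU's cores.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device-side buffers, plus copies of the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch: n threads in blocks of 256.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The developer never touches thread scheduling, memory hierarchies, or the specifics of the underlying silicon, and the same code runs across Nvidia's GPU generations – that portability is what the installed-base numbers below reflect.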

And it has an installed base – there are 4 million developers, 3,000 applications, 40 million CUDA downloads, and 15,000 start-ups built on Nvidia. Nvidia has become the de facto standard for software developers around GPUs and accelerated computing – that software and developer ecosystem is a very effective moat (you can make a parallel with Apple's business model).

What do Nvidia's results mean for the rest of the tech industry?

Fundamentally, this set of results gives us more conviction that the AI infrastructure build is an area of the semis market you want to be exposed to, with a long runway of growth and supply constraints that will likely continue to support healthy pricing in the mid-term.

Nvidia's position as the leader in GPUs is clear, but this will also benefit players across the AI value chain. We've commented before that the extent to which Nvidia is currently supply-constrained is very helpful for AMD's (which we own) entry into the GPU space, and no customer will want to be entirely tied to one powerful supplier (it's no secret that Nvidia chips are very expensive). TSMC makes both Nvidia's and AMD's chips; the providers of the networking infrastructure around them (we own Cisco and Arista Networks) and the semicap equipment makers will ultimately benefit too – these chips will all be made on advanced nodes.

On to the rest of the results:

Best-in-class software showing growth and profitability