{"id":14460,"date":"2023-11-17T14:38:56","date_gmt":"2023-11-17T13:38:56","guid":{"rendered":"https:\/\/www.gpbullhound.com\/?post_type=article&p=14460"},"modified":"2023-11-17T15:57:46","modified_gmt":"2023-11-17T14:57:46","slug":"tech-thoughts-newsletter-17-november-2023","status":"publish","type":"article","link":"https:\/\/www.gpbullhound.com\/articles\/tech-thoughts-newsletter-17-november-2023\/","title":{"rendered":"Tech Thoughts Newsletter \u2013 17 November 2023."},"content":{"rendered":"\n
Market: It was all about inflation again this week, with better CPI numbers driving the market higher early in the week. Worth noting Walmart’s comments on its earnings call, as the biggest retailer in the world: "In the US, we may be managing through a period of deflation in the months to come. And while that would put more unit pressure on us, we welcome it because it’s better for our customers." Might it be true? Was the Fed listening in? Good for tech sentiment if so.

Portfolio: We made no major changes to the portfolio this week.

First up, Nvidia news (ahead of its results next week): In October, we noted that Nvidia’s latest investor presentation included a slide detailing its latest roadmap and a move to a one-year "cadence", with its H200 chip (the successor to its current, sold-out H100 AI chip) coming in early 2024 and the subsequent B100 following later in the year.

This week, Nvidia officially launched its H200, the first time we’d seen the chip’s full specs. The highlight is the inclusion of HBM3e (high-bandwidth memory). We’ve spoken before about the importance of memory in the AI world, given the need to store and retrieve large amounts of data, and the H200 is further evidence of that, with the extra memory driving the performance and efficiency gains.

As a bit of background, AMD originally developed HBM for gaming: it found that scaling memory to match rising GPU performance required more and more power to be diverted away from the GPU, hurting performance, so HBM was designed as a new class of memory chip with low power consumption. What was a problem for gaming has become a much bigger issue in datacentres and AI, given the amount of data that needs to be stored and retrieved, at speed, in both training and inference.

That’s also what makes advanced packaging (CoWoS, chip-on-wafer-on-substrate) so important. HBM and CoWoS go hand in hand: the packaging enables short, dense connections between the logic and the HBM (far denser than is possible on a PCB), which in turn drives up GPU utilisation and performance.

The H100 includes 96GB of HBM3 memory, whereas the H200 carries 141GB of faster HBM3e memory. Beyond that, the chips are very similar (both are built on TSMC’s 5nm node), which makes the performance improvements, driven by the memory content, all the more impressive.
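As a rough illustration of why more and faster HBM matters, the sketch below works through the memory maths for large-model inference: generating each token means streaming the model weights out of HBM, so capacity determines whether a model fits on a single accelerator at all, and bandwidth caps how fast tokens can be produced. The model size, precision and chip figures are illustrative assumptions for this note, not quoted specifications.

```python
# Rough, illustrative sketch of why HBM capacity and bandwidth matter for AI accelerators.
# All figures below are assumptions for illustration, not official chip specifications.

def max_tokens_per_second(model_params_bn: float, bytes_per_param: float,
                          hbm_bandwidth_tb_s: float) -> float:
    """Upper bound on decode throughput when generating each token requires
    streaming the full set of weights from HBM (the memory-bound regime)."""
    model_bytes = model_params_bn * 1e9 * bytes_per_param
    bandwidth_bytes_per_s = hbm_bandwidth_tb_s * 1e12
    return bandwidth_bytes_per_s / model_bytes

def fits_on_one_chip(model_params_bn: float, bytes_per_param: float,
                     hbm_capacity_gb: float) -> bool:
    """Whether the weights alone fit within a single accelerator's HBM."""
    return model_params_bn * 1e9 * bytes_per_param <= hbm_capacity_gb * 1e9

# Hypothetical 70bn-parameter model held in 16-bit precision (2 bytes per parameter).
MODEL_BN, BYTES_PER_PARAM = 70, 2

# Placeholder chip profiles in the spirit of the H100 -> H200 step (more, faster HBM).
chips = {
    "current generation": {"hbm_gb": 96, "bandwidth_tb_s": 3.3},
    "next generation": {"hbm_gb": 141, "bandwidth_tb_s": 4.8},
}

for name, spec in chips.items():
    fits = fits_on_one_chip(MODEL_BN, BYTES_PER_PARAM, spec["hbm_gb"])
    tps = max_tokens_per_second(MODEL_BN, BYTES_PER_PARAM, spec["bandwidth_tb_s"])
    print(f"{name}: weights fit on one chip = {fits}, max tokens/s per stream ~ {tps:.0f}")
```

On these assumed numbers, a 70bn-parameter model held in 16-bit precision (roughly 140GB of weights) does not fit in 96GB of HBM but does fit in 141GB, and the higher bandwidth lifts the ceiling on tokens generated per second – the mechanism behind the memory-driven gains described above.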
It’s important because

Onto more newsflow and results:

Microsoft’s infrastructure build-out in focus at Ignite

Portfolio view: We own Microsoft – outside of the chip companies benefiting directly downstream of capex, it will be the first company to see meaningful revenue directly from AI, thanks to Copilot. Its competitive moat – already high thanks to sticky B2B customers in its Office software business – is sustained in the move to AI. We commented last week on Microsoft’s advantage in optimising GPU utilisation across its full stack of software products. That scale advantage also means it can optimise its chip stack, including building its own chips to further reduce the cost of compute. It’s a perfect flywheel: scale that enables it to offer AI compute at lower prices, driving more innovation on top of its infrastructure and more AI use cases (a la OpenAI).

But there is no change to our view on Nvidia and AMD (both owned) – as we said at the start, they dominate a constrained supply chain (and it is worth noting that Microsoft’s chips will also be built at TSMC’s 5nm node).

China and Nvidia chips

Portfolio view: There is a potential risk that there has been a large pull-forward in demand from China that isn’t sustainable. Nvidia knew more restrictions were coming, and we suspect it put a lot of China demand to the top of the queue and front-loaded those orders to get them through. What makes us relatively comfortable is that (1) we know – from Dell, Super Micro and others – that there is still a very significant backlog of Western demand, and (2) the H20 chip likely sustains that pattern of behaviour (order lots in case it gets restricted too). It’s something we need to keep watching in the commentary, but for now we think Western demand (for H100 and now H200) takes us to Q3/Q4 next year, which then takes us to the B100 refresh cycle. Given the political tail risk, which makes the downside (to both the multiple and earnings) difficult to frame, we don’t own any Chinese companies.

EV shift and semiconductor content increases playing out – all about China

Portfolio view: Auto, along with AI, is a bright spot in semis end demand, with the structural increase in power semis in the move to EV. We continue to think the competitive environment for the global auto manufacturers is challenging – Tesla’s price reductions speak to that – and we don’t own any auto OEMs (manufacturers). But there are only a handful of auto semiconductor suppliers, whose parts are designed in over long cycles and which can maintain pricing power – that makes it an attractive place to be in the value chain.

"Cost of money" and billings vs revenue

Portfolio view: Remove the billings guide (which doesn’t impact the P&L or cash) and the company is still the standout cyber stock on growth and profitability, and still a stock we want to own. For now, we believe them, though this shows they’re not immune to some of the broader spending trends we’ve seen. Indeed, this is something to watch for in the broader software space (where the same billings dynamic exists).

AI lifting network requirements but shorter-term enterprise spend caution?

Portfolio view: We own Cisco and Arista and see AI increasing networking requirements in the back end. In the same way that we expect hyperscalers not to want to be tied entirely to Nvidia chips, they will also want an alternative to InfiniBand. We therefore expect Ethernet to be adopted for some AI workloads, which will benefit both Cisco and Arista.

Semicap resilience, China export restrictions a headache

Portfolio view: A stellar set of results, but the Reuters article is undoubtedly unhelpful (note that this isn’t future revenue at risk, given SMIC is already well out of the numbers). We suspect it relates to shipments dating back to when SMIC was first added to the restricted list in 2020.
Semicap equipment (we own Applied Materials, Lam Research, ASML and KLA) remains a key exposure in the portfolio for us. There are multiple drivers that will support revenue growth out to 2030 – technology transitions, geopolitics, AI – and their business models and strong market positions in each of their process areas allow them to sustain strong returns and cash flows even in a relative downturn: Applied’s free cash flow doubled year over year in the quarter.

PC, smartphone and Android

Portfolio view: The smartphone and PC semis content stories have largely played out (there is still premiumisation, but nowhere near the level of 10-20 years ago). That means it’s all (mostly) about unit numbers, which is one of the reasons we don’t own ARM. As the October notebook shipments show, while it looks like inventory has cleared and the cycle is bottoming, the recovery might not be a straight line. We look for markets with a structural growth buffer around semis content. As we noted above, auto units were up 8% this year while Infineon grew its auto business by 26%, and even with auto units growing only 1% next year Infineon will still grow double-digit – the driver is increased semis content, which leaves you much less exposed to potentially volatile unit numbers.
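To make the content-versus-units arithmetic concrete, the short sketch below decomposes revenue growth into a unit effect and a content-and-pricing effect, using the figures quoted above (auto units up 8%, Infineon auto revenue up 26%). The decomposition is our own back-of-the-envelope illustration, not company-reported data, and the next-year line is illustrative rather than a forecast.

```python
# Back-of-the-envelope decomposition of auto semiconductor revenue growth into a
# unit effect and a content-and-pricing effect:
#   (1 + revenue growth) = (1 + unit growth) * (1 + content/pricing growth)

def implied_content_growth(revenue_growth: float, unit_growth: float) -> float:
    """Content/pricing growth implied by observed revenue and unit growth."""
    return (1 + revenue_growth) / (1 + unit_growth) - 1

# Figures quoted in the note: auto units up ~8% this year, Infineon auto revenue up ~26%.
content_growth = implied_content_growth(0.26, 0.08)
print(f"Implied content/pricing growth this year: {content_growth:.1%}")  # ~16.7%

# Illustration only (not a forecast): if units grow just 1% next year and a similar
# content tailwind persists, revenue growth would still be double-digit.
next_year_revenue_growth = (1 + 0.01) * (1 + content_growth) - 1
print(f"Illustrative next-year revenue growth: {next_year_revenue_growth:.1%}")  # ~17.8%
```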
Finally, Google DoJ antitrust trial – who’s being sued again? Cookies and competition

Portfolio view: We stay away from politics. Any remedies implemented in the tech sector so far have had minimal tangible impact. We keep watching, but for now nothing materially changes the status quo.

For enquiries, please contact:
Inge Heydorn, Partner, at inge.heydorn@gpbullhound.com
Jenny Hardy, Portfolio Manager, at jenny.hardy@gpbullhound.com
Nejla-Selma Salkovic, Analyst, at nejla-selma.salkovic@gpbullhound.com
About GP Bullhound

GP Bullhound is a leading technology advisory and investment firm, providing transaction advice and capital to the world’s best entrepreneurs and founders. Founded in 1999 in London and Menlo Park, the firm today has 14 offices spanning Europe, the US and Asia.