NVIDIA’s jump and Microsoft Copilot everywhere

by Vested Team
May 29, 2023
9 min read

In a previous edition, we talked about the reshuffling of the AI value chain. There were two key takeaways from that analysis:

  • There is still a lack of credible competitive alternatives to NVIDIA’s GPUs. For the most part, AI hardware is still monopolized by NVIDIA (with roughly 95% market share).
  • AI features will be woven into existing software products. As such, the incumbents, who have massive distribution, may have the edge.

This week, we follow up by discussing how NVIDIA’s monopoly translates to sales and profitability growth, and how Microsoft will use its distribution advantage to infuse AI everywhere.

NVIDIA’s strong quarterly earnings

NVIDIA showed how powerful it is to be the only one selling pickaxes in a gold rush. The company announced its latest earnings last week, and it was one for the ages. 

At a high level, the company generated $7.2 billion in quarterly revenue, down 13% year-over-year but up 19% quarter-over-quarter, while maintaining roughly 65% gross margins. Taken on its own, this past quarter’s report is actually not that strong. While the company appears to be returning to growth after four quarters of sequential decline from the peak in the March 2022 quarter, most of its businesses are still declining on a year-over-year basis (see Figure 1 below for the quarterly revenue trend, broken down by market platform). But the segment investors are paying the most attention to, the Data Center segment, is accelerating.


Figure 1: NVIDIA’s quarterly revenue trend by market platform

Here are two key highlights for the two largest revenue-generating platforms:

  • The Gaming segment generated $2.2 billion in revenue (or 31% of total revenue), a 38% decline compared to the same time last year. The slowdown in the PC market and high inventory levels from previous quarters are hurting this segment.
  • The Data Center segment generated $4.2 billion in revenue (or 60% of total revenue), up 14% year-over-year. This is impressive considering the tough comparison from the previous year.

As we mentioned above, it is the growth of the Data Center segment that has investors excited about NVIDIA. As the primary seller of hardware for AI acceleration, NVIDIA has a near monopoly on AI workloads for both training and inference. And as large enterprises and startups build AI features, the large cloud providers are racing to add more GPU capacity. This explosion of demand has NVIDIA projecting a jump of more than 64% in sales, year-over-year, for the current quarter (which is impressive considering this is a projection for sales within the current 90-day window!). The upgraded guidance sparked a rally of more than 25% in after-hours trading and ignited a rally in other AI-adjacent companies.

Here’s a summary of the outlook for the upcoming quarter:

  • Revenue is expected to be $11 billion, plus or minus 2% (far above the Wall Street consensus of $7.2 billion for the upcoming quarter).
  • GAAP and non-GAAP gross margins are expected to be 68.6% and 70.0%, respectively (up from roughly 65% in the current quarter), plus or minus 50 basis points.
  • Capital expenditures are expected to be approximately $300 million to $350 million (a small amount considering the size of its revenue). 
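
As a quick sanity check, here is a minimal Python sketch that reruns the arithmetic on the figures cited above (reported revenue of $7.2 billion, guidance of $11 billion, and a Street consensus of $7.2 billion); it introduces no new data.

```python
# Back-of-the-envelope check on NVIDIA's guidance, using only the figures cited above.
reported_revenue = 7.2   # $B, revenue reported for the just-finished quarter
guided_revenue = 11.0    # $B, midpoint of guidance for the upcoming quarter
consensus = 7.2          # $B, Wall Street consensus for the upcoming quarter

sequential_growth = guided_revenue / reported_revenue - 1
beat_vs_consensus = guided_revenue / consensus - 1

print(f"Implied sequential growth: {sequential_growth:.0%}")  # ~53%
print(f"Guidance above consensus:  {beat_vs_consensus:.0%}")  # ~53%
```

The 64%+ figure quoted earlier is the comparison against the year-ago quarter; sequentially, the implied jump is roughly 53%, and the guidance sits about 53% above consensus.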

One must wonder: if the business is so valuable, in a field that is growing rapidly, is this defensible? How long before NVIDIA’s moat is breached?

Full-stack AI hardware

Although we referred to NVIDIA’s Data Center product as GPUs, in reality the company has vertically integrated its hardware and software solutions to accelerate AI workloads, something it calls accelerated computing. Here is a quote from Jensen Huang, NVIDIA’s President and CEO, from the earnings call (emphasis ours throughout):

“You have to engineer all of the software and all the libraries and all the algorithms, integrate them into and optimize the frameworks and optimize it for the architecture of not just one chip but the architecture of an entire data center all the way into the frameworks, all the way into the models. And the amount of engineering and distributed computing — fundamental computer science work is really quite extraordinary. It is the hardest computing as we know. And so, number one, it’s a full stack challenge, and you have to optimize it across the whole thing and across just a mind-blowing number of stacks.

We have 400 acceleration libraries. As you know, the amount of libraries and frameworks that we accelerate is pretty mind-blowing. The second part is that generative AI is a large-scale problem and it’s a data center scale problem. It’s another way of thinking that the computer is the data center or the data center is the computer.” — Jensen Huang, CEO of NVIDIA

Because the training and inference workloads of AI tasks require a massive amount of data movement and processing (both pre- and post-processing), the hardware and software stack has expanded beyond the GPU chip to include memory, the CPU, the DPU (Data Processing Unit), and the interconnect. This means that AI hardware has expanded from a single GPU device to a larger module within the data center. Here is Jensen Huang again, in his own words:

“It’s not the chip, it’s the data center. And it’s never happened like this before. And in this particular environment, your networking operating system, your distributed computing engines, your understanding of the architecture of the networking gear, the switches, and the computing systems, the computing fabric, that entire system is your computer. And that’s what you’re trying to operate.” — Jensen Huang, CEO of NVIDIA

And because NVIDIA is vertically integrated, it can build these modular products in-house and sell them as a complete solution, allowing customers to get up and running much faster:

“Some of the largest supercomputers in the world were installed about 1.5 years ago, and now they’re coming online. And so, it’s not unheard of to see a delivery to operations of about a year. Our delivery to operations is measured in weeks. And that’s — we’ve taken data centers and supercomputers, and we’ve turned it into products.” — Jensen Huang, CEO of NVIDIA

NVIDIA is not in the business of selling GPUs anymore. It is in the business of selling AI solutions.

Your AI opportunity is NVIDIA’s margin

There’s a famous Jeff Bezos aphorism: “Your margin is my opportunity.” In the current AI-hyped environment, NVIDIA is turning the aphorism on its head. As everyone, from pre-revenue startups to mega-cap tech companies, pursues AI opportunities and embeds their products with AI features, NVIDIA is transforming that opportunity into fat margins. With no competition in sight and very strong demand over the next 1-2 years, NVIDIA has pricing power. Its latest H100 cards are about 2x more expensive than the previous-generation A100. This shows up in the company’s gross margins (Figure 2 below).


Figure 2: NVIDIA’s quarterly gross margin trend since 2010

But of course, the large cloud providers are incentivized to replace NVIDIA and build their own hardware stacks. OpenAI announced Triton, an open-source GPU programming platform, to compete with NVIDIA’s CUDA software. There’s a rumor that Microsoft is helping AMD catch up to NVIDIA’s AI capabilities while, at the same time, developing its own in-house custom silicon (code-named Athena). Not to be outdone, Meta, too, is developing an AI chip in-house.
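
For a flavor of what an open-source GPU programming platform like Triton offers, below is a minimal, illustrative kernel that adds two vectors on the GPU. It is a toy sketch (it assumes a CUDA-capable GPU with PyTorch and the triton package installed), not production code, but it shows the Python-level programming model that positions Triton as an alternative to writing CUDA C++ directly.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds accesses
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    # Launch a 1D grid with enough blocks to cover every element.
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out

x = torch.rand(10_000, device="cuda")
y = torch.rand(10_000, device="cuda")
print(torch.allclose(add(x, y), x + y))  # True
```

Writing one kernel like this is the easy part, though; as the Jensen Huang quotes above suggest, matching CUDA’s ecosystem of hundreds of optimized libraries is the hard part.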

Replacing NVIDIA is easier said than done. Amongst all the tech giants, Google has the most capability in building custom AI chips (its TPU is in its 4th generation). But by many accounts, Google is still one of NVIDIA’s largest customers, and its GPU deployment is larger than its deployment of in-house TPU chips. There are two key reasons for this:

  • First, NVIDIA’s products still offer the best flexibility, customizability, and generalizability. Custom solutions are typically ASIC-based, which means they trade off generalizability for speed and lower power. While that is fine if the models are fixed (they are not; new developments in model architecture are still happening), customers might be concerned about becoming vendor-locked and would prefer not to over-optimize for TPUs, which are a Google-only solution.
  • Second, enterprises are still developing their AI models using NVIDIA’s software stack. This means that, to train and run inference, NVIDIA’s hardware is still preferable from a performance perspective.

As a result, cloud providers – Google, Microsoft, and AWS – have to continue purchasing NVIDIA solutions and making them available to their customers. The software layer creates a network effect that boosts sales of the hardware. This deepens NVIDIA’s moat.

But alas, no moat lasts forever – we shall see how long NVIDIA’s lasts.

Note: Astute readers might be thinking that, rather than taking a first-derivative approach, one can take a second-derivative approach: rather than investing in the sellers of pickaxes during a gold rush, one can invest in the company that supplies the sellers of pickaxes (in this case, TSMC, which makes NVIDIA’s chips). Be mindful that TSMC has significant exposure to the slumping PC and mobile markets and to trailing-edge chip nodes, which are irrelevant to AI chips; this is why, earlier this year, TSMC guided for lower revenue growth in Q2 2023. That said, TSMC generates roughly 50% of its revenue from the 5nm and 7nm nodes, the nodes that will be relevant for AI chips.

How Microsoft will infuse AI everywhere

When there’s a new paradigm in technology, it is typically the case that people apply business models from the previous paradigm. When the world wide web was in its infancy, its business model was not yet established; people did not even know if it was legal to put ads on the internet. Because the new medium was predominantly text-based at the time, people looked to other text-based media for inspiration. That is how the first banner advertisement on the internet was born, inspired by magazine ads.

It is with this same approach that companies are deploying generative AI capabilities. In time, perhaps, generative AI will give rise to new paradigms and business models, but for now it will be applied to existing products. A few months ago, we discussed how LLMs are the next evolution in the human-computer interface, allowing us to interact with computers in natural language. At its developer conference last week, Microsoft expanded this vision across applications, showcasing demo after demo.

Here are a few highlights: 

  • A plugin ecosystem is coming to the Copilot environment, imbuing the AI with narrow functionality. In the demo, users of Microsoft Office can add specific legal clauses using a plugin from Thomson Reuters, or create and assign JIRA tickets as easily as writing a message (I can hear the cheers of Product Managers around the world at this announcement!).
  • Microsoft Copilot will come to the Edge browser. The chat tool will live as a persistent window on the side of the browser, enabling integration between the content of the webpage and the Microsoft Office apps (Excel, Outlook, Word, etc.).
  • Windows Copilot will allow users to interact with Windows in natural language. Users can ask it questions, manipulate documents, and change Windows settings, all in natural language. This means using Windows can potentially be 10x easier, which in turn can unlock Windows’ power capabilities for more people.
  • Azure AI Studio is a full-lifecycle tool to build, evaluate, and deploy AI models (including open-source models), and takes advantage of Microsoft’s safety and provenance tools to allow companies to launch AI models safely (making sure your AI behaves nicely is really hard, and mistakes can carry a large cost).
  • Microsoft Fabric is a unified data architecture that provides storage, pipelines, and analysis capabilities, allowing companies to store and analyze data, as well as easily train AI models.

The above are just a few highlights from the full event, which you can find here. Viewed in totality, you can see that Microsoft’s strategy is to press its advantage in distribution. By infusing Windows with AI, it rapidly diffuses its AI capabilities to 1 billion monthly active users.

Another second-order implication of the Microsoft Copilot-everywhere strategy is that the user does not have to leave the chat box to engage with third-party applications. Let’s take the JIRA example from above to illustrate what we mean:

  • Currently, if you want to create a new ticket in JIRA, you have to: go to JIRA’s web application → wait (yes, it’s slow) → go to the correct project → create a new ticket → then fill in the details. Overall, a lot of clicks.
  • In the not-too-distant future (a Windows 11 Copilot preview is coming in June 2023), you can chat with the JIRA plugin via Teams or the Windows Copilot chat: describe the issue in natural language and invoke the plugin to create the ticket (see the sketch after this list).
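
To make that contrast concrete, here is a rough sketch of the kind of call a ticket-creation plugin might make once the model has extracted structured fields from your natural-language message. The endpoint, payload fields, and create_ticket helper are hypothetical illustrations, not the actual JIRA or Copilot plugin API.

```python
import requests

def create_ticket(project: str, summary: str, description: str, assignee: str) -> str:
    """Hypothetical sketch: turn fields extracted by the LLM into a ticket via a REST call."""
    payload = {
        "project": project,
        "summary": summary,
        "description": description,
        "assignee": assignee,
        "issue_type": "Task",
    }
    # Illustrative endpoint only; a real plugin would call the vendor's actual API
    # with proper authentication and error handling.
    response = requests.post("https://tickets.example.com/api/issues", json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["key"]

# Roughly what Copilot would do after parsing a message like:
# "Create a ticket for the login-page bug and assign it to Priya."
ticket_key = create_ticket(
    project="WEB",
    summary="Intermittent 500 errors on the login page",
    description="Users report intermittent 500 errors when logging in via the web app.",
    assignee="priya",
)
print(f"Created {ticket_key}")
```

The point is not the specific API but the shape of the interaction: the user supplies intent in natural language, and the plugin turns it into a single structured call, replacing the click-heavy flow above.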

The user experience with Copilot will be faster and easier. In the words of a Microsoft executive, “it does not disturb your workflow.” Or, in the words of a neutral business analyst, Windows becomes the center of the user experience again, subsuming third-party applications into potentially undifferentiated plugins. In the example above, JIRA users no longer have to log into JIRA’s web application. In the long term, this can weaken third parties’ relationships with the end consumer.
