Meta, the AI company

by Parth Parikh
April 10, 2023
7 min read

Most of the discussion around AI has centered on the battle between Google and Microsoft (and, by extension, OpenAI), and the well-funded startups at their heels. Despite releasing numerous state-of-the-art AI models, Meta (formerly known as Facebook) is rarely considered an AI-first company by mainstream investors. Perhaps its ill-fated rebranding around the metaverse was simply too effective.

But in recent weeks, the company has been quietly retreating from the metaverse and doubling down on AI, calling it the company’s “single largest investment.” For those who have not been following Meta’s work in AI over the years, it has been expansive. Meta is a prolific contributor to the open-source community, especially for AI applications.

In today’s deep dive, let’s discuss Meta’s efforts in AI. But first, let’s briefly look at its share performance.

Meta’s year of efficiency is loved by investors

If you haven’t been paying attention, you might have missed that Meta’s share price increase was one of the critical drivers of the S&P 500 rally in Q1 2023. In the past three months, Meta’s share price has risen more than 65% (Figure 1).

Figure 1: Meta’s three-month share price performance. Source

Much of this rally is due to multiple expansion. Over the same period, Meta’s price-to-earnings ratio (P/E (LTM)) has gone up 110% (from ~12x to ~24x, roughly in line with the current valuation of the S&P 500), an indication that Meta’s business was undervalued for much of 2022 due to its fixation on metaverse expansion.

Figure 2: Meta’s three-month P/E (LTM) trend. The chart was taken from the new valuation chart feature we recently released on the Vested application. Source

The turnaround narrative for Meta started in November last year, when the company announced that it was laying off 11,000 people (at the time, a 13% reduction of its workforce). The share price responded positively: many investors had felt that Meta was not being disciplined with its investments, spending too heavily on the metaverse in the face of tougher competition from TikTok, privacy changes from Apple, and deteriorating macroeconomic conditions. So, in February 2023, Zuckerberg declared 2023 the “year of efficiency,” attempting to rewrite the narrative.

As Q1 2023 comes to a close, the “year of efficiency” continues. Meta recently announced a second round of layoffs: the reduction of another 10,000 jobs and the closing of 5,000 open roles yet to be filled. When the dust settles, Meta should have roughly 67,000 employees, bringing its headcount to mid-2021 levels.  

Lost in the narrative of imprudent metaverse investments (the Reality Labs division lost $10 billion in 2022) are the capital expenditures behind Meta’s foray into AI. Apple’s App Tracking Transparency (ATT), a set of privacy changes that has been a significant headwind for the digital ads sector, made tracking ad conversions on iOS devices more difficult and cost Meta an estimated $10 billion in revenue in 2022. In response, Meta invested heavily in AI-driven probabilistic attribution models, which required significant spending on GPUs and specialized data centers.

These capex investments, the AI research know-how it has developed in-house, and Meta’s data advantage over other tech giants make it one of the stronger players to emerge from the generative AI explosion.

The shift to AI

Meta is not just focusing on efficiency; it is also directing its product groups to imbue generative AI features into its products and has steered its top executives to work on AI.

But Meta’s shift to AI and Machine Learning (ML) has been years in the making. Here’s a brief snapshot.

PyTorch – the most popular AI/ML framework for research

One of the most popular machine learning frameworks is PyTorch, an open-source machine learning library created by Meta. It was an internal project started in 2016 that has now become the most popular machine learning library for AI research, supplanting Google’s TensorFlow. In Figure 3 below, we show the number of unique mentions of PyTorch (solid line) vs. TensorFlow (dashed line).

Figure 3: PyTorch vs. TensorFlow – number of unique mentions in academic research. Source

PyTorch took the crown from TensorFlow to become the most popular ML library, despite Google’s first-mover advantage, because it:

  • Is easy to use. It’s very similar to NumPy, integrates easily with Python, and has consistent APIs
  • Has fast performance that is comparable to TensorFlow
  • Is CUDA GPU compatible. PyTorch has, over the years, incorporated many of NVIDIA’s CUDA operations, making it easier for developers to extract performance from NVIDIA’s GPUs
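The NumPy-like feel and the one-line path to a CUDA GPU can be seen in a minimal sketch (a toy example for illustration, not Meta’s production code):

```python
import torch

# Tensors behave much like NumPy arrays: reshaping, broadcasting, matmul
x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
y = x * 2 + 1  # elementwise ops with broadcasting, as in NumPy

# Autograd tracks gradients through ordinary Python code
w = torch.ones(3, requires_grad=True)
loss = (x @ w).sum()
loss.backward()
print(w.grad)  # tensor([3., 5., 7.]) -- the column sums of x

# The same code runs on an NVIDIA GPU with a one-line change
device = "cuda" if torch.cuda.is_available() else "cpu"
x = x.to(device)
```

The `.to(device)` idiom is a big part of PyTorch’s appeal: model and data code stays identical whether it runs on a laptop CPU or a data-center GPU.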

While PyTorch does not directly contribute to Meta’s bottom line (it is released as open-source software), the library helps move the field forward. If PyTorch is the standard for machine learning and AI workloads, Meta gets more output from its costly data centers, its software stack becomes more portable to other hardware providers, and more competitors emerge across the hardware and ML-ops stack, which helps reduce costs and external dependencies. This is a classic example of “commoditizing your complement.”

PS: In business strategy, there’s a well-known approach in which a business commoditizes its products’ complements:

  • If you are the maker of peanut butter, you want the bread to be as cheap and as widely available as possible
  • Microsoft became the most valuable software company in the world (for a time) by commoditizing the PC market
  • Google became one of the most profitable companies in the world by commoditizing access to the internet (Chrome, Gmail, Android OS, etc.)

Meta’s AI data center

With the explosion of generative AI, there’s a shortage of GPUs to run AI workloads on. Even hyperscalers, such as Google and Amazon, are rationing supply and consolidating internal efforts to conserve computing resources.

Fortunately, Meta has its own AI supercomputer, the AI Research SuperCluster (RSC), which was completed in 2022. The company also appears to be continuing to invest in AI-centric data centers: it recently halted development of its data center in Denmark, which was initially designed to support traditional online services, to redesign it for more AI-centric workloads.

Meta’s AI models

Often lost in the whirlwind of AI announcements from OpenAI and Google are updates from Meta. So far, Meta has released several AI models, including large language models (LLMs), as open-source projects.

Segment Anything (SAM)

Segment Anything (SAM) is a new AI model from Meta AI that can “cut out” any object, in any image, with a single click. SAM has learned a general notion of what objects are and can carry out zero-shot segmentation of objects in images without prior examples or additional training.

Here’s an example where the model is incorporated into a video feed and segments and identifies objects.

Figure 4: SAM can take input prompts from other systems, for example, taking a user’s gaze from an AR/VR headset as input to identify an object.

You can find the SAM demo here.

LLaMA – open and efficient foundation language models

In February 2023, Meta released LLaMA, a collection of open-source foundation language models ranging from 7 billion to 65 billion parameters (for comparison, GPT-3 has 175 billion parameters).

What’s astounding about this release is that:

  • It is trained exclusively on publicly available data sources
  • Despite its smaller size, LLaMA can outperform much larger LLMs. For example, LLaMA with 13 billion parameters outperformed GPT-3 (175 billion parameters) on most benchmarks.

PS: Smaller models are extremely useful because they can run on local devices. Here’s a guide on running LLaMA 7B and 13B on a 64GB M2 MacBook Pro. This means you do not need to run inference in the cloud; in other words, running the LLM can be effectively free (aside from electricity consumption).

PPS: In the LLaMA paper, Meta researchers estimated that they used 2,048 of NVIDIA’s A100-80GB GPUs for ~5 months to develop their models, consuming 2,638 MWh. That is roughly the annual energy consumption of 250 US households. That’s a lot of energy!
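The household comparison above can be sanity-checked with quick arithmetic (the ~10.6 MWh/year figure for an average US household is our assumption, based on EIA estimates, not a number from the LLaMA paper):

```python
# Reported total training energy from the LLaMA paper
total_mwh = 2_638

# Assumed average annual US household electricity use (~10,600 kWh, EIA estimate)
household_mwh_per_year = 10.6

households = total_mwh / household_mwh_per_year
print(round(households))  # prints 249, i.e. roughly 250 households for a year
```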

Ok – so far, we have discussed Meta’s open-source contribution. But how does that help Meta’s business? 

Meta’s distribution and data advantage

It is clear that Meta has the expertise and the resources to push the state of the art forward. So one can imagine what it can do with its proprietary user data and its distribution advantage. Meta does not have to create new businesses; the generative AI tools it builds can be layered on top of existing products to create more engagement.

Currently, your depth of engagement on Meta’s family of apps (Facebook, Instagram, Messenger, WhatsApp) is driven by the consumption of media (be it text, images, or videos) created by friends, family, and influencers. In the not-too-distant future, that engagement will be further augmented by generative AI creating custom entertainment for you, whether it is:

  • Virtual friends. For an example of this, Character.AI is a startup that lets you chat with AI-generated characters
  • An automatically generated feed of AI images. An early look at what this could look like was shared in a Twitter thread here
  • AI-generated movies prompted by text. Runway ML is working on a version of this 

Meta will likely have the capability to do all of the above. And with more than two billion users, it has the distribution advantage. All it needs is a large enough GPU cluster to serve the world.

Our future AI dystopia might be in the not-too-distant future…
