Ask anybody what Nvidia makes, and they're likely to say "GPUs" first. For years, the chipmaker has been defined by advanced parallel computing, and the emergence of generative AI and the resulting surge in demand for GPUs has been a boon for the company.
But Nvidia's recent moves signal that it's looking to lock in more customers on the less compute-intensive end of the AI market: customers who don't necessarily need the beefiest, most powerful GPUs to train AI models, but who are instead seeking the most efficient ways to run agentic AI software. Nvidia recently spent billions to license technology from a chip startup focused on low-latency AI computing, and it also started selling stand-alone CPUs as part of its newest superchip system.
Yesterday, Nvidia and Meta announced that the social media giant had agreed to buy billions of dollars' worth of Nvidia chips to provide computing power for its massive infrastructure projects, with Nvidia's CPUs as part of the deal.
The multiyear deal is an expansion of a cozy ongoing partnership between the two companies. Meta previously estimated that by the end of 2024 it would have purchased 350,000 H100 chips from Nvidia, and that by the end of 2025 the company would have access to 1.3 million GPUs in total (though it wasn't clear whether those would all be Nvidia chips).
As part of the latest announcement, Nvidia said that Meta would "build hyperscale data centers optimized for both training and inference in support of the company's long-term AI infrastructure roadmap." This includes a "large-scale deployment" of Nvidia's CPUs and "millions of Nvidia Blackwell and Rubin GPUs."
Notably, Meta is the first tech giant to announce a large-scale purchase of Nvidia's Grace CPU as a stand-alone chip, something Nvidia said would be an option when it revealed the full specifications of its new Vera Rubin superchip in January. Nvidia has also been emphasizing that it offers the technology that connects the various chips, part of its "soup-to-nuts approach" to compute power, as one analyst puts it.
Ben Bajarin, CEO and principal analyst at the tech market research firm Creative Strategies, says the move signals that Nvidia recognizes that a growing range of AI software now needs to run on CPUs, much in the same way that conventional cloud applications do. "The reason why the industry is so bullish on CPUs inside data centers right now is agentic AI, which puts new demands on general-purpose CPU architectures," he says.
A recent report from the chip newsletter SemiAnalysis underscored this point. Analysts noted that CPU usage is accelerating to support AI training and inference, citing one of Microsoft's data centers for OpenAI as an example, where "tens of thousands of CPUs are now needed to process and manage the petabytes of data generated by the GPUs, a use case that wouldn't have otherwise been required without AI."
Bajarin notes, though, that CPUs are still just one component of the most advanced AI hardware systems. The number of GPUs Meta is buying from Nvidia still far exceeds the number of CPUs.
"If you're one of the hyperscalers, you're not going to be running all of your inference computing on CPUs," Bajarin says. "You just need whatever software you're running to be fast enough on the CPU to interact with the GPU architecture that's actually the driving force of that computing. Otherwise, the CPU becomes a bottleneck."
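Bajarin's bottleneck point can be sketched with a toy pipeline model (all names and numbers here are hypothetical illustrations, not figures from Nvidia, Meta, or the analysts quoted): when CPU-side work and GPU compute overlap in a pipeline, steady-state throughput is set by the slower of the two stages.

```python
# Toy model: in a pipelined serving stack, the CPU stage (tokenizing input,
# orchestrating agent steps, moving data) and the GPU stage (model compute)
# run concurrently on successive batches. Throughput is therefore limited by
# whichever stage takes longer per batch.

def steady_state_throughput(batch_size, cpu_ms_per_batch, gpu_ms_per_batch):
    """Requests per second once the pipeline is full (hypothetical model)."""
    bottleneck_ms = max(cpu_ms_per_batch, gpu_ms_per_batch)
    return batch_size / (bottleneck_ms / 1000.0)

# Suppose the GPU finishes a batch of 32 requests in 20 ms, but CPU-side
# preparation takes 50 ms: the GPU idles, and the CPU caps throughput.
slow_cpu = steady_state_throughput(32, cpu_ms_per_batch=50, gpu_ms_per_batch=20)

# Cut the CPU stage to 10 ms and the GPU becomes the limiter again.
fast_cpu = steady_state_throughput(32, cpu_ms_per_batch=10, gpu_ms_per_batch=20)

print(round(slow_cpu), round(fast_cpu))  # prints: 640 1600
```

In this made-up example, speeding up only the CPU stage raises throughput from 640 to 1,600 requests per second without touching the GPUs, which is the sense in which a slow CPU "becomes a bottleneck" for the GPU architecture behind it.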
Meta declined to comment on its expanded deal with Nvidia. During a recent earnings call, the social media giant said it planned to dramatically increase its spending on AI infrastructure this year, to between $115 billion and $135 billion, up from $72.2 billion last year.

