Jensen Huang, the CEO who has overseen one of the most remarkable corporate ascents in technology history, is known for his habit of donning a black leather jacket on stage. The fashion choice began as affectation and has evolved into something more akin to armor. But Huang’s posture was different when he stood in front of a group of developers and analysts at this year’s GTC conference in San Jose, processing the news that Meta had been quietly increasing its use of AMD’s MI300X GPUs. Not protective. Not nervous. Something closer to the look of a man who has already decided what happens next and is simply waiting for the right moment to reveal it.
The interpretation of the AMD-Meta story that had been making the rounds in recent weeks was clear: Meta, one of the world’s biggest purchasers of AI chips, was moving away from Nvidia’s near-total control of the AI training market. For any other chip maker, that framing would be unremarkable. For Nvidia, which had operated for years as a category of one, with the pricing power and delivery schedules that come with no real competition, it was a signal that the landscape was shifting. AMD had built a chip, the MI300X, that Meta could deploy at scale. That had never happened in any meaningful way before. Huang did not downplay the development. His response was to accelerate.
| Category | Details |
|---|---|
| Topic | Nvidia’s Competitive Response to AMD’s Meta Partnership |
| Key Person | Jensen Huang (CEO, Nvidia) |
| Nvidia’s Key Architecture | Blackwell / Blackwell Ultra (2025–26), Vera Rubin (2026) |
| AMD Competing Chip | MI300X |
| Meta’s Position | Using AMD chips + multi-gen Nvidia partnership (Blackwell/Rubin) |
| Nvidia’s Software Moat | CUDA ecosystem |
| CoreWeave Investment | $2 Billion |
| Nvidia’s China Strategy | RTX 6000D — compliant chip for Chinese market |
| Key Claim | 7x ROI on AI investment vs. alternatives |
| New Product Cadence | Shifted from 2-year to 1-year architecture cycle |
| Reference Website | nvidia.com |
The most tangible manifestation of that acceleration is the change from a two-year architecture release cycle to what Nvidia now calls a “one-year rhythm.” The Blackwell platform, shipping in large quantities through 2025 and 2026, will be followed in 2026 by Vera Rubin, a new architecture with HBM4 memory scheduled to land alongside AMD’s own next-generation parts. By the time competitors have built software and manufacturing capability around one Nvidia generation, the next is already in production. The logic is explicit: keep rivals perpetually a generation behind. The cycle puts enormous strain on Nvidia’s engineering teams, supply chain, and manufacturing partners, especially TSMC, which fabricates Nvidia’s most advanced silicon. Whether the one-year rhythm can be sustained over multiple cycles without yield or quality problems is genuinely unclear, but the intent is obvious.
The more durable aspect of Nvidia’s competitive position has traditionally been CUDA, the software platform developers use to build programs that run on Nvidia hardware, and it is where AMD faces a harder task than the hardware comparison suggests. Even if an AMD GPU matches or exceeds Nvidia’s performance on particular workloads, the developer tooling, optimized libraries, years of software investment, and institutional experience built around CUDA do not move to a rival platform without substantial cost.
By branching out into agentic AI—the infrastructure for AI systems that perform sequences of actions rather than just producing responses—and by offering the software stack for coordinating those systems, Nvidia has been strengthening this moat. Nvidia’s approach is to take control of the next layer of AI bottleneck before the issue fully arises if it is orchestration rather than raw compute.
The AMD headline also flattened the complexity of the Meta relationship. Nvidia and Meta have established a multi-year, multi-generation strategic partnership under which the social-media company will buy millions of Blackwell chips and future Rubin-generation hardware. Meta is concurrently deploying AMD GPUs for some workloads as a vendor-diversification strategy, not a replacement, while staying firmly embedded in the Nvidia ecosystem for others. Nvidia is also expanding its footprint in Meta’s data centers beyond GPUs, offering its Grace CPU and its networking stack for standalone use. The relationship is competitive in some workloads and collaborative in others, an uncommon dynamic that reflects the scope of Meta’s infrastructure needs and the scale at which it operates.
The move that raised the most pointed questions was Nvidia’s $2 billion investment in CoreWeave, a cloud company that buys Nvidia hardware in large quantities. Huang defended it as investing in a generational firm at a critical juncture and backing a business that is building vital AI infrastructure capacity. Critics pointed out that, from some angles, it looks like Nvidia investing in a customer to guarantee that the customer keeps buying Nvidia hardware, a vertical-integration play dressed in the language of venture capital.
Perhaps the outcome matters more than the distinction: Nvidia processors power CoreWeave’s infrastructure, and Nvidia’s funding makes CoreWeave’s continued expansion a shared goal. And as the China strategy develops alongside all of this, with the RTX 6000D and other export-compliant chips intended to sustain revenue in a market complicated by U.S. restrictions, there is a sense that Huang is not responding to AMD’s Meta deal with a single move, but with a comprehensive repositioning that was partially underway before the headline ever appeared.
