On Monday, Nvidia (NVDA) CEO Jensen Huang took the wraps off the company’s highly anticipated Blackwell graphics processing unit (GPU) at its annual GTC conference in San Jose, Calif.
The Blackwell is the successor to Nvidia’s already highly coveted H100 and H200 GPUs, and according to the company, it is the world’s most powerful chip. The H100 and H200 chips have become the go-to GPUs for AI applications, helping to rocket Nvidia’s data center revenue over the last few quarters.
In its latest quarter alone, the company reported data center revenue of $18.4 billion. To put the segment’s growth into perspective, Nvidia reported annual revenue of $27 billion for all of 2022.
“For three decades we’ve pursued accelerated computing with the goal of enabling transformative breakthroughs like deep learning and AI,” Huang said in a statement.
“Generative AI is the defining technology of our time. Blackwell GPUs are the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realize the promise of AI for every industry.”
Like the Hopper GPUs before it, Blackwell will be available as a standalone GPU, or two Blackwell GPUs can be combined with Nvidia’s Grace central processing unit (CPU) to create what the company calls the GB200 Superchip.
That setup, the company says, will deliver up to 30 times the performance of the Nvidia H100 GPU for large language model inference workloads while reducing energy consumption by up to 25 times. That energy savings is an important part of the story.
Nvidia customers, including Microsoft (MSFT), Amazon (AMZN), Google (GOOG, GOOGL), Meta (META), and Tesla (TSLA), are currently using or actively developing their own in-house AI chips as alternatives to Nvidia’s offerings. Part of the reason is to avoid paying the tens of thousands of dollars Nvidia’s chips are estimated to cost; the other is that Nvidia’s chips are especially power-hungry.
By talking up its energy savings with the Grace Blackwell Superchip, Nvidia is speaking directly to its customers’ concerns.
Nvidia says Amazon, Google, Microsoft, and Oracle (ORCL) will be among the first companies to start offering access to Blackwell chips through their cloud platforms.
In addition to the Blackwell and Grace Blackwell chips, Nvidia also debuted its DGX SuperPOD supercomputer system. The DGX SuperPOD is made up of eight or more DGX GB200 systems, each of which includes 36 Grace Blackwell (GB200) Superchips connected to run as a single computer. Nvidia says customers can scale the SuperPOD to support tens of thousands of GB200 Superchips depending on their needs.