I am thrilled by Nvidia’s cute petaflop mini PC wonder, and it’s time for Jensen’s law: it takes 100 months to get equal AI performance for 1/25th of the cost


(Image: Project DIGITS, front view. Credit: Storagereview.com)

Nobody really expected Nvidia to release something like the GB10. After all, why would a tech company that transformed itself into the most valuable firm ever by selling parts costing hundreds of thousands of dollars suddenly decide to sell an entire system for a fraction of that price?

I believe Nvidia wants to revolutionize computing the way IBM did almost 45 years ago with the original IBM PC.

It may be time to introduce Jensen’s law to complement Moore’s law: at equal AI performance, the price per FLOP falls by a factor of roughly 25 every 100 months.

Project DIGITS, as a reminder, is a fully formed, off-the-shelf supercomputer built into something the size of a mini PC. It is essentially a smaller version of the DGX-1, the first of its kind, launched almost a decade ago in April 2016. At the time, the DGX-1 sold for $129,000 with a 16-core Intel Xeon CPU and eight P100 GPGPU cards; DIGITS costs $3,000.

Nvidia has confirmed an AI performance of 1,000 teraflops at FP4 precision (it is unclear whether that figure is for dense or sparse compute). Although there is no direct comparison, one can estimate that the diminutive supercomputer has roughly half the processing power of a fully loaded eight-card Pascal-based DGX-1.
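As a rough sanity check of my proposed Jensen’s law, here is the back-of-the-envelope arithmetic in Python, assuming (as estimated above) that one DIGITS delivers about half a DGX-1’s AI throughput, so two units stand in for one DGX-1:

```python
# Back-of-the-envelope check of "Jensen's law", using the figures above.
# Assumption: one DIGITS delivers roughly half a DGX-1's AI throughput,
# so two $3,000 units stand in for one $129,000 DGX-1.

DGX1_PRICE = 129_000        # USD, April 2016
DIGITS_PRICE = 3_000        # USD, announced January 2025
UNITS_FOR_PARITY = 2        # two DIGITS ~= one DGX-1 (rough estimate)

cost_ratio = DGX1_PRICE / (UNITS_FOR_PARITY * DIGITS_PRICE)
months = (2025 - 2016) * 12 + (1 - 4)   # April 2016 -> January 2025

print(f"Cost reduction at equal AI performance: ~{cost_ratio:.0f}x")  # ~22x
print(f"Elapsed time: {months} months")                               # 105 months
```

That lands at roughly 22x over 105 months, which is in the same ballpark as the 25x-per-100-months rule of thumb.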

At the heart of DIGITS is the GB10 SoC, which has 20 Arm cores (10 Cortex-X925 and 10 Cortex-A725). Other than the confirmed presence of a Blackwell GPU (a cut-down version of the B100), one can only infer the power consumption (100W) and the memory bandwidth (825GB/s, according to The Register).

You should be able to connect two of these devices (but no more) via Nvidia’s proprietary ConnectX technology to tackle larger LLMs such as Meta's Llama 3.1 405B. Shoving these tiny mini PCs into a 42U rack looks like a non-starter for now, as it would encroach on Nvidia’s far more lucrative DGX GB200 systems.
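The memory arithmetic explains why two units are the magic number here. Nvidia’s announcement gives each DIGITS 128GB of unified memory; treating that figure as a given, a quick sketch (ignoring the KV cache and runtime overhead) shows why a 405B-parameter model at FP4 spills past one box but fits in two:

```python
# Rough memory-footprint sketch for Llama 3.1 405B at 4-bit precision.
# Assumes 128GB of unified memory per DIGITS unit (per Nvidia's announcement)
# and ignores the KV cache, activations and runtime overhead.

PARAMS = 405e9              # Llama 3.1 405B parameter count
BYTES_PER_PARAM = 0.5       # 4 bits per weight at FP4
UNIT_MEMORY_GB = 128        # unified memory per DIGITS unit

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"FP4 weights alone: {weights_gb:.1f} GB")    # 202.5 GB
print(f"One unit:  {UNIT_MEMORY_GB} GB -> not enough")
print(f"Two units: {2 * UNIT_MEMORY_GB} GB -> enough, with headroom for the KV cache")
```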

All about the moat

Why did Nvidia embark on Project DIGITS? I think it is all about reinforcing its moat. Making your products so sticky that moving to the competition becomes nearly impossible has worked very well for others: Microsoft and Windows, Google and Gmail, Apple and the iPhone.

The same happened with Nvidia and CUDA - being in the driving seat allowed Nvidia to do things such as moving the goalposts and wrong-footing the competition.

The move to FP4 for inference allowed Nvidia to make impressive benchmark claims such as “Blackwell delivers 2.5x its predecessor’s performance in FP8 for training, per chip, and 5x with FP4 for inference”. Of course, AMD doesn’t offer FP4 computation in the MI300X/325X series, and we will have to wait until later this year for it to arrive in the Instinct MI350X/355X.
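To make the FP4 point concrete, here is a toy sketch of 4-bit E2M1 quantization - a simplified per-tensor scheme for illustration only, not Nvidia’s actual Blackwell microscaling format - showing how weights get snapped onto the handful of values a 4-bit float can represent:

```python
import numpy as np

# The non-negative magnitudes representable in FP4 E2M1, plus their negatives.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.concatenate([-FP4_GRID[:0:-1], FP4_GRID])   # 15 distinct values

def quantize_fp4(weights: np.ndarray) -> np.ndarray:
    """Snap each weight to the nearest FP4 value after per-tensor scaling."""
    scale = np.abs(weights).max() / 6.0                    # map the largest weight onto +/-6
    idx = np.abs(weights / scale - FP4_GRID[:, None]).argmin(axis=0)
    return FP4_GRID[idx] * scale                           # dequantized approximation

w = np.random.randn(8).astype(np.float32)
print("original :", np.round(w, 3))
print("after FP4:", np.round(quantize_fp4(w), 3))
```

Storing weights at 4 bits instead of 16 cuts memory and bandwidth needs to a quarter, which is a big part of how those headline inference numbers are achieved.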