

NVIDIA's Rubin Already Set to Supersede Blackwell



Published: 06-05-2024




We’d hardly gotten over our excitement for NVIDIA’s massive Blackwell GPU when news of something new started being whispered in our ears. The rumors point to a new GPU codenamed “Rubin,” and while it doesn’t sound like a major revolution, it might be the polish that turns Blackwell into something truly special.

Power Consumption Is a Key Focus

Blackwell GPUs have been rumored to top out at 600W, which is a lot of heat and power to put inside any computer. The same rumors suggest Rubin will address this by moving to TSMC's N3 node and HBM4 stacked memory.

The Chip

Based on the rumors so far, it seems there will be two actual Rubin chips: the R100 and the GR200. The R100 is likely the first product based on the 'big' Rubin GPU for AI and HPC workloads, while the GR200 is expected to pair a Rubin GPU with a Grace CPU, similar to the current GH200 superchip.

It’s All About AI, Baby!

It should come as no surprise that AI technology is going to be a major part of Rubin. In fact, it may be the only thing this chip is designed for. The rumored 4x-reticle design and Chip-on-Wafer-on-Substrate-L (CoWoS-L) packaging point to a large and expensive chip, potentially priced at around $50,000! And we thought the 4090 was expensive!


The race is on to offer faster AI processing using less power and for less money. NVIDIA is currently the leader in AI hardware, and Rubin looks like the first sign that the company intends to keep up a relentless development pace in this area.

When Do We Peasants Get a Taste?

NVIDIA’s pivot into giant enterprise AI chips has made it hard to figure out when professional users and enthusiasts will get their hands on new silicon. Are chips like Blackwell and Rubin even part of the same lineage as the next generation of workstation and consumer cards?


The truth is we don’t know, but it seems unlikely, since NVIDIA will now be making new AI GPUs every year. That’s a much faster cadence than their graphics chips, which usually arrive 2-3 years apart.

We’re Waiting For the AI Workstation Revolution

As builders of both desktop systems and data center server racks, we’re excited by any advanced new hardware, rumored or not. We’ve definitely felt the pull of AI applications shaping what our customers want to buy and which features we want to offer in our systems.


With low-power NPUs (Neural Processing Units) set to become a standard part of consumer CPUs and high-power AI processors filling the cloud, we think plenty of AI silicon will reach our workstations sooner rather than later too.