NVIDIA has confirmed its beefed-up Blackwell Ultra and next-gen Vera Rubin AI architectures are on track, lining up with recent reports that we’ll get a huge info dump on the company’s new AI GPUs at GTC 2025 in a few weeks’ time.

During the company’s recent earnings call, NVIDIA CEO Jensen Huang confirmed to analysts that the recent GB200 AI server yield issues won’t affect the company’s annual release cadence, but analysts also asked Jensen how NVIDIA would manage Blackwell Ultra and Rubin launching in such close succession.
Jensen said: “Yes. Blackwell Ultra is second half. As you know, with the first Blackwell we had a hiccup that probably cost us a couple of months. We’re fully recovered, of course. The team did an amazing job recovering, and all of our supply chain partners and just so many people helped us recover at the speed of light.”
He continued: “And so, now we’ve successfully ramped up production of Blackwell. But that doesn’t stop the next train. The next train is on an annual rhythm and Blackwell Ultra with new networking, new memories, and of course, new processors, and all of that is coming online.”
Jensen talked about the company’s next-gen Vera Rubin AI architecture: “And the click after that is called Vera Rubin and all of our partners are getting up to speed on the transition of that and so preparing for that transition. And again, we’re going to provide a big, huge step-up”.
NVIDIA’s next-gen Rubin AI GPUs will use the new bleeding-edge HBM4 memory standard, which SK hynix, Samsung, and Micron are all hard at work on right now. HBM3E is being used on B200 and GB200, while B300 and GB300 will also use HBM3E but bump up the memory capacity per GPU.