OpenAI has just received one of the first engineering builds of the NVIDIA DGX B200 AI server, posting a picture of their new delivery on X:
Inside, the NVIDIA DGX B200 is a unified AI platform for training, fine-tuning, and inference built around NVIDIA’s new Blackwell B200 AI GPUs. Each DGX B200 system packs 8x B200 AI GPUs with up to 1.4TB of HBM3e memory and up to 64TB/s of aggregate memory bandwidth. NVIDIA’s new DGX B200 AI server can pump out 72 petaFLOPS of FP8 training performance and 144 petaFLOPS of FP4 inference performance.
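For a rough sense of per-GPU scale, here is a minimal back-of-envelope sketch that simply divides the published system totals evenly across the eight GPUs; these derived per-GPU figures are approximations, not official NVIDIA specifications:

```python
# Back-of-envelope split of the DGX B200's published system totals across its 8 GPUs.
# Assumes an even split per GPU; results are approximate, not official per-GPU specs.

NUM_GPUS = 8
TOTAL_HBM_TB = 1.44           # ~1.4TB of HBM3e across the system
TOTAL_BANDWIDTH_TBS = 64      # 64TB/s aggregate memory bandwidth
TRAINING_PFLOPS = 72          # FP8 training performance (with sparsity)
INFERENCE_PFLOPS = 144        # FP4 inference performance (with sparsity)

print(f"HBM per GPU:       ~{TOTAL_HBM_TB / NUM_GPUS * 1000:.0f} GB")
print(f"Bandwidth per GPU: ~{TOTAL_BANDWIDTH_TBS / NUM_GPUS:.0f} TB/s")
print(f"Training per GPU:  ~{TRAINING_PFLOPS / NUM_GPUS:.0f} petaFLOPS")
print(f"Inference per GPU: ~{INFERENCE_PFLOPS / NUM_GPUS:.0f} petaFLOPS")
```

That works out to roughly 180GB of HBM3e and 8TB/s of memory bandwidth per B200 GPU, with about 9 petaFLOPS of training and 18 petaFLOPS of inference performance each.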
OpenAI CEO Sam Altman is well aware of the advancements of NVIDIA’s new Blackwell GPU architecture, recently saying: “Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We’re excited to continue working with NVIDIA to enhance AI compute.”