Ex-Intel CEO Gelsinger: AI GPUs Are 10,000X Overpriced And NVIDIA Got Lucky


When you read that headline, it probably sounds like former Intel CTO and CEO Pat Gelsinger has a case of sour grapes. Of course, the man's far more mature and experienced than that. His comments came while speaking on the Acquired podcast as an invited guest at NVIDIA's GTC 2025 conference, where the GPU vendor unveiled Blackwell Ultra GB300. Speaking in the pre-show podcast, Gelsinger told the hosts:

Jensen and I had numerous conversations about ‘throughput computing’—today we refer to it as ‘accelerated computing’—versus scalar computing. You know: branch prediction, and short latency pipelines versus, “hey, who cares how long the pipeline is? just maximize throughput and create the programmability.” And obviously at the time, the CPU was the king of the hill, and I applaud Jensen for his tenacity in just saying, “No, I am not trying to build one of those; I am trying to deliver against the workload starting in graphics” and, then, it became this broader view, and then he got lucky, right? with AI.

It’s pretty clear, in the context of Gelsinger’s full remarks, what he means by this. Intel and NVIDIA each championed their own approach to supercomputing: Intel with powerful CPUs offering strong single-threaded, low-latency performance, and NVIDIA with massive GPUs that excel at multi-threaded throughput. Gelsinger's point is that NVIDIA and Jensen got lucky in that the green team had already laid the foundations for exactly the kind of computing AI needs, which is how GeForce and CUDA essentially became the default platform for AI processing.

In the interview, Gelsinger goes on to say this after another question from the hosts:

Today, if we think about, for instance, the training workload, okay—but that’s got to give way to something much more optimized for inferencing. You know, a GPU is way too expensive; I argue it is 10,000x too expensive to fully realize what we want to do with the deployment of inferencing for AI, and then, of course, what’s beyond that?

Is he saying that NVIDIA is overcharging for its AI GPUs? Not exactly. Gelsinger’s talking about the near future of the tech world, where, just as Jensen has stated, we’re going to be spending far more compute and power on AI inference than on training. What Gelsinger is pointing out is that, while you absolutely do need one (or more) of NVIDIA’s massive GPUs to do AI training in a reasonable amount of time, inference is a much simpler workload that doesn’t need that kind of hardware.

In other words, if AI is going to proliferate the way the tech giants want it to, there are going to have to be devices capable of rapid AI inference at low power and, crucially, low cost. Gelsinger stops short of saying what type of processors those might be, but he could be talking about "NPUs," slimmed-down ASICs dedicated to performing AI inference.

The whole interview is pretty interesting and worth watching; it features many other guests from across the industry with relevant insights on the future of technology and AI.