Until relatively recently, most of these frameworks targeted NVIDIA’s CUDA API and its GPU hardware. However, thanks to efforts from both the open source community and developers at Intel, AMD, and other companies, a great many of these frameworks can now run on just about any hardware you want. Indeed, that’s the topic of a release Intel just published titled “More Than 500 AI Models Run Optimized on Intel Core Ultra Processors.”
Intel calls out OpenVINO, PyTorch, ONNX, and the Hugging Face model repository as valid targets for running AI on its hardware. Indeed, these four cover the vast majority of locally hosted AI available today. With support for just these four (and Intel’s list goes further), you can host and run all sorts of AI models: large language models, image diffusion and upscaling, object detection and other computer vision tasks, image classification, recommendation engines, and more.
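To make that concrete, here’s a minimal sketch of what running a model through OpenVINO’s Python API looks like on an Intel machine. It assumes a recent openvino pip package, and the “model.xml” filename is a hypothetical stand-in for any model already converted to OpenVINO’s IR format; the “AUTO” device string lets the runtime pick whatever silicon is available, whether that’s the CPU, the integrated GPU, or an NPU.

```python
# Minimal OpenVINO sketch: load an IR model and run one inference
# on whatever Intel device the runtime selects. "model.xml" is a
# placeholder for a model you've already converted to IR format.
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on a Core Ultra machine

model = core.read_model("model.xml")
compiled = core.compile_model(model, "AUTO")  # "AUTO" picks the best available device

# Dummy input matching the model's first input (assumes a static shape)
dummy = np.random.rand(*compiled.inputs[0].shape).astype(np.float32)
result = compiled(dummy)
print(result[compiled.outputs[0]].shape)
```

The same flow applies however the model started life, since OpenVINO can convert models from PyTorch and ONNX, and Hugging Face models can be pulled in through the same conversion path.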
Of course, Chipzilla has a lot more AI-capable hardware than just the Core Ultra processors, but the point is that you don’t have to target discrete GPUs if you want to run AI on client systems. Intel wants to make sure the word is out: AI is democratized, and it can run just about anywhere you want at this point. Just make sure your target system has enough RAM to hold the model you want to run, and you’re probably good to go.