When it comes to AI training on large data sets and increasingly complex large language models (LLMs) like Llama 70B, it’s not simply a matter of throwing GPU horsepower at the problem until you find a solution. At least, it shouldn’t be.
Phison’s aiDAPTIV+ is a hybrid software and hardware solution for LLM training. It integrates Phison’s Pascari AI100E M.2 SSDs into a complete package with linear scaling. Impressive! According to Phison, it unlocks access to run workloads previously reserved for data centers on a single workstation or server – supporting up to Llama-3 70B and Falcon 180B.
Phison’s chart showcases the capabilities of aiDAPTIV+. Above, you can see a single system with the same base configuration in each case: four RTX 6000 Ada GPUs with a combined 192GB of GDDR6 memory, plus an additional 512GB of system RAM. Phison’s aiDAPTIVLink middleware extends this GPU memory capacity with two 2TB SSDs, paving the way for massive model support with low latency.
It makes perfect sense: one of the main reasons so many GPUs are required for large model training is the speed and limited capacity of GPU memory. It sounds like a simple solution, but you get the sense that the SSD, memory, and storage experts at Phison had to perform some digital sorcery to get it all working.
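To see why capacity is the bottleneck, here’s a rough back-of-the-envelope estimate. The per-parameter byte counts below are generic assumptions for mixed-precision training with an Adam-style optimizer, not Phison’s figures, and they ignore activations entirely:

```python
# Rough memory estimate for full fine-tuning of a 70B-parameter model.
# Assumed bytes per parameter (typical mixed-precision + Adam setup):
#   FP16 weights (2) + FP16 gradients (2) + FP32 master weights (4)
#   + Adam first/second moments (4 + 4) = 16 bytes/param, before activations.

def training_footprint_gb(num_params: float) -> float:
    bytes_per_param = 2 + 2 + 4 + 4 + 4
    return num_params * bytes_per_param / 1e9

gpu_memory_gb = 4 * 48  # four RTX 6000 Ada cards at 48GB each

needed = training_footprint_gb(70e9)
print(f"~{needed:.0f} GB needed vs {gpu_memory_gb} GB of GDDR6")
# ~1120 GB needed vs 192 GB of GDDR6
```

The gap between those two numbers is exactly the kind of shortfall that normally forces you onto a multi-node data center setup – and the kind of data that middleware like aiDAPTIVLink can spill to system RAM and SSD instead.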
The best part is that it’s a cost-effective drop-in solution that requires no fundamental changes or modifications to existing AI applications. “You can effortlessly reuse existing hardware or add nodes as needed,” Phison writes. “System integrators have access to AI100E SSD, middleware library licenses, and full Phison support to facilitate smooth system integration.”
Supported models include the following, with more on the way.
- Llama, Llama-2, Llama-3, CodeLlama
- Vicuna, Falcon, Whisper, CLIP Large
- MetaFormer, ResNet, DeiT Base, Mistral, TAID
Now, all we need is for Phison to create a similar solution for PC gaming, where aiDAPTIV+ leverages SSD storage to turn any 8GB video card into a 128GB beast that’ll never run out of memory.