GigaIO Launches SuperNODE with AMD MI300X GPUs

Carlsbad, California – GigaIO, provider of open workload-defined infrastructure for AI and accelerated computing, today announced that its flagship product, the 32-GPU single-node server SuperNODE, is now shipping with AMD Instinct MI300X accelerators. MI300X series accelerators are designed for generative AI workloads and HPC applications. The MI300X’s massive memory capacity, combined with the SuperNODE’s ability to put 32 GPUs into a single server, allows users to accommodate and train larger AI models for data-intensive generative AI applications.

GigaIO’s trailblazing FabreX AI memory fabric, working with AMD’s innovative Infinity Fabric technology, equips the SuperNODE MI300X with industry-leading low latency, high bandwidth, and advanced congestion control capabilities. This unrivaled performance empowers the system to seamlessly handle the most demanding tensor parallelism workloads for next-generation AI model training.
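To make the tensor-parallelism point concrete, the sketch below shows a minimal column-parallel matrix multiply split across however many GPUs a node exposes. This is a generic PyTorch illustration with made-up layer sizes, not GigaIO or AMD reference software.

```python
import torch

def column_parallel_matmul(x, weight_shards, devices):
    """Multiply x by a weight matrix whose columns are sharded across devices,
    then gather the partial outputs back onto the first device."""
    partials = [x.to(dev) @ shard for shard, dev in zip(weight_shards, devices)]
    return torch.cat([p.to(devices[0]) for p in partials], dim=-1)

if __name__ == "__main__":
    if torch.cuda.is_available():
        # Up to 32 local accelerators on a fully populated SuperNODE.
        devices = [torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())]
    else:
        devices = [torch.device("cpu")] * 4   # CPU fallback so the sketch still runs
    hidden = 4096                             # illustrative layer width
    out_per_shard = 1024                      # illustrative columns held per device
    x = torch.randn(8, hidden)
    shards = [torch.randn(hidden, out_per_shard, device=d) for d in devices]
    y = column_parallel_matmul(x, shards, devices)
    print(y.shape)                            # (8, out_per_shard * number_of_devices)
```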

“The SuperNODE means less time messing with infrastructure and faster time to running and optimizing LLMs,” said Greg Diamos, Co-founder & CTO of Lamini. The enterprise AI platform recently raised $25M to help enterprises turn proprietary expertise into the next generation of LLM capabilities, and has been using SuperNODEs in the TensorWave cloud.

The SuperNODE significantly simplifies the process of deploying and managing AI infrastructure. Traditional setups often involve intricate networking and the synchronization of several servers, which can be both technically challenging and time-consuming. In contrast, the SuperNODE streamlines this process with 32 GPUs in a single server, as sketched below. This simplicity accelerates deployment times and reduces technical barriers, allowing organizations to focus more on innovation and less on infrastructure complexities.
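As a rough illustration of the single-node model (a generic PyTorch/ROCm sketch under assumed torchrun defaults, not a GigaIO-supplied script), a standard single-node launcher sees all accelerators locally and needs no multi-host fabric configuration:

```python
# Hypothetical launch: torchrun --standalone --nproc_per_node=32 single_node_check.py
# Because every GPU is local to one server, the rendezvous stays on localhost and
# there is no InfiniBand or multi-host networking to configure.
import os
import torch
import torch.distributed as dist

def main() -> None:
    # On ROCm builds of PyTorch the "nccl" backend name maps to RCCL.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)            # one process per GPU
    if dist.get_rank() == 0:
        print(f"GPUs visible on this single node: {torch.cuda.device_count()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```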

“We’re consistently hearing from customers that AI infrastructure is hard, but SuperNODE makes it easy, with no performance penalty and no need to change a single line of code,” said Alan Benjamin, CEO of GigaIO. “SuperNODE is all about ease of use and performance — it streamlines the process of getting AI models up and running compared to dealing with multiple complex InfiniBand-networked server configurations, and provides better performance than any other solution available today.”

GigaIO SuperNODEs with AMD MI300X GPUs are available now. Stop by the AMD/GigaIO booth (#F19) at ISC High Performance 2024 for a live demo showing the SuperNODE with AMD MI300X GPUs running LLMs on a single node, or learn more here.
