SANTA CLARA, Calif., Oct. 10, 2023 — AMD (NASDAQ: AMD) today announced the signing of a definitive agreement to acquire Nod.ai, a move intended to expand the company’s open AI software capabilities.
AMD said the addition of Nod.ai “will bring an experienced team that has developed an industry-leading software technology that accelerates the deployment of AI solutions optimized for AMD Instinct data center accelerators, Ryzen AI processors, EPYC processors, Versal SoCs and Radeon GPUs to AMD. The agreement strongly aligns with the AMD AI growth strategy centered on an open software ecosystem that lowers the barriers of entry for customers through developer tools, libraries and models.”
“The acquisition of Nod.ai is expected to significantly enhance our ability to provide AI customers with open software that allows them to easily deploy highly performant AI models tuned for AMD hardware,” said Vamsi Boppana, senior vice president, Artificial Intelligence Group at AMD. “The addition of the talented Nod.ai team accelerates our ability to advance open-source compiler technology and enable portable, high-performance AI solutions across the AMD product portfolio. Nod.ai’s technologies are already widely deployed in the cloud, at the edge and across a broad range of end point devices today.”
“At Nod.ai, we are a team of engineers focused on problem solving – quickly – and moving at pace in an industry of constant change to develop solutions for the next set of problems,” said Anush Elangovan, co-founder and CEO, Nod.ai. “Our journey as a company has cemented our role as the primary maintainer and major contributor to some of the world’s most important AI repositories, including SHARK, Torch-MLIR and OpenXLA/IREE code generation technology. By joining forces with AMD, we will bring this expertise to a broader range of customers on a global scale.”
Nod.ai delivers optimized AI solutions to top hyperscalers, enterprises and startups. The compiler-based automation capabilities of Nod.ai’s SHARK software reduce the need for manual optimization and shorten the time required to deploy highly performant AI models across a broad portfolio of data center, edge and client platforms powered by AMD CDNA™, XDNA™, RDNA™ and “Zen” architectures.