An AI startup co-founded by a Princeton University professor has won an $18.6 million U.S. Department of Defense (DoD) grant to develop an in-memory chip designed to deliver faster, more efficient AI inference.
AI technology company EnCharge AI has announced a partnership with Princeton University, supported by the U.S. Defense Advanced Research Projects Agency (DARPA), to develop processors that are more efficient “than previously thought possible,” according to the company. DARPA’s Optimum Processing Technology Inside Memory Arrays (OPTIMA) is a $78 million program to develop scalable compute-in-memory accelerators.
The goal is to alleviate rising AI processing requirements, which are currently met by massive server farms. OPTIMA’s objective is to support processing breakthroughs rather than optimization of existing GPUs and other accelerators. DARPA specifically sought to fund “innovative approaches that enable revolutionary advances in science, devices, and systems” using existing VLSI manufacturing techniques, while specifically excluding “research that primarily results in evolutionary improvements to the existing state of practice.”
Dr. Naveen Verma is a Princeton professor of electrical and computer engineering and co-founder and CEO of EnCharge AI. Much of the technology the OPTIMA project seeks to advance originated in his Princeton lab, developed in part with previous funding from DARPA and the DoD, the company said.
“The future is about decentralizing AI inference, unleashing it from the data center, and bringing it to phones, laptops, vehicles and factories,” Verma said. “While EnCharge is bringing the first generation of this technology to market now, we are excited for DARPA’s support in accelerating the next generation to see how far we can take performance and efficiency.”
The announcement comes on the heels of EnCharge AI’s recently announced $22.6 million funding round involving VentureTech Alliance, RTX Ventures and ACVC Partners to develop full-stack AI compute solutions.
The project will explore end-to-end execution of AI workloads on advanced switched-capacitor analog in-memory computing chips developed in Dr. Verma’s lab and commercialized by EnCharge AI. The company said the technology has been proven over several generations of silicon at Princeton, delivering improvements in compute efficiency over digital accelerators while retaining precision and scalability not achievable with current-based analog computing approaches. More information on switched-capacitor in-memory computing can be found in a series of foundational peer-reviewed papers co-authored by Dr. Verma, as well as at www.enchargeai.com/technology.
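To give a rough sense of the general idea behind charge-domain in-memory computing, the sketch below models a switched-capacitor multiply-accumulate in software. It is a simplified illustration of the broad technique, not EnCharge AI's actual circuit design: each unit capacitor is charged to a voltage encoding one input-weight product, then all capacitors are shorted together so charge sharing produces a voltage proportional to the dot product. The function name and parameters are illustrative assumptions.

```python
def charge_domain_mac(inputs, weights, c_unit=1.0):
    """Model charge sharing across equal unit capacitors.

    Capacitor i is charged to V_i = inputs[i] * weights[i] (weights are
    assumed binary here, as in a bit-sliced scheme). Shorting all the
    capacitors together gives the shared voltage
        V_out = sum(C * V_i) / sum(C) = dot(inputs, weights) / N,
    i.e. the dot product scaled by the number of capacitors.
    """
    n = len(inputs)
    total_charge = sum(c_unit * x * w for x, w in zip(inputs, weights))
    total_cap = c_unit * n
    return total_charge / total_cap  # shared voltage after charge sharing


# Example: four activations, one binary weight bit per capacitor.
inputs = [0.2, 0.5, 0.9, 0.4]
weights = [1, 0, 1, 1]
v_out = charge_domain_mac(inputs, weights)
dot = sum(x * w for x, w in zip(inputs, weights))
# Scaling the shared voltage by N recovers the dot product exactly.
assert abs(v_out * len(inputs) - dot) < 1e-9
```

Because the result depends on capacitor ratios rather than absolute current levels, matched capacitors can in principle give better precision than current-summing analog schemes, which is the property the article attributes to the switched-capacitor approach.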
EnCharge AI CTO Dr. Kailash Gopalakrishnan noted that constraints posed by current AI processing technologies motivated EnCharge AI to participate in OPTIMA.
“EnCharge brings together leaders from Princeton, IBM, Nvidia, Intel, AMD, Meta, Google and other companies that have led computing into the modern era. At some point, many of us realized that innovation within existing computing architectures as well as silicon technology-node scaling was slowing at exactly the time where AI was creating massive new demands for computing power and efficiency. While GPUs are the best available tool today, we are excited that DARPA is supporting development of a new generation of chips to unlock the potential of AI.”