AI inference Archives - High-Performance Computing News Analysis | insideHPC
https://insidehpc.com/tag/ai-inference/
At the Convergence of HPC, AI and Quantum

NeuReality Launches Developer Portal for NR1 AI Inference Platform https://insidehpc.com/2024/04/neureality-launches-developer-portal-for-nr1-ai-inference-platform/ Tue, 16 Apr 2024 18:16:53 +0000

SAN JOSE — April 16, 2024 — NeuReality, an AI infrastructure technology company, today announced the release of a software developer portal and a demo for installing its software stack and APIs. The company said the announcement marks a milestone following the delivery of its 7nm AI inference server-on-a-chip, the NR1 NAPU, and the bring-up of […]

HPC News Bytes 20240226: Intel Foundry Bash, Nvidia Earnings and AI Inference, HPC in Space, ISC 2024 https://insidehpc.com/2024/02/hpc-news-bytes-20240226-intel-foundry-bash-nvidia-earnings-and-ai-inference-hpc-in-space-isc-2024/ Mon, 26 Feb 2024 15:18:49 +0000

A happy Monday of Leap Year Week to you! We offer a rapid run-through of the latest in HPC-AI, including: the Intel Foundry bash, Gelsinger talking up the “Systems Foundry Era,” Wall Street hanging on Nvidia earnings, AI training vs. inference, digital in-memory computing for inference efficiency, HPC in space, and ISC 2024.

In-Memory Computing Could Be an AI Inference Breakthrough https://insidehpc.com/2024/02/in-memory-computing-could-be-the-inference-breakthrough-ai-needs/ Thu, 22 Feb 2024 15:58:41 +0000

[CONTRIBUTED THOUGHT PIECE] In-memory computing promises to revolutionize AI inference. Given the rapid adoption of generative AI, it makes sense to pursue a new approach that reduces cost and power consumption by bringing compute into memory while improving performance.
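To make the argument concrete: the dominant operation in inference is the matrix-vector multiply, and on conventional hardware every pass re-reads the full weight matrix from memory. The sketch below only illustrates that data-movement cost with hypothetical sizes; it is not a model of any vendor's in-memory hardware.

```python
# Illustrative sketch only (hypothetical sizes): the matrix-vector multiply at
# the heart of AI inference. Compute-in-memory hardware evaluates this product
# inside the memory array instead of streaming the weights to a processor.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4096, 4096)).astype(np.float32)  # stored weights
activation = rng.standard_normal(4096).astype(np.float32)       # per-request input

# On conventional hardware, every inference re-reads the full weight matrix;
# avoiding that data movement is where the cost and power savings come from.
output = weights @ activation

print(f"weights read per inference on a conventional design: {weights.nbytes / 1e6:.1f} MB")
```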

Accelerating AI Inference for High-throughput Experiments https://insidehpc.com/2023/11/accelerating-ai-inference-for-high-throughput-experiments/ Thu, 16 Nov 2023 15:36:20 +0000

An upgrade to the ALCF AI Testbed will help accelerate data-intensive experimental research. The Argonne Leadership Computing Facility’s (ALCF) AI Testbed—which aims to help evaluate the usability and performance of machine learning-based high-performance computing (HPC) applications on next-generation accelerators—has been upgraded to include Groq’s inference-driven AI systems, designed to accelerate the time-to-solution for complex science problems. […]

Kickstart Your Business to the Next Level with AI Inferencing https://insidehpc.com/2023/10/kickstart-your-business-to-the-next-level-with-ai-inferencing/ Mon, 30 Oct 2023 10:00:52 +0000

[SPONSORED GUEST ARTICLE] Check out this article from HPE (with NVIDIA). The need to accelerate AI initiatives is real and widespread across all industries. The ability to integrate and deploy AI inferencing with pre-trained models can reduce development time with scalable, secure solutions. …
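For readers new to the pattern, deploying inference with a pre-trained model usually amounts to loading published weights, switching to evaluation mode, and running a forward pass on new data. The minimal sketch below is not the HPE/NVIDIA solution described above; it assumes PyTorch and torchvision, and ResNet-18 is just a hypothetical example of published weights.

```python
# A minimal sketch of the "deploy a pre-trained model for inference" pattern,
# not the HPE/NVIDIA stack described above. Assumes PyTorch and torchvision;
# ResNet-18 is a hypothetical example of published, pre-trained weights.
import torch
import torchvision

weights = torchvision.models.ResNet18_Weights.DEFAULT
model = torchvision.models.resnet18(weights=weights).eval()  # eval mode for inference

batch = torch.randn(8, 3, 224, 224)   # stand-in for real, preprocessed input images
with torch.inference_mode():          # skip autograd bookkeeping during inference
    logits = model(batch)

print(logits.shape)  # torch.Size([8, 1000]) -- one score per ImageNet class
```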

Mythic Raises $13M for Edge AI Inference https://insidehpc.com/2023/03/mythic-raises-13m-for-edge-ai-inference/ Thu, 09 Mar 2023 17:15:54 +0000

Austin – March 9, 2023 – AI processing company Mythic has raised $13 million in a new round of funding. Mythic’s existing investors Atreides Management, DCVC, and Lux Capital contributed to the round, along with new investors Catapult Ventures and Hermann Hauser Investment (which is led by Hermann Hauser, one of the founders of Acorn Computers and […]

AI Inference Company d-Matrix Announces Collaboration with Microsoft https://insidehpc.com/2022/11/ai-inference-company-d-matrix-announces-collaboration-with-microsoft/ Thu, 17 Nov 2022 15:10:24 +0000

SANTA CLARA – Today, d-Matrix, an AI compute and inference company, announced a collaboration with Microsoft using its low-code reinforcement learning (RL) platform, Project Bonsai, to enable an AI-trained compiler for d-Matrix's digital in-memory compute (DIMC) products. The Project Bonsai platform accelerates time-to-value, with a product-ready solution designed to cut down on development efforts using an […]

MLCommons: Latest MLPerf AI Benchmark Results Show Machine Learning Inference Advances https://insidehpc.com/2022/09/mlcommoncs-latest-mlperf-ai-benchmark-results-show-machine-learning-inference-advances/ Thu, 08 Sep 2022 17:00:18 +0000

SAN FRANCISCO – September 8, 2022 – Today, the open engineering consortium MLCommons announced results from MLPerf Inference v2.1, which analyzes the performance of inference — the application of a trained machine learning model to new data. Inference allows for the intelligent enhancement of a vast array of applications and systems. Here are the results and […]
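The quantities MLPerf Inference reports boil down to how quickly a trained model serves new data, typically as latency and throughput. The sketch below is not the official MLPerf LoadGen harness, only a hedged illustration of that measurement with a hypothetical placeholder model.

```python
# Not the official MLPerf LoadGen harness -- a hedged sketch of the quantity
# MLPerf Inference reports: how quickly a trained model serves new data.
# The model and sizes are hypothetical placeholders.
import time
import torch

model = torch.nn.Sequential(          # stand-in for a trained model
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10)
).eval()
sample = torch.randn(1, 1024)

with torch.inference_mode():
    for _ in range(10):               # warm up caches and the allocator
        model(sample)
    n = 1000
    start = time.perf_counter()
    for _ in range(n):
        model(sample)
    elapsed = time.perf_counter() - start

print(f"mean latency: {1e3 * elapsed / n:.3f} ms, throughput: {n / elapsed:.0f} samples/s")
```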

Improving AI Inference Performance with GPU Acceleration in Aerospace and Defense https://insidehpc.com/2022/08/improving-ai-inference-performance-with-gpu-acceleration-in-aerospace-and-defense/ Thu, 25 Aug 2022 13:00:19 +0000

The aerospace/defense industry often must solve mission-critical problems as they arise while also planning and designing for the rigors of future workloads. Technology advancements let aerospace/defense agencies gain the benefits of AI, but it’s essential to understand these advancements and the infrastructure requirements for AI training and inference.

Untether AI Unveils At-Memory Compute Architecture at Hot Chips https://insidehpc.com/2022/08/untether-ai-unveils-at-memory-compute-architecture-at-hot-chips/ Wed, 24 Aug 2022 18:41:10 +0000

PALO ALTO — Untether AI, an at-memory computation company for artificial intelligence (AI) workloads, today announced at the HOT CHIPS 2022 conference its next-generation architecture for accelerating AI inference workloads, called speedAI devices and internally codenamed “Boqueria.” At 30 TeraFlops per watt (TFlops/W) and 2 PetaFlops of performance, the speedAI architecture sets a new […]
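Taken at face value, the two headline figures imply a modest power envelope. The back-of-envelope check below works only from the numbers quoted in the excerpt; it is not a vendor specification.

```python
# Back-of-envelope check of the headline figures quoted above, not a vendor
# specification: 2 PetaFlops of peak performance at 30 TFlops/W implies a
# power envelope of roughly 67 W when running at peak.
peak_tflops = 2.0 * 1000.0            # 2 PetaFlops expressed in TeraFlops
efficiency_tflops_per_watt = 30.0
implied_watts = peak_tflops / efficiency_tflops_per_watt
print(f"implied power at peak: {implied_watts:.0f} W")  # ~67 W
```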
