London, 27 March 2024: Oriole Networks – a startup using light to train LLMs faster with less power – has raised £10 million in seed funding to improve AI performance and adoption, and solve AI’s energy problem. The round, which the company said is one of the UK’s largest seed raises in recent years, was co-led […]
The post Oriole Networks Raises £10m for Faster LLM Training appeared first on High-Performance Computing News Analysis | insideHPC.
oneAPI is an open industry effort supported by more than 100 organizations: an open, unified, cross-architecture programming model for CPUs and accelerators (GPUs, FPGAs, and others). Based on standards, the programming model simplifies software development and delivers uncompromised performance for accelerated compute without proprietary lock-in, while enabling the integration of existing code.
The post Advancing HPC through oneAPI Heterogeneous Programming in Academia & Research appeared first on High-Performance Computing News Analysis | insideHPC.
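The cross-architecture portability oneAPI describes is easiest to see in a small SYCL/DPC++ kernel. The sketch below is illustrative only and not taken from the article: a vector add that runs unchanged on whichever CPU, GPU, or FPGA device the runtime's default selector picks.

```cpp
// Minimal SYCL sketch of oneAPI's cross-architecture model (illustrative,
// not from the article): the same kernel targets whatever device the
// default selector finds at runtime (CPU, GPU, or FPGA).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // The default selector chooses an available accelerator, or the host CPU.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        // Buffers hand ownership of the data to the runtime for the kernel's lifetime.
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            // Accessors declare how the kernel uses each buffer.
            sycl::accessor A(buf_a, h, sycl::read_only);
            sycl::accessor B(buf_b, h, sycl::read_only);
            sycl::accessor C(buf_c, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }  // Buffer destruction synchronizes and copies results back into c.

    std::cout << "c[0] = " << c[0] << "\n";  // Expect 3
    return 0;
}
```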
Researchers at Los Alamos National Laboratory today announced a quantum machine learning “proof” they say shows that training a quantum neural network requires only a small amount of data, “(upending) previous assumptions stemming from classical computing’s huge appetite for data in machine learning, or artificial intelligence.” The lab said the theorem has direct applications, including […]
The post Los Alamos Claims Quantum Machine Learning Breakthrough: Training with Small Amounts of Data appeared first on High-Performance Computing News Analysis | insideHPC.
Open engineering consortium MLCommons has released new results from MLPerf Training v2.0, which measures how fast various platforms train machine learning models. The organization said the latest MLPerf Training results “demonstrate broad industry participation and up to 1.8X greater performance ultimately paving the way for more capable intelligent systems….” As it has done with previous […]
The post MLPerf: Latest Results Highlight ‘More Capable ML Training’ appeared first on High-Performance Computing News Analysis | insideHPC.
In this sponsored post, Tim Miller, Vice President, Product Marketing, One Stop Systems, discusses autonomous trucking and argues that achieving AI Level 4 (no driver) in these vehicles requires powerful AI inference hardware capable of running and coordinating many different inferencing engines simultaneously.
The post Scalable Inferencing for Autonomous Trucking appeared first on High-Performance Computing News Analysis | insideHPC.
The post Azure Adopts AMD Instinct MI200 GPU for Large-Scale AI Training appeared first on High-Performance Computing News Analysis | insideHPC.
Reports are circulating in AI circles that researchers from Rice University claim a breakthrough in AI model training acceleration – without using accelerators. Running AI software on commodity x86 CPUs, the Rice computer science team says neural networks can be trained 15x faster than on GPU-based platforms. If valid, the new approach would be a double boon for organizations implementing AI strategies: faster model training using less costly microprocessors.
The post Rice Univ. Researchers Claim 15x AI Model Training Speed-up Using CPUs appeared first on High-Performance Computing News Analysis | insideHPC.