Mellanox Announces HDR InfiniBand-to-Ethernet Gateway Appliance for High Performance Data Centers
Today Mellanox introduced Mellanox Skyway, a 200 gigabit HDR InfiniBand to Ethernet gateway appliance. Mellanox Skyway enables a scalable and efficient way to connect high-performance, low-latency InfiniBand data centers to external Ethernet infrastructure. Mellanox Skyway is the next generation of the existing 56 gigabit FDR InfiniBand to 40 gigabit Ethernet gateway system, deployed in multiple data centers around the world.
GPU-Powered Turbocharger coming to JUWELS Supercomputer at Jülich
The Jülich Supercomputing Centre is adding a high-powered booster module to its JUWELS supercomputer. Designed in cooperation with Atos, ParTec, Mellanox, and NVIDIA, the booster module is equipped with several thousand GPUs designed for extreme computing power and artificial intelligence tasks. “With the launch of the booster in 2020, the computing power of JUWELS will be increased from currently 12 to over 70 petaflops.”
Dell EMC to Deploy World’s Largest Industrial Supercomputer at Eni
Today Eni announced plans to deploy the world’s largest industrial supercomputer at its Green Data Center in Italy. Called “HPC5,” the new system from Dell EMC will triple the computing power of the existing HPC4 system. The combined machines will have a total peak performance of 70 petaflops. “HPC5 will be made up of 1,820 Dell EMC PowerEdge C4140 servers, each with two Intel Gold 6252 24-core processors and four NVIDIA V100 GPU accelerators. The servers will be connected through an InfiniBand Mellanox HDR ultra-high-performance network.”
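Taking only the figures quoted above at face value, a quick back-of-the-envelope calculation shows the scale of HPC5's compute resources:

```python
# Totals for HPC5, derived solely from the configuration quoted above.
servers = 1820
cpus_per_server = 2    # Intel Xeon Gold 6252
cores_per_cpu = 24
gpus_per_server = 4    # NVIDIA V100

total_cores = servers * cpus_per_server * cores_per_cpu
total_gpus = servers * gpus_per_server

print(total_cores)  # 87360 CPU cores
print(total_gpus)   # 7280 GPUs
```

That is roughly 87,000 CPU cores and over 7,000 GPUs in the HPC5 system alone, before counting the existing HPC4 machine.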
Video: InfiniBand In-Network Computing Technology and Roadmap
Rich Graham from Mellanox gave this talk at the UK HPC Conference. “In-Network Computing transforms the data center interconnect into a ‘distributed CPU’ and ‘distributed memory,’ overcoming performance barriers and enabling faster and more scalable data analysis. HDR 200G InfiniBand In-Network Computing technology includes several elements: the Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), smart tag matching, the rendezvous protocol, and more. This session discusses the InfiniBand In-Network Computing technology and performance results, as well as a view of the future roadmap.”
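The core idea behind SHARP is that reduction operations (such as summing contributions from every compute node) are performed inside the switch hierarchy rather than on the end hosts. As a minimal illustrative sketch, not the Mellanox API or implementation, the data movement can be modeled as a tree reduction in which each "switch" forwards only a partial result up a level:

```python
# Illustrative software model of the aggregation that SHARP performs in
# switch silicon: each level of the tree combines its children's values,
# so only partial sums travel upward instead of all raw data reaching
# a single root node. This is a conceptual sketch, not Mellanox code.

def tree_reduce(values, fanout=2):
    """Reduce per-node values the way a switch hierarchy would:
    group 'fanout' children under each switch, sum each group,
    and repeat until one value remains at the root."""
    level = list(values)
    while len(level) > 1:
        level = [sum(level[i:i + fanout])
                 for i in range(0, len(level), fanout)]
    return level[0]

# Eight compute nodes each contribute one value; the root sees only
# the fully aggregated result.
print(tree_reduce([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

With the aggregation in the fabric, the time and host-CPU cost of a collective operation such as MPI_Allreduce no longer grows with the amount of data converging on one node, which is the scalability benefit the talk describes.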
Harvard Names New Lenovo HPC Cluster after Astronomer Annie Jump Cannon
Harvard has deployed a liquid-cooled supercomputer from Lenovo at its FASRC computing center. The system, named “Cannon” in honor of astronomer Annie Jump Cannon, is a large-scale HPC cluster supporting scientific modeling and simulation for thousands of Harvard researchers. “This new cluster will have 30,000 cores of Intel 8268 “Cascade Lake” processors. Each node will have 48 cores and 192 GB of RAM.”
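From the quoted figures alone, the cluster's node count and per-core memory follow directly:

```python
# Derived from the Cannon figures quoted above: 30,000 cores total,
# 48 cores and 192 GB of RAM per node.
total_cores = 30_000
cores_per_node = 48
ram_per_node_gb = 192

nodes = total_cores // cores_per_node
ram_per_core_gb = ram_per_node_gb / cores_per_node

print(nodes)            # 625 nodes
print(ram_per_core_gb)  # 4.0 GB of RAM per core
```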
Video: InfiniBand In-Network Computing Technology and Roadmap
Gilad Shainer from Mellanox gave this talk at the MVAPICH User Group. “In-Network Computing transforms the data center interconnect into a ‘distributed CPU’ and ‘distributed memory,’ overcoming performance barriers and enabling faster and more scalable data analysis. These technologies are in use at some of the largest recent supercomputers around the world, including the top TOP500 platforms. The session discusses the InfiniBand In-Network Computing technology and performance results, as well as a view of the future roadmap.”
Fujitsu to Deploy Gadi Supercomputer at NCI in Australia
Today Fujitsu announced a contract to upgrade Australia’s fastest supercomputer at NCI. Called “Gadi,” the new supercomputer will replace NCI’s current supercomputer, Raijin, which was also provided by Fujitsu back in 2012. “The upgrade of this critical infrastructure will see Australia continue to play a leading role in addressing some of our greatest global challenges. This new machine will keep Australian research and the 5,000 researchers who use it at the cutting edge.”
Video: Mellanox HDR InfiniBand makes inroads on the TOP500
In this video from ISC 2019, Gilad Shainer from Mellanox describes how HDR InfiniBand technology is proliferating across the TOP500 list of the world’s most powerful supercomputers. “HDR 200G InfiniBand made its debut on the list, accelerating four supercomputers worldwide, including the fifth top-ranked supercomputer in the world located at the Texas Advanced Computing Center, which also represents the fastest supercomputer built in 2019.”
Mellanox Rocks the TOP500 with Ethernet and InfiniBand
Today Mellanox announced that the company’s InfiniBand solutions accelerate six of the top ten HPC and AI supercomputers on the June TOP500 list. The six systems Mellanox accelerates include the top three and four of the top five: the fastest supercomputer in the world at Oak Ridge National Laboratory, #2 at Lawrence Livermore National Laboratory, #3 at the Wuxi Supercomputing Center in China, #5 at the Texas Advanced Computing Center, #8 at Japan’s Advanced Industrial Science and Technology, and #10 at Lawrence Livermore National Laboratory. “HDR 200G InfiniBand, the fastest and most advanced interconnect technology, makes its debut on the list, accelerating four supercomputers worldwide, including the fifth top-ranked supercomputer in the world located at the Texas Advanced Computing Center, which also represents the fastest supercomputer built in 2019.”
AMD Powers Corona Cluster for HPC Analytics at Livermore
Lawrence Livermore National Lab has deployed a 170-node HPC cluster from Penguin Computing. Based on AMD EPYC processors and Radeon Instinct GPUs, the new Corona cluster will be used to support the NNSA Advanced Simulation and Computing (ASC) program in an unclassified site dedicated to partnerships with American industry. “Even as we do more of our computing on GPUs, many of our codes have serial aspects that need really good single core performance. That lines up well with AMD EPYC.”