If you’ve wondered why GPUs are faster than CPUs, in part it’s because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: So if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets […]
Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers
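To make the precision trade-off concrete, here is a minimal sketch (my illustration, not from any of the stories below; it assumes NumPy is installed) that times the same matrix multiply in double precision, the arithmetic HPL requires, and in single precision, the reduced precision that GPUs and HPL-AI lean on. The matrix size and the timings are purely illustrative.

# Minimal sketch: the same GEMM in FP64 (HPL-style) and FP32 (GPU/HPL-AI-style).
# Sizes and timings are illustrative; real benchmarks run tuned BLAS at far larger scale.
import time
import numpy as np

n = 2048
a64 = np.random.rand(n, n)          # FP64 operands, 8 bytes per element
b64 = np.random.rand(n, n)
a32 = a64.astype(np.float32)        # FP32 operands, 4 bytes per element
b32 = b64.astype(np.float32)

t0 = time.perf_counter()
c64 = a64 @ b64                     # double-precision GEMM
t64 = time.perf_counter() - t0

t0 = time.perf_counter()
c32 = a32 @ b32                     # single-precision GEMM: half the memory
t32 = time.perf_counter() - t0      # traffic, typically higher throughput

print(f"FP64: {t64:.3f} s   FP32: {t32:.3f} s")
print("max |FP64 - FP32| =", float(np.abs(c64 - c32.astype(np.float64)).max()))

The single-precision run trades a few digits of accuracy for speed, which is exactly the bargain behind the HPL vs. HPL-AI comparison in the stories below.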
ARM-based Fugaku Supercomputer on Summit of New TOP500 – Surpasses an Exaflop on AI Benchmark
The new no. 1 system on the updated ranking of the TOP500 list of the world’s most powerful supercomputers, released this morning, is Fugaku, a machine built at the Riken Center for Computational Science in Kobe, Japan. The new top system turned in a High Performance LINPACK (HPL) result of 415.5 petaflops (nearly half an exaflop), outperforming Summit, the former no. 1 system housed at the U.S. Dept. of Energy’s Oak Ridge National Lab, by a factor of 2.8. Fugaku, powered by Fujitsu’s 48-core A64FX SoC, is the first ARM-based system to take the TOP500 top spot.
Jack Dongarra presents: Adaptive Linear Solvers and Eigensolvers
Jack Dongarra from UT Knoxville gave this talk at ATPESC 2019. “Success in large-scale scientific computations often depends on algorithm design. Even the fastest machine may prove to be inadequate if insufficient attention is paid to the way in which the computation is organized. We have used several problems from computational physics to illustrate the importance of good algorithms, and we offer some very general principles for designing algorithms.”
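As a small aside on that point about how a computation is organized, the sketch below (mine, not from the talk; it assumes NumPy) solves the same linear system two ways on the same hardware. Forming an explicit inverse costs roughly three times the flops of a factorization-based solve and typically leaves a larger residual; the problem size here is arbitrary.

import time
import numpy as np

rng = np.random.default_rng(1)
n = 1500
A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant, well-conditioned
b = rng.random(n)

# Poorly organized: form the explicit inverse, then multiply.
t0 = time.perf_counter()
x_inv = np.linalg.inv(A) @ b
t_inv = time.perf_counter() - t0

# Better organized: LU-based solve, fewer flops and a smaller residual.
t0 = time.perf_counter()
x_lu = np.linalg.solve(A, b)
t_lu = time.perf_counter() - t0

print(f"inv(A) @ b  : {t_inv:.3f} s, residual {np.linalg.norm(A @ x_inv - b):.2e}")
print(f"solve(A, b) : {t_lu:.3f} s, residual {np.linalg.norm(A @ x_lu - b):.2e}")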
SC19 Cluster Competition: Students Pack LINs and HPCGs
In this special guest feature, Dan Olds from OrionX.net continues his series of stories on the SC19 Student Cluster Competition. Held as part of the Students@SC program, the competition is designed to introduce the next generation of students to the high-performance computing community. “Nanyang Tech took home the Highest LINPACK Award at the recently concluded SC19 Student Cluster Competition. The team, also known as the Pride of Singapore (at least to me), easily topped the rest of the field with their score of 51.74 Tflop/s.”
ISC 2019 Student Cluster Competition: Day-by-Day Drama, Winners Revealed!
In this special guest feature, Dan Olds from OrionX continues his first-hand coverage of the Student Cluster Competition at the recent ISC 2019 conference. “The ISC19 Student Cluster Competition in Frankfurt, Germany had one of the closest and most exciting finishes in cluster competition history. The overall winner was decided by just over two percentage points and the margin between third and fourth place was less than a single percentage point.”
ISC19 Student Cluster Competition: LINs Packed & Conjugates Gradient-ed
In this special guest feature, Dan Olds from OrionX shares first-hand coverage of the Student Cluster Competition at the recent ISC 2019 conference. “The benchmark results from the recently concluded ISC19 Student Cluster Competition have been compiled, sliced, diced, and analyzed senseless. As you cluster comp fanatics know, this year the student teams are required to run LINPACK, HPCG, and HPCC as part of the ISC19 competition.”
Summit Supercomputer Triples Performance Record on new HPL-AI Benchmark
“Using HPL-AI, a new approach to benchmarking AI supercomputers, ORNL’s Summit system has achieved unprecedented performance levels of 445 petaflops, or nearly half an exaflop. That compares with the system’s official performance of 148 petaflops announced in the new TOP500 list of the world’s fastest supercomputers.”
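The idea behind HPL-AI is mixed-precision iterative refinement: do the expensive factorization in reduced precision, then recover double-precision accuracy with a few cheap correction steps. The snippet below is only a rough sketch of that idea, not the HPL-AI reference implementation; it assumes NumPy, uses FP32 rather than FP16, and re-solves each correction where a real code would reuse a single factorization.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
A = rng.random((n, n)) + n * np.eye(n)       # well-conditioned test matrix
b = rng.random(n)

# "Fast" low-precision solve (stand-in for an FP16/FP32 GPU factorization).
x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)

# Iterative refinement: residuals and corrections accumulate in FP64.
for _ in range(3):
    r = b - A @ x                            # FP64 residual
    x += np.linalg.solve(A.astype(np.float32), r.astype(np.float32))

print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))

The refined solution reaches double-precision-level accuracy while most of the arithmetic runs at the faster, lower precision, which is how Summit’s HPL-AI figure can be roughly three times its HPL figure.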
Video: Mellanox HDR InfiniBand makes inroads on the TOP500
In this video from ISC 2019, Gilad Shainer from Mellanox describes how HDR InfiniBand technology is proliferating across the TOP500 list of the world’s most powerful supercomputers. “HDR 200G InfiniBand made its debut on the list, accelerating four supercomputers worldwide, including the fifth top-ranked supercomputer in the world located at the Texas Advanced Computing Center, which also represents the fastest supercomputer built in 2019.”
The Green HPCG List and the Road to Exascale
In this special guest post, Axel Huebl looks at the TOP500 and HPCG with an eye on power efficiency trends to watch on the road to Exascale. “This post will focus on efficiency, in terms of performance per Watt, simply because the system power envelope is a major constraint for upcoming Exascale systems. With the great numbers from TOP500, we try to extend estimates from the theoretical Flops/Watt of individual compute hardware to system scale.”
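For readers who want to redo the arithmetic, the metric is simply sustained performance divided by system power, and it sets the Exascale power budget. A minimal sketch follows; the two systems and their numbers are hypothetical placeholders, not real TOP500 or Green500 entries.

# Illustrative only: hypothetical systems, not real list entries.
systems = {
    "hypothetical CPU system": {"rmax_pflops": 100.0, "power_mw": 10.0},
    "hypothetical GPU system": {"rmax_pflops": 150.0, "power_mw": 9.0},
}

for name, s in systems.items():
    gflops_per_watt = (s["rmax_pflops"] * 1e6) / (s["power_mw"] * 1e6)  # GFlop/s per Watt
    exascale_mw = 1e9 / gflops_per_watt / 1e6   # MW to sustain 1 EFlop/s (1e9 GFlop/s)
    print(f"{name}: {gflops_per_watt:.1f} GFlops/W -> ~{exascale_mw:.0f} MW for 1 EFlop/s")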
HPCG Benchmark offers an alternative way to rank Top Computers
“The LINPACK program used to represent a broad spectrum of the core computations that needed to be performed, but things have changed,” said Sandia researcher Mike Heroux, who created and developed the HPCG program. “The LINPACK program performs compute-rich algorithms on dense data structures to identify the theoretical maximum speed of a supercomputer. Today’s applications often use sparse data structures, and computations are leaner.”
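To see the shift Heroux describes, compare the kernels at the heart of the two benchmarks: HPL exercises a dense, compute-rich factorization, while HPCG’s core operation is a sparse matrix-vector product that performs only about two flops per stored nonzero and is limited by memory bandwidth. Below is a minimal sketch, assuming NumPy and SciPy are available; the toy 2-D Laplacian merely stands in for HPCG’s actual problem.

import numpy as np
import scipy.sparse as sp

n = 512
# Dense view: every entry stored; a GEMM on this matrix costs ~2*n^3 flops.
dense = np.random.rand(n, n)
d = dense @ dense                            # the HPL-style, compute-rich kernel
flops_dense = 2 * n**3

# Sparse view: only nonzeros stored (CSR); SpMV costs ~2 flops per nonzero
# and is bound by memory bandwidth rather than peak floating-point rate.
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
lap2d = sp.kronsum(lap1d, lap1d).tocsr()     # toy 2-D Laplacian, n*n unknowns
x = np.random.rand(lap2d.shape[0])
y = lap2d @ x                                # the HPCG-style kernel

print(f"dense GEMM flops : {flops_dense:.2e}")
print(f"sparse SpMV flops: {2 * lap2d.nnz:.2e}  (nnz = {lap2d.nnz})")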