Exascale Computing Project Announces Staff Changes Within Software Technology Group
The US Department of Energy's Exascale Computing Project (ECP) has announced staff changes within its Software Technology group. Lois Curfman McInnes from Argonne will replace Jonathan Carter as Deputy Director for Software Technology, and Sherry Li is now team lead for Math Libraries. "We are fortunate to have such an incredibly seasoned, knowledgeable, and respected staff to help us lead the ECP efforts in bringing the nation's first exascale computing software environment to fruition," said Mike Heroux from Sandia National Labs.
Stepping up Qubit research at the DOE
To use quantum computers on a large scale, we need to improve the technology at their heart – qubits. Qubits are the quantum version of conventional computers’ most basic form of information, bits. The DOE’s Office of Science is supporting research into developing the ingredients and recipes to build these challenging qubits.
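As a rough illustration outside the DOE announcement, a qubit's state can be written as a normalized superposition of the basis states |0⟩ and |1⟩. The short Python sketch below contrasts that with a classical bit; it is a conceptual toy, not anything from the Office of Science program.

```python
import numpy as np

# A classical bit is either 0 or 1.
classical_bit = 1

# A qubit is a normalized 2-component complex state vector:
# |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition of |0> and |1>.
psi = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities follow the Born rule.
p0, p1 = np.abs(psi) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # 0.50, 0.50
```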
Podcast: Rewriting NWChem for Exascale
In this Let's Talk Exascale podcast, researchers from the NWChemEx project team describe how they are readying the popular code for exascale. The NWChemEx team's most significant success so far has been scaling coupled-cluster calculations to a much larger number of processors. "In NWChem we had the Global Arrays as a toolkit to be able to build parallel applications."
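For readers unfamiliar with the distributed-array style of programming that Global Arrays enables, the following Python/mpi4py sketch illustrates the general idea of each process owning a block of a logically global array. It is only an illustration of the concept, not the Global Arrays API or NWChemEx code, and the array size is a hypothetical placeholder.

```python
from mpi4py import MPI
import numpy as np

# Conceptual sketch (not the Global Arrays API): each MPI rank owns a
# contiguous block of a logically global 1-D array, the basic idea behind
# distributed-array toolkits used to parallelize codes such as NWChem.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

GLOBAL_N = 1_000_000  # hypothetical global array length
counts = [GLOBAL_N // size + (r < GLOBAL_N % size) for r in range(size)]
offset = sum(counts[:rank])

# Each rank allocates and fills only its local block.
local = np.arange(offset, offset + counts[rank], dtype=np.float64)

# A "global" operation combines local contributions, e.g. a 2-norm.
local_sq = np.dot(local, local)
global_sq = comm.allreduce(local_sq, op=MPI.SUM)
if rank == 0:
    print("global 2-norm:", np.sqrt(global_sq))
```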
Podcast: Supercomputing the Human Microbiome
In this Let’s Talk Exascale podcast, Kathy Yelick and Lenny Oliker from LBNL describe how the ExaBiome project is developing computational tools to analyze microbial species—bacteria or viruses that typically live in communities of hundreds of different species. “Pushing past the traditional shared-memory-system approach, the ExaBiome team has developed efficient distributed memory implementations and analyzed some of the largest datasets in the metagenomics community.”
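One common distributed-memory technique in metagenomics is to hash k-mers to an owner process so that each distinct k-mer is counted on exactly one rank. The mpi4py sketch below illustrates that general pattern under simplified assumptions (toy reads, a small k); it is not ExaBiome's actual implementation.

```python
import zlib
from collections import Counter
from mpi4py import MPI

# Sketch of distributed k-mer counting: hash each k-mer to an "owner" rank
# so counts for any given k-mer accumulate on exactly one process.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
K = 5  # hypothetical k-mer length

# Each rank would normally read its own chunk of sequence data.
local_reads = ["ACGTACGTGGTAC", "TTGACGTACGTAA"] if rank == 0 else ["GGTACCATGACGT"]

# Extract k-mers and bucket them by destination rank using a
# deterministic hash so every rank agrees on the owner.
outgoing = [[] for _ in range(size)]
for read in local_reads:
    for i in range(len(read) - K + 1):
        kmer = read[i:i + K]
        outgoing[zlib.crc32(kmer.encode()) % size].append(kmer)

# All-to-all exchange: every k-mer travels to its owner rank.
incoming = comm.alltoall(outgoing)

# Count the k-mers this rank owns.
counts = Counter(kmer for bucket in incoming for kmer in bucket)
print(f"rank {rank} owns {len(counts)} distinct k-mers")
```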
Sandia and LBNL to lead Quantum Information Edge Strategic Alliance
A nationwide alliance of national labs, universities, and industry launched today to advance the frontiers of quantum computing systems designed to solve urgent scientific challenges and maintain U.S. leadership in next-generation information technology. “The Quantum Information Edge will accelerate quantum R&D by simultaneously pursuing solutions across a broad range of science and technology areas, and integrating these efforts to build working quantum computing systems that benefit the nation and science.”
Deep Learning on Summit Supercomputer Powers Insights for Nuclear Waste Remediation
A research collaboration between LBNL, PNNL, Brown University, and NVIDIA has achieved exaflop (half-precision) performance on the Summit supercomputer with a deep learning application used to model subsurface flow in the study of nuclear waste remediation. Their achievement, which will be presented during the “Deep Learning on Supercomputers” workshop at SC19, demonstrates the promise of physics-informed generative adversarial networks (GANs) for analyzing complex, large-scale science problems.
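To make the idea of a physics-informed GAN concrete, the PyTorch sketch below adds a toy physics-residual penalty to an ordinary adversarial generator loss. The constraint, network sizes, and data are placeholders chosen for brevity, not the collaboration's subsurface-flow model.

```python
import torch
import torch.nn as nn

# Sketch of a physics-informed GAN: the generator is penalized both by the
# discriminator and by how badly its samples violate a governing constraint.
# "physics_residual" is a toy stand-in, not the actual flow equations.
G = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 4))
D = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def physics_residual(x):
    # Toy constraint: components of each sample should sum to zero.
    return (x.sum(dim=1) ** 2).mean()

real = torch.randn(64, 4)  # placeholder for simulation/observation data
for step in range(200):
    z = torch.randn(64, 8)
    fake = G(z)

    # Discriminator update: distinguish real from generated samples.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: fool D while respecting the physics constraint.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1)) + 10.0 * physics_residual(fake)
    g_loss.backward()
    opt_g.step()
```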
Podcast: ExaStar Project Seeks Answers in Cosmos
In this podcast, Daniel Kasen from LBNL and Bronson Messer of ORNL discuss advancing cosmology through ExaStar, part of the Exascale Computing Project. "We want to figure out how space and time get warped by gravitational waves, how neutrinos and other subatomic particles were produced in these explosions, and how they sort of lead us down to a chain of events that finally produced us."
John Shalf from LBNL on Computing Challenges Beyond Moore’s Law
In this special guest feature from Scientific Computing World, Robert Roe interviews John Shalf from LBNL on the development of digital computing in the post-Moore's law era. "In his keynote speech at the ISC conference in Frankfurt, Shalf described the lab-wide project at Berkeley and the DOE's efforts to overcome these challenges through the development and acceleration of new computing technologies."
Video: Exascale Deep Learning for Climate Analytics
Thorsten Kurth and Josh Romero gave this talk at the GPU Technology Conference. "We'll discuss how we scaled the training of a single deep learning model to 27,360 V100 GPUs (4,560 nodes) on the OLCF Summit HPC System using the high-productivity TensorFlow framework. This talk is targeted at deep learning practitioners who are interested in learning what optimizations are necessary for training their models efficiently at massive scale."
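The general recipe for this kind of data-parallel scaling is to give each GPU its own model replica and average gradients across workers at every step. The sketch below shows that pattern with Horovod and Keras using toy data and an assumed model; it is a minimal illustration, not the authors' actual climate-analytics training code.

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Data-parallel training sketch: one model replica per process, gradients
# averaged across all workers by Horovod each step.
hvd.init()

# Pin each process to a single GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Toy model and data stand in for the real network and climate dataset.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1),
])

# Scale the learning rate with worker count and wrap the optimizer so
# gradients are averaged across ranks.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
model.compile(optimizer=opt, loss="mse")

x = tf.random.normal((1024, 16))
y = tf.random.normal((1024, 1))

# Broadcast initial weights from rank 0 so all replicas start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(x, y, batch_size=32, epochs=2, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```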
DOE powers Aluminum and Steelmaking Research through HPC4Manufacturing Program
Today the HPC4Manufacturing Program announced four federal funding awards for solving key manufacturing challenges in steelmaking and aluminum production through supercomputing. "Primary metals industries are significant energy users, so opportunities to reduce energy consumption in this area are of great interest to our sponsors," said HPC4Manufacturing Director Robin Miles of LLNL. "Additionally, this program is helping U.S. steelmakers produce the higher-strength steels vital to lightweighting the next generation of automobiles."