In this special guest feature, Coury Turczyn from ORNL tells the untold story of what happens to high-end supercomputers like Titan after they have been decommissioned. “Thankfully, it did not include a trip to the landfill. Instead, Titan was carefully removed, trucked across the country to one of the largest IT asset conversion companies in the world, and disassembled for recycling in compliance with the international Responsible Recycling (R2) Standard. This huge undertaking required diligent planning and execution by ORNL, Cray (a Hewlett Packard Enterprise company), and Regency Technologies.”
Supercomputing Structures of Intrinsically Disordered Proteins
Researchers using the Titan supercomputer at ORNL have created the most accurate 3D model yet of an intrinsically disordered protein, revealing the ensemble of its atomic-level structures. “The combination of neutron scattering experiments and simulation is very powerful,” Petridis said. “Validation of the simulations by comparison to neutron scattering experiments is essential to have confidence in the simulation results. The validated simulations can then provide detailed information that is not directly obtained by experiments.”
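For readers wondering what “validation by comparison to neutron scattering” looks like in code, here is a minimal sketch (not the researchers’ actual workflow): the standard Debye formula turns a toy conformational ensemble into an orientationally averaged scattering curve that could then be fit against a measured profile. The chain geometry and ensemble size below are made-up placeholders.

```python
import numpy as np

def debye_intensity(coords, q_values):
    """Orientationally averaged scattering intensity I(q) from the Debye
    formula, treating every atom as an identical point scatterer."""
    diff = coords[:, None, :] - coords[None, :, :]
    r_ij = np.sqrt((diff ** 2).sum(axis=-1))          # pairwise distances
    intensity = []
    for q in q_values:
        qr = q * r_ij
        # sin(qr)/(qr) -> 1 as qr -> 0, which covers the diagonal terms
        sinc = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)
        intensity.append(sinc.sum())
    return np.array(intensity)

# Hypothetical ensemble: 50 random-coil conformations of a 100-site chain
rng = np.random.default_rng(0)
ensemble = [np.cumsum(rng.normal(scale=3.8, size=(100, 3)), axis=0)
            for _ in range(50)]

q = np.linspace(0.01, 0.5, 60)        # momentum transfer, 1/Angstrom
I_sim = np.mean([debye_intensity(c, q) for c in ensemble], axis=0)

# A real validation step would scale I_sim and compare it to the measured
# neutron scattering curve, e.g. with a chi-squared goodness of fit.
print(I_sim[:5])
```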
Supercomputing Galactic Winds with Cholla
Using the Titan supercomputer at Oak Ridge National Laboratory, a team of astrophysicists created a set of galactic wind simulations at the highest resolution ever achieved. The simulations will allow researchers to gather and interpret more accurate, detailed data that elucidates how galactic winds affect the formation and evolution of galaxies.
AI Approach Points to Bright Future for Fusion Energy
Researchers are using deep learning techniques on DOE supercomputers to help develop fusion energy. “Unlike classical machine learning methods, FRNN—the first deep learning code applied to disruption prediction—can analyze data with many different variables such as the plasma current, temperature, and density. Using a combination of recurrent neural networks and convolutional neural networks, FRNN observes thousands of experimental runs called ‘shots,’ both those that led to disruptions and those that did not, to determine which factors cause disruptions.”
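FRNN’s own source is not reproduced here; the snippet below is only a minimal PyTorch sketch, with assumed channel counts and layer sizes, of the general architecture the quote describes: 1D convolutions mixing the measured plasma signals at each time step, a recurrent layer carrying information across the shot, and a head that emits a disruption score.

```python
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    """Toy CNN + RNN classifier over multichannel plasma time series
    (e.g. current, temperature, and density sampled at each time step)."""
    def __init__(self, n_channels=8, conv_filters=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(           # mix channels locally in time
            nn.Conv1d(n_channels, conv_filters, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.rnn = nn.LSTM(conv_filters, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)     # disruption logit per shot

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.conv(x)                     # (batch, filters, time)
        h, _ = self.rnn(h.transpose(1, 2))   # (batch, time, hidden)
        return self.head(h[:, -1, :])        # score from the final step

# Four synthetic "shots": 8 signals, 500 time steps, labeled disrupted/not
shots, labels = torch.randn(4, 8, 500), torch.tensor([[1.], [0.], [0.], [1.]])
model = DisruptionPredictor()
loss = nn.BCEWithLogitsLoss()(model(shots), labels)
loss.backward()
print(float(loss))
```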
Podcast: Quantum Applications are Always Hybrid
In this podcast, the Radio Free HPC team looks at the inherently hybrid nature of quantum computing applications. “If you’re always going to have to mix classical code with quantum code then you need an environment that is built for that workflow, and thus we see a lot of attention given to that in the QIS (Quantum Information Science) area. This is reminiscent of OpenGL for graphics accelerators and OpenCL/CUDA for compute accelerators.”
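To make the “always hybrid” point concrete, here is a toy variational loop in plain Python: a classical optimizer repeatedly asks for an expectation value that, on real hardware, would come back from a QPU or simulator. The quantum step is faked analytically with NumPy (the expectation of Z after preparing RY(theta)|0> is cos(theta)) so the example runs anywhere; the function names are illustrative and belong to no particular QIS framework.

```python
import numpy as np

def quantum_expectation(theta):
    """Stand-in for a call to a QPU or simulator: the expectation of Z
    after preparing RY(theta)|0> is exactly cos(theta)."""
    return np.cos(theta)

def classical_update(theta, lr=0.2):
    """Classical half of the loop: a parameter-shift gradient estimate
    followed by a plain gradient-descent step."""
    grad = 0.5 * (quantum_expectation(theta + np.pi / 2)
                  - quantum_expectation(theta - np.pi / 2))
    return theta - lr * grad

theta = 0.1                      # classical parameter fed into the circuit
for _ in range(50):              # every iteration interleaves both worlds
    theta = classical_update(theta)

print(f"theta = {theta:.3f}, <Z> = {quantum_expectation(theta):.3f}")
```

Because each step bounces between classical bookkeeping and a quantum evaluation, the two halves need to live in one programming environment, which is the point the podcast makes.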
Case Study: Supercomputing Natural Gas Turbine Generators for Huge Boosts in Efficiency
Hyperion Research has published a new case study on how General Electric engineers were able to nearly double the efficiency of gas turbines with the help of supercomputing simulation. “With these advanced modeling and simulation capabilities, GE was able to replicate previously observed combustion instabilities. Following that validation, GE Power engineers then used the tools to design improvements in the latest generation of heavy-duty gas turbine generators to be delivered to utilities in 2017. These turbine generators, when combined with a steam cycle, provided the ability to convert an amazing 64% of the energy value of the fuel into electricity, far superior to the traditional 33% to 44%.”
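The 64% figure follows from the textbook combined-cycle relation: heat the gas turbine rejects is partly recovered by a steam bottoming cycle. The component efficiencies below are illustrative assumptions, not GE’s numbers, and the relation is idealized in that it assumes the steam cycle sees all of the rejected heat.

```python
# Back-of-envelope combined-cycle efficiency.
# Both component efficiencies are assumed for illustration only.
eta_gas = 0.40      # simple-cycle gas turbine efficiency
eta_steam = 0.40    # steam bottoming-cycle efficiency

# Idealized: the steam cycle recovers all heat the gas turbine rejects.
eta_combined = eta_gas + (1 - eta_gas) * eta_steam
print(f"combined-cycle efficiency: {eta_combined:.0%}")   # -> 64%
```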
Understanding Behaviors in the Extreme Environment of Natural Gas Turbine Generators
Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios
Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. “In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan.”
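The “50,000 years of flood risk scenarios” in the headline refers to a stochastic year set. A toy Monte Carlo version of that idea, using made-up event-frequency and loss parameters rather than anything from KatRisk, looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions only: flood event counts per simulated year are
# Poisson-distributed and each event's loss is lognormal.
N_YEARS = 50_000
events = rng.poisson(lam=0.8, size=N_YEARS)        # events in each year
annual_loss = np.array([
    rng.lognormal(mean=15.0, sigma=1.2, size=n).sum() if n else 0.0
    for n in events
])

# The exceedance-probability view insurers use: the annual loss expected
# to be exceeded once in 100 and once in 250 years.
for rp in (100, 250):
    print(f"1-in-{rp}-year annual loss: {np.quantile(annual_loss, 1 - 1/rp):,.0f}")
```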
Video: Thomas Zacharia from ORNL Testifies at House Hearing on the Need for Supercomputing
In this video, Thomas Zacharia from ORNL testifies before the House Energy and Commerce Committee at a hearing on DOE Modernization. “At the OLCF, we are deploying a system that may well be the world’s most powerful supercomputer when it begins operating later this year. Summit will be at least five times as powerful as Titan. It will also be an exceptional resource for deep learning, with the potential to address challenging data analytics problems in a number of scientific domains. Summit is among the products of CORAL, the Collaboration of Oak Ridge, Argonne, and Livermore.”
Using the Titan Supercomputer to Accelerate Deep Learning Networks
A team of researchers from the Department of Energy’s Oak Ridge National Laboratory has married artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops in the generation and training of deep learning networks on the laboratory’s Titan supercomputer.
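The article itself contains no code, but the underlying HPC pattern (generating many candidate networks and training them in parallel across nodes) can be sketched with mpi4py. Everything below, from the candidate generator to the scoring stub, is a hypothetical placeholder, not ORNL’s implementation.

```python
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def random_network():
    """Hypothetical generator of a candidate network description."""
    return {"layers": random.randint(2, 10),
            "filters": random.choice([16, 32, 64, 128])}

def train_and_score(net):
    """Placeholder for training the candidate and returning its validation
    accuracy; faked here with a random score."""
    return random.random()

# Each MPI rank generates and evaluates its own candidates in parallel...
local_best = max(((train_and_score(n), n)
                  for n in (random_network() for _ in range(4))),
                 key=lambda t: t[0])

# ...then results are gathered so rank 0 can report the overall winner.
results = comm.gather(local_best, root=0)
if rank == 0:
    score, net = max(results, key=lambda t: t[0])
    print(f"best of {4 * size} candidates: {net} (score {score:.3f})")
```

Launched with mpiexec, each rank works independently and only small result tuples are communicated, which is what lets this kind of search scale across a machine like Titan.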