SW/HW co-design for near-term quantum computing

Yunong Shi from the University of Chicago gave this talk at ATPESC 2019. “The Argonne Training Program on Extreme-Scale Computing provides two intensive weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

Video: FPGAs and Machine Learning

James Moawad and Greg Nash from Intel gave this talk at ATPESC 2019. “FPGAs are a natural choice for implementing neural networks, as they can handle different algorithms using the compute, logic, and memory resources of the same device. They offer faster performance compared to competitive implementations, as the user can hard-code operations into the hardware. Software developers can use the OpenCL C device-level programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design.”

Video: The Parallel Computing Revolution Is Only Half Over

In this video from ATPESC 2019, Rob Schreiber from Cerebras Systems looks back at historical computing advancements, Moore’s Law, and what happens next. “A recent report by OpenAI showed that, between 2012 and 2018, the compute used to train the largest models increased by 300,000X. In other words, AI computing is growing 25,000X faster than Moore’s law at its peak. To meet the growing computational requirements of AI, Cerebras has designed and manufactured the largest chip ever built.”
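The OpenAI figure invites a quick back-of-envelope check: a 300,000X increase over the six years from 2012 to 2018 implies a compute doubling time of roughly four months, whereas Moore's law (doubling roughly every 24 months) yields only about 8X over the same span. A minimal sketch of that arithmetic in Python:

```python
import math

growth = 300_000          # reported increase in training compute, 2012-2018
months = 72               # six years

# Number of doublings implied by the reported growth
doublings = math.log2(growth)            # ≈ 18.2 doublings
doubling_time = months / doublings       # ≈ 4.0 months per doubling

# Moore's law over the same window: one doubling every ~24 months
moore_growth = 2 ** (months / 24)        # 8x over six years

print(f"Implied doubling time: {doubling_time:.1f} months")
print(f"Moore's-law growth over the same period: {moore_growth:.0f}x")
```

The exact ratio depends on how the two growth rates are compared, but either way the gap is several orders of magnitude.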

Theta and the Future of Accelerator Programming at Argonne

Scott Parker from Argonne gave this talk at ATPESC 2019. “Designed in collaboration with Intel and Cray, Theta is a 6.92-petaflops (Linpack) supercomputer based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.”

Video: I/O Architectures and Technology

Glenn Lockwood from NERSC gave this talk at ATPESC 2019. “Systems are very different, but the APIs you use shouldn’t be. Understanding performance is easier when you know what’s behind the API. What really happens when you read or write some data?”

The Coming Age of Extreme Heterogeneity in HPC

Jeffrey Vetter from ORNL gave this talk at ATPESC 2019. “In this talk, I’m going to cover some of the high-level trends guiding our industry. Moore’s Law as we know it is definitely ending for either economic or technical reasons by 2025. Our community must aggressively explore emerging technologies now!”

NNSA Explorations: ARM for Supercomputing

Howard Pritchard from LANL and Simon Hammond from Sandia gave this talk at ATPESC 2019. “Sandia National Laboratories has been an active partner in leveraging our Arm-based platform since its early design. Featuring it in the deployment of the world’s largest Arm-based supercomputer is a strategic investment for the DOE and the industry as a whole as we race toward achieving exascale computing.”

Apply now for Argonne Training Program on Extreme-Scale Computing 2018

Computational scientists now have the opportunity to apply for the upcoming Argonne Training Program on Extreme-Scale Computing (ATPESC). The event takes place July 29-August 10, 2018, in greater Chicago. “With the challenges posed by the architecture and software environments of today’s most powerful supercomputers, and even greater complexity on the horizon from next-generation and exascale systems, there is a critical need for specialized, in-depth training for the computational scientists poised to facilitate breakthrough science and engineering using these amazing resources.”

Video: The Legion Programming Model

“Developed by Stanford University, Legion is a data-centric programming model for writing high-performance applications for distributed heterogeneous architectures. Legion provides a common framework for implementing applications which can achieve portable performance across a range of architectures. The target class of users dictates that productivity in Legion will always be a second-class design constraint behind performance. Instead Legion is designed to be extensible and to support higher-level productivity languages and libraries.”

Video: System Interconnects for HPC

In this video from the 2017 Argonne Training Program on Extreme-Scale Computing, Pavan Balaji from Argonne presents an overview of system interconnects for HPC.