Composable systems are becoming essential for data centers that need to respond to changing user workloads. They enable the dynamic allocation of processor, memory, storage, and network resources to fit the needs of a particular workload, helping organizations solve the problem of inflexible resources that lead to excess capacity and sub-optimal utilization. Composable infrastructure can also improve system performance by allowing bare-metal server and storage resources to be configured on the fly as needs change.
With its new Expanse supercomputer, built with Dell Technologies, the San Diego Supercomputer Center (SDSC) is pioneering composable HPC systems that enable the dynamic allocation of resources tailored to individual workloads.
In this Q&A, SDSC Chief Data Science Officer Ilkay Altintas explains the rationale for composable systems and the approach taken with the new Expanse supercomputer.
Q: The new Expanse system will allow resources to be composed to meet the demands of different projects and workloads. Why is this important?
Ilkay Altintas: While Expanse will easily support traditional batch-scheduled HPC applications, breakthrough research is increasingly dependent on carrying out complex workflows. These include near real-time remote sensor data ingestion and big data analysis, interactive data exploration and visualization, as well as large-scale computation.
One of the critical innovations in Expanse is its ability to support composable systems, with dynamic capabilities, across the computing continuum. Using tools such as Kubernetes, along with workflow software we have developed, Expanse will extend the boundaries of what is possible through integration with the broader computational and data ecosystem.
Q: How do you keep an eye on the future while accommodating a wide variety of different types of workloads with the underlying technologies?
IA: Container technologies and tools, including Kubernetes, are a huge enabler. These tools enable us to allocate resources through dynamic scheduling.
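For illustration, here is a minimal sketch of that pattern using the official Kubernetes Python client; the pod name, container image, and resource figures are placeholders rather than Expanse's actual tooling. The scheduler matches the request to whichever node has free capacity, which is what makes the allocation dynamic.

```python
# A hedged sketch, assuming the official `kubernetes` Python client
# (pip install kubernetes) and an existing cluster context.
from kubernetes import client, config

config.load_kube_config()  # read the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="analysis-step"),  # illustrative name
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="worker",
                image="python:3.11",  # placeholder image
                command=["python", "-c", "print('running on a composed slice')"],
                # The scheduler matches these requests against nodes with
                # free capacity; nothing is statically reserved.
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "4", "memory": "16Gi"},
                    limits={"cpu": "8", "memory": "32Gi"},
                ),
            )
        ],
        restart_policy="Never",
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```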
The trends of the last ten years have brought more capabilities in big data and cloud computing. We have many different stacks and many different ways to tackle data-driven problems. We can process big data on the fly, and we can apply that to steer a workflow or a process toward a solution quite quickly.
Whether you’re talking about managing wildfires, doing personalized medicine, or enabling a smart city, you are opening the door to dynamic data-driven solutions. Many different applications can benefit from quickly composing resources together.
Q: How do you do this now, and how will it work in the future?
IA: Using measurements in the middleware, we can ask for the right amount of resources and give users dynamic access to them. We have made sure that the Expanse system can dynamically allocate part of its resources through Kubernetes.
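As one way to picture "allocating part of the resource," standard Kubernetes objects can carve a bounded slice out of a shared cluster. The sketch below pairs a namespace with a ResourceQuota; the names and quota values are invented for illustration, not SDSC's configuration.

```python
# A hedged sketch using the `kubernetes` Python client; names and numbers
# are illustrative only.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Create a namespace for one project...
v1.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="wildfire-workflow"))
)

# ...and bound what it can consume. Pods in this namespace cannot
# collectively request more than these totals.
v1.create_namespaced_resource_quota(
    namespace="wildfire-workflow",
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="project-slice"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "64", "requests.memory": "256Gi", "pods": "32"}
        ),
    ),
)
```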
Whatever you run on these systems becomes a service that we can compose into an application and coordinate. We have also created a software ecosystem so we can run services and heterogeneous workflows on top of the systems we compose together.
Right now, we have a limited set of resources, but over time these will grow. We are building in a Kubernetes layer for resources that can be dynamically allocated, and we will also be able to add clusters or nodes outside of Expanse.
Say I have an application running on Expanse that connects to another application I need to run on an FPGA. We will be able to declare both as resources, so that one workflow can access and coordinate them without having to stop and shuttle back and forth between these nodes.
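Kubernetes offers one concrete mechanism for declaring hardware like this: extended resources, which a device plugin advertises to the cluster and a workflow step can then request. The sketch below assumes a hypothetical resource name `example.com/fpga` and a placeholder container image.

```python
# A hedged sketch, assuming the `kubernetes` Python client and a device
# plugin that advertises the FPGA as the extended resource "example.com/fpga".
from kubernetes import client, config

config.load_kube_config()

fpga_step = client.V1Pod(
    metadata=client.V1ObjectMeta(name="fpga-step"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="accelerated-kernel",
                image="fpga-app:latest",  # placeholder image
                # Extended resources are requested in limits; the scheduler
                # will only place this step on a node with a free FPGA, so
                # the workflow coordinates the device without leaving
                # Kubernetes' control.
                resources=client.V1ResourceRequirements(
                    limits={"example.com/fpga": "1"}
                ),
            )
        ],
        restart_policy="Never",
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=fpga_step)
```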
Q: Do you see Expanse connecting to other NSF Extreme Science and Engineering Discovery Environment (XSEDE) systems, and maybe edge and cloud systems?
IA: This is going to be a fairly new capability for XSEDE. The vision is that we will see more types of alternative resources, for edge computing, for IoT devices, for facilities, for the cloud.
We are building a way to embrace this continuum, and I think we will see new software tools and capabilities that carry it to the next level. Our goal is to demonstrate this capability, and to be able to allocate usage, or users' time, on the composable part of Expanse rather than just on the traditional supercomputing capability.
Q: If you had one thing that you wanted people to remember about SDSC and composable systems, what would it be?
IA: I think it would be the heterogeneous workflow capability. Expanse is a system that can support varied, up-and-coming workflows by giving them the right data and the right infrastructure to build upon. The ultimate goal is integrated applications and the ability to do dynamic data-driven computing.
To learn more
For a closer look at the Expanse system and the work of the San Diego Supercomputer Center, read the Dell Technologies case study “Computing without boundaries,” watch the case study video, and share this Q&A.