<SPONSORED CONTENT> This insideHPC technology guide, insideHPC Guide to HPC Fusion Computing Model – A Reference Architecture for Liberating Data, discusses how organizations can adopt a Fusion Computing Model to process, analyze, and store data so that it is no longer static.
Executive Summary
Change is driven by enormous data growth, with organizations now running workflows typically seen in High Performance Computing (HPC). Compute systems, whether at the core, in the cloud, or at the edge, are now being built and used across businesses and organizations to address specific data-intensive needs such as HPC, Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), Internet of Things (IoT), Industrial Internet of Things (IIoT), and Big Data.
As organizations collect more data, more computational horsepower is needed to sift through it as it grows geometrically, especially in real time. Companies and governments must transform digitally or be left behind in this digital race. Intelligent capture, ingestion, and data analytics technologies built on HPC and AI systems underpin the ability to advance processes from mere digitization to fundamental transformation.
This paper:
- Provides an overview of the Fusion Computing Model.
- Describes how Seagate Technology PLC (Seagate) and Intel Corporation technologies can meet fusion computing needs.
- Introduces World Wide Technology (WWT) and describes how WWT’s Advanced Technology Center (ATC) High Performance Architecture (HPA) Framework and Laboratory can test infrastructure solutions to help organizations meet the needs of fusion computing.
Customer Challenges
Data centers face many challenges when dealing with customer needs, including omnichannel adaptation, failing or inadequate analytics, and lagging legacy business models. Legacy data centers often run aging compute and storage systems that may not handle HPC compute and storage needs. In addition, organizations often have HPC and business systems separated by more than just a firewall. Teams remain siloed, with limited ability or consistency to implement emerging technical architectures or standards.
Data centers need to secure disparate data and move static (at rest) data into dynamic (in motion) data within a given data lifecycle. Modern workflow diversity requires architectural diversity; no single architecture or framework is best for every workflow to extract sustained, high performance. The data fountain concept and the Fusion Computing Model allow organizations to meet these needs by integrating the ever-changing ecosystem of big data platforms and data sources, including ML/DL and the Edge.
Fusion Computing drives the High Performance Architecture
Massive amounts of data are driving a transformation across high performance computing (HPC), artificial intelligence (AI), high performance data analytics (HPDA), and the high performance architecture (HPA). An HPC + AI + Big Data architecture is the foundation of the fusion computing services framework. This framework represents a convergence of the HPC and data-driven AI communities, as they run similar data- and compute-intensive workflows. Just as data security and business models like the cloud (public, private, and hybrid) have evolved, the high performance architecture for HPC + AI + Big Data has evolved to meet the agile demands of businesses and governments.
The Fusion Computing Model: Harnessing the Data Fountain
According to Earl J. Dodd, WWT Global HPC Business Practice Leader, “Data are at the core of the Fusion Computing Model. Think of data as a data fountain that is always flowing and that requires a reference architecture solution to move and use the data constantly. A data fountain is the opposite of a data warehouse or data lake, where data sits at rest and is statically stored.” In the data fountain concept, data always remain in motion. As workflows run, data move as they are written and then read; data are always an active part of the business. For example, data remain continuously available for ML analytics, or for IoT/IIoT and Edge analytics supporting manufacturing quality control or best practices. The ability to use the data from the data fountain requires a High Performance Architecture (HPA).
Over the next few weeks we will explore these topics:
- Executive Summary, Customer Challenges, Fusion Computing drives the High Performance Architecture, The Fusion Computing Model: Harnessing the Data Fountain
- Fusion Computing Reference Architecture, HPC and AI Memory Considerations, Importance of HPC Chips and Accelerators, HPC Storage is Critical to Fusion Computing
- Archiving Data Case Study, What to Consider when Selecting Storage for the HPA Reference Architecture, The Role of Storage in Fusion Computing, Seagate Storage and Intel Compute Enable Fusion Computing, WWT’s Advanced Technology Center (ATC), Conclusion
Download the complete insideHPC Guide to HPC Fusion Computing Model – A Reference Architecture for Liberating Data courtesy of Seagate.