Why IT Must Have an Influential Role in Strategic Decisions About Sustainability
In today’s climate-aware era, sustainability is an increasing focus for business decision-makers. With technology serving as the catalyst for growth in many organizations, IT leaders have a crucial role in making decisions that will positively impact future generations. In this whitepaper, written by International Data Corporation (IDC), the premier global provider of market intelligence, […]
It’s Time to Resolve the Root Cause of Congestion
Today, every high-performance computing (HPC) workload running globally faces the same crippling issue: congestion in the network.
Congestion can delay workload completion times for crucial scientific and enterprise workloads, making HPC systems unpredictable and leaving high-cost cluster resources waiting for delayed data to arrive. Despite various brute-force attempts to resolve the congestion issue, the problem has persisted. Until now.
In this paper, Matthew Williams, CTO at Rockport Networks, explains how recent innovations in networking technologies have led to a new network architecture that targets the root causes of HPC network congestion, specifically:
– Why today’s network architectures are not a sustainable approach to HPC workloads
– How HPC workload congestion and latency issues are directly tied to the network architecture
– Why a direct interconnect network architecture minimizes congestion and tail latency (see the sketch after this list for how tail latency is typically quantified)
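To make “tail latency” concrete: it refers to the slow right-hand end of the latency distribution (p99, p99.9) rather than the mean, and it is the part that congestion inflates most. Below is a minimal Python sketch of how it is typically quantified, using synthetic samples rather than measurements from any real interconnect:

    import random
    import statistics

    # Synthetic round-trip latencies in microseconds (illustrative only,
    # not measurements from any real fabric).
    samples = [random.lognormvariate(mu=2.0, sigma=0.5) for _ in range(100_000)]

    # quantiles(n=1000) returns 999 cut points: index 989 is the 99th
    # percentile, index 998 the 99.9th -- the "tail" congestion inflates.
    cuts = statistics.quantiles(samples, n=1000)
    print(f"mean  : {statistics.fmean(samples):7.1f} us")
    print(f"p99   : {cuts[989]:7.1f} us")
    print(f"p99.9 : {cuts[998]:7.1f} us")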
Azure HBv3 VMs and Excelero NVMesh Performance Results
Azure offers Virtual Machines (VMs) with local NVMe drives that deliver tremendous performance. These local NVMe drives are ephemeral: if the VM fails or is deallocated, the data on the drives is lost. Excelero NVMesh provides a means of protecting and sharing data on these drives, making their performance readily available without sacrificing data durability. This eBook from Microsoft Azure and AMD, in coordination with Excelero, provides in-depth technical information about the performance and scalability of volumes created on Azure HBv3 VMs with this software-defined storage layer.
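As a rough illustration of how such throughput numbers are gathered, here is a minimal Python sketch that times a large sequential write. The mount point /mnt/nvmesh is a hypothetical path for an NVMesh-backed volume, and a real benchmark would use a dedicated tool such as fio with direct I/O rather than buffered writes:

    import os
    import time

    MOUNT = "/mnt/nvmesh"          # hypothetical NVMesh volume mount point
    CHUNK = 4 * 1024 * 1024        # 4 MiB per write
    TOTAL = 4 * 1024**3            # write 4 GiB in total

    path = os.path.join(MOUNT, "throughput_test.bin")
    buf = os.urandom(CHUNK)

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())       # ensure data actually reached the device
    elapsed = time.perf_counter() - start

    print(f"sequential write: {TOTAL / 1024**3 / elapsed:.2f} GiB/s")
    os.remove(path)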
HPE Reference Architecture for SAS 9.4 on HPE Superdome Flex 280 and HPE Primera Storage
This Reference Architecture highlights the key findings and demonstrated scalability from running the SAS® 9.4 Mixed Analytics Workload on the HPE Superdome Flex 280 Server and HPE Primera Storage. The results show that this combination delivers up to 20GB/s of sustained throughput, up to a 2x performance improvement over the previous generation of server and storage tested.
How to Integrate GPUs into your Business Analytics Ecosystem
This whitepaper discusses how GPU technology can augment data analytics performance, enabling data warehouses and other solutions to better respond to increasingly common database limitations caused by growing data set sizes, rising user concurrency and demand, and increased use of interactive analytics. The way in which the analytics market has evolved […]
Things to Know When Assessing, Piloting, and Deploying GPUs
In this insideHPC Guide, our friends over at WEKA suggest that when organizations decide to move existing or new applications to a GPU-accelerated system, there are many factors to consider: assessing the components the new environment requires, running a pilot program to learn about the system’s future performance, and planning for eventual scaling to production levels.
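For the assessment step, a quick scripted sanity check of the target environment is a sensible starting point. This minimal sketch uses PyTorch as one common way to probe for GPUs; any CUDA-capable stack would serve equally well:

    import torch

    if not torch.cuda.is_available():
        print("No CUDA-capable GPU visible; check drivers and runtime.")
    else:
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, "
                  f"{props.total_memory / 1024**3:.1f} GiB, "
                  f"compute capability {props.major}.{props.minor}")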
Simplifying Persistent Container Storage for the Open Hybrid Cloud
This ESG Technical Validation documents remote testing of Red Hat OpenShift Container Storage, with a focus on ease of use and the breadth of its data services. Containers have become an important part of data center modernization: they simplify building, packaging, and deploying applications, are hardware-agnostic, and are designed for agility, running on physical, virtual, or cloud infrastructure and moving between them as needed.
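To give a sense of what persistent container storage looks like from the application side, here is a minimal sketch that requests a volume through the Kubernetes Python client. The storage class name is an assumption (OpenShift Container Storage typically exposes Ceph-backed classes), not something taken from the ESG report:

    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() inside a pod

    # The storage class below is an assumed Ceph RBD class; list the real
    # ones in your cluster with client.StorageV1Api().list_storage_class().
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-pvc"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="ocs-storagecluster-ceph-rbd",
            resources=client.V1ResourceRequirements(
                requests={"storage": "10Gi"}
            ),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )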
Massive Scalable Cloud Storage for Cloud Native Applications
In this comprehensive technology white paper, written by Evaluator Group, Inc. on behalf of Lenovo, we delve into OpenShift, a key component of Red Hat’s portfolio of products designed for cloud native applications. It is built on top of Kubernetes, along with numerous other open source components, to deliver a consistent developer and operator platform that can run across a hybrid environment and scale to meet the demands of enterprises. Red Hat uses the open source Ceph storage technology to provide the data plane for its OpenShift environment.
insideHPC Guide to HPC/AI for Energy
In this technology guide, we take a deep dive into how the team of Dell Technologies and AMD is working to provide solutions for a wide array of needs around more strategic development of oil and gas energy reserves. We’ll start with a series of compelling use-case examples, then introduce a number of important pain points solved with HPC and AI. We’ll continue with specific solutions for the energy industry from Dell and AMD, then examine a case study of how geophysical services and equipment company CGG successfully deployed HPC technology for competitive advantage. Finally, we’ll leave you with a short list of valuable resources from Dell to help guide you along the path to HPC and AI.
The Race for a Unified Analytics Warehouse
This white paper from our friends over at Vertica discusses how the race for a unified analytics warehouse is on. The data warehouse has been around for almost three decades. Shortly after big data platforms were introduced in the late 2000s, there was talk that the data warehouse was dead—but it never went away. When big data platform vendors realized that the data warehouse was here to stay, they started building databases on top of their file system and conceptualizing a data lake that would replace the data warehouse. It never did.