[SPONSORED CONTENT] The past few years have brought many changes to the HPC storage world, both in technology, such as non-volatile memory express (NVMe) and persistent memory, and in the growth of software-defined storage solutions. Gone are the days when IBM Spectrum Scale (secretly, we know we all still call it GPFS) and Lustre were the only real choices on the market. In retrospect, the choice was easy: you picked one of the two and away you went. And nobody ever wondered whether they had made the right choice.
IBM Launches Storage for Data Management across Hybrid Clouds
IBM has announced innovations across its storage product line designed to improve the management of data across complex hybrid cloud environments and to increase data availability and resilience. The company announced plans to launch container-native software-defined storage (SDS), IBM Spectrum Fusion, in the second half of 2021, designed to combine IBM’s parallel file system […]
Lenovo to deploy 17 Petaflop supercomputer at KIT in Germany
Today Lenovo announced a contract for a 17 petaflop supercomputer at Karlsruhe Institute of Technology (KIT) in Germany. Called HoreKa, the system will come online this fall and will be handed over to the scientific communities by summer 2021. The procurement contract is reportedly on the order of EUR 15 million. “The result is an innovative hybrid system with almost 60,000 next-generation Intel Xeon Scalable processor cores and 220 terabytes of main memory, as well as 740 NVIDIA A100 Tensor Core GPUs. A non-blocking NVIDIA Mellanox InfiniBand HDR network with 200 Gbit/s per port is used for communication between the nodes. Two Spectrum Scale parallel file systems offer a total storage capacity of more than 15 petabytes.”
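As a rough sanity check on the headline figure, the sketch below shows how the announced components plausibly add up to about 17 petaflops. It is a back-of-envelope estimate only: the A100 figure is NVIDIA’s published FP64 Tensor Core peak, while the per-core CPU number is our assumption, not something stated in the announcement.

    # Back-of-envelope peak estimate for HoreKa from the announced parts.
    A100_FP64_TENSOR_TFLOPS = 19.5  # NVIDIA's published A100 peak (FP64 Tensor Core)
    GPUS = 740                      # from the announcement
    CPU_CORES = 60_000              # "almost 60,000" Xeon Scalable cores
    CPU_GFLOPS_PER_CORE = 45        # assumption: AVX-512, 32 FLOP/cycle at ~1.4 GHz

    gpu_pf = GPUS * A100_FP64_TENSOR_TFLOPS / 1_000
    cpu_pf = CPU_CORES * CPU_GFLOPS_PER_CORE / 1_000_000
    print(f"GPUs: {gpu_pf:.1f} PF, CPUs: {cpu_pf:.1f} PF, total: {gpu_pf + cpu_pf:.1f} PF")
    # GPUs: 14.4 PF, CPUs: 2.7 PF, total: 17.1 PF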
NERSC Rolls Out New Community File System for Next-Gen HPC
NERSC recently unveiled its new Community File System (CFS), a long-term data storage tier developed in collaboration with IBM and optimized for capacity and manageability. “In the next few years, the explosive growth in data coming from exascale simulations and next-generation experimental detectors will enable new data-driven science across virtually every domain. At the same time, new nonvolatile storage technologies are entering the market in volume and upending long-held principles used to design the storage hierarchy.”
Enabling Oracle Cloud Infrastructure with IBM Spectrum Scale
In this video, Doug O’Flaherty from IBM describes how Spectrum Scale Storage (GPFS) helps Oracle Cloud Infrastructure deliver high performance for HPC applications. “To deliver insights, an organization’s underlying storage must support new-era big data and artificial intelligence workloads along with traditional applications while ensuring security, reliability and high performance. IBM Spectrum Scale meets these challenges as a high-performance solution for managing data at scale with the distinctive ability to perform archive and analytics in place.”
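The “archive and analytics in place” capability comes from Spectrum Scale’s built-in, SQL-like policy engine, which migrates files between storage pools without changing their paths. Below is a minimal sketch of driving it from Python; mmapplypolicy and the MIGRATE rule syntax are standard Spectrum Scale tooling, but the file-system name, pool names, and 90-day threshold here are assumptions for illustration only.

    import subprocess
    import tempfile

    # Hypothetical rule: move files untouched for 90+ days from the fast
    # 'system' pool to a capacity 'archive' pool, in place -- file paths
    # and application access are unchanged.
    POLICY = """
    RULE 'archive_cold' MIGRATE FROM POOL 'system' TO POOL 'archive'
      WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
    """

    def apply_policy(filesystem: str = "gpfs1") -> None:
        """Scan one file system and execute the migration rule."""
        with tempfile.NamedTemporaryFile("w", suffix=".rules", delete=False) as f:
            f.write(POLICY)
            rules_path = f.name
        # mmapplypolicy walks the file system metadata and applies matching
        # rules; '-I yes' performs the migration rather than merely
        # reporting what it would do.
        subprocess.run(["mmapplypolicy", filesystem, "-P", rules_path, "-I", "yes"],
                       check=True)

    if __name__ == "__main__":
        apply_policy()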
OSC Doubles Down on IBM HPC Storage
The Ohio Supercomputer Center is working with IBM to expand the center’s HPC storage capacity by 8.6 petabytes. Slated for completion in December, the new storage system will not only expand capacity for scratch and project storage but also allow OSC to offer data encryption and full file-system audit capabilities that can support secure storage of sensitive data, such as medical data or other personally identifiable information.
Lenovo’s Niagara Cluster Upgrade makes it the Fastest Supercomputer in Canada
Today Lenovo unveiled the addition of 1,500 ultra-dense Lenovo ThinkSystem SD530 high-performance compute nodes for Niagara, Canada’s most powerful research supercomputer. As demand for high performance computing in quantitative research increases rapidly, the 4.6 petaflop supercomputer will help Canadian researchers achieve meaningful results in artificial intelligence, astrophysics, climate change, oceanic research and other disciplines using big data.
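For scale, a quick division of the announced figures puts each node at roughly 3 teraflops. This sketch uses only the numbers from the teaser above, and the result is a fleet-wide average, not a measured per-node peak.

    # Average peak throughput per Niagara node, from the announced figures.
    total_pflops = 4.6   # aggregate peak from the announcement
    nodes = 1_500        # ThinkSystem SD530 nodes added
    print(f"~{total_pflops * 1_000 / nodes:.1f} TF per node on average")  # ~3.1 TF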
Video: Australian Bureau of Meteorology moves to a new Data Production Service
Tim Pugh from the Australian Bureau of Meteorology gave this talk at the DDN User Group in Denver. “The Bureau of Meteorology, Australia’s national weather, climate and water agency, relies on DDN’s GRIDScaler Enterprise NAS storage appliance to handle its massive volumes of research data to deliver reliable forecasts, warnings, monitoring and advice spanning the Australian region and Antarctic territory.”
WekaIO Unveils Industry’s First Cloud-native Scalable File System
Today WekaIO, a venture-backed high-performance cloud storage software company, emerged from stealth to introduce the industry’s first cloud-native scalable file system, which delivers unprecedented performance to applications and scales to exabytes of data in a single namespace. Headquartered in San Jose, California, WekaIO has developed the first software platform that harnesses flash technology to create a high-performance, parallel, scale-out file storage solution for both on-premises servers and public clouds.
“Data is at the heart of every business, but many industries are hurt by the performance limitations of their storage infrastructure,” said Michael Raam, president and CEO of WekaIO. “We are heralding a new era of storage, having developed a true scale-out data infrastructure that puts independent, on-demand capacity and performance control into the hands of our customers. It’s exciting to be part of a company that delivers a true revolution for the storage industry.”
NEC’s Aurora Vector Engine & Advanced Storage Speed HPC & Machine Learning at ISC 2017
In this video from ISC 2017, Oliver Tennert from NEC Deutschland GmbH introduces the company’s advanced technologies for HPC and Machine Learning. “Today NEC Corporation announced that it has developed data processing technology that accelerates the execution of machine learning on vector computers by more than 50 times in comparison to Apache Spark technologies.”