Seoul, April 19, 2024 – Memory company SK hynix Inc. announced today a memorandum of understanding with TSMC to produce next-generation HBM and enhance logic and HBM integration through advanced packaging technology. The company plans to proceed with the development of HBM4, or the sixth generation of the HBM family, slated to be mass produced […]
SK hynix to Invest $3.9B in Indiana HBM Fab and R&D with Purdue
Memory chip company SK hynix announced it will invest $3.87 billion in West Lafayette, Indiana to build an advanced packaging fabrication and R&D facility for AI products. The project, which the company said is the first of its kind in the U.S., will be an advanced…
GUC Tapes Out AI/HPC/Networking Platform on TSMC CoWoS Technology with 7.2 Gbps HBM3 Controller and PHY, GLink-2.5D and 112G-LR SerDes IPs
Hsinchu, Taiwan – June 8, 2021 – Global Unichip Corp. (GUC), the Advanced ASIC Leader, announced today that it has successfully taped out an AI/HPC/Networking CoWoS platform with 7.2 Gbps HBM3 Controller and PHY, GLink-2.5D and third-party 112G-LR SerDes IPs. The main die of the platform contains the world’s first HBM3 Controller and PHY IP with […]
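To put the 7.2 Gbps figure in context, here is an illustrative per-stack bandwidth calculation. The 1024-bit bus width is the standard JEDEC HBM interface width, not a figure from GUC's announcement:

```python
# Illustrative sketch (assumption: standard 1024-bit-wide HBM data bus,
# per JEDEC HBM specifications; the 7.2 Gbps per-pin rate is from the tape-out).
PINS = 1024            # HBM data bus width in bits
DATA_RATE_GBPS = 7.2   # per-pin data rate

# Peak bandwidth = bus width x per-pin rate, converted from bits to bytes
bandwidth_gb_s = PINS * DATA_RATE_GBPS / 8
print(f"{bandwidth_gb_s:.1f} GB/s per stack")  # 921.6 GB/s per stack
```

At that rate, a single HBM3 stack approaches a terabyte per second, which is why a handful of stacks can feed an AI/HPC accelerator.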
ISC 2019 Recap from Glenn Lockwood
In this special guest feature, Glenn Lockwood from NERSC shares his impressions of ISC 2019 from an I/O perspective. “I was fortunate enough to attend the ISC HPC conference this year, and it was a delightful experience from which I learned quite a lot. For the benefit of anyone interested in what they have missed, I took the opportunity on the eleven-hour flight from Frankfurt to compile my notes and thoughts over the week.”
Creating Balance in HPC on the Piz Daint Supercomputer
The flagship supercomputer at the Swiss National Supercomputing Centre (CSCS), Piz Daint, named after a mountain in the Alps, currently delivers 7.8 petaflops of compute performance, or 7.8 quadrillion mathematical calculations per second. A recently announced upgrade will double its peak performance, thanks to a refresh using the latest Intel Xeon CPUs and 4,500 Nvidia Tesla P100 GPUs.
Video: AMD’s Next-Generation GPU and High Bandwidth Memory Architecture
“HBM is a new type of CPU/GPU memory (“RAM”) that vertically stacks memory chips, like floors in a skyscraper. In doing so, it shortens your information commute. Those towers connect to the CPU or GPU through an ultra-fast interconnect called the “interposer.” Several stacks of HBM are plugged into the interposer alongside a CPU or GPU, and that assembled module connects to a circuit board. Though these HBM stacks are not physically integrated with the CPU or GPU, they are so closely and quickly connected via the interposer that HBM’s characteristics are nearly indistinguishable from on-chip integrated RAM.”
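The wide-bus trade-off described above can be sketched numerically. The figures below are commonly cited launch-era numbers for first-generation HBM and GDDR5, used here only for illustration, not taken from the video:

```python
# Illustrative sketch: HBM pairs a very wide bus with a modest per-pin rate,
# while GDDR5 pairs a narrow bus with a high per-pin rate.
# Assumed figures: first-gen HBM = 1024-bit @ 1 Gbps/pin;
# GDDR5 = 32-bit per chip @ 7 Gbps/pin (commonly cited values).

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth = bus width x per-pin data rate, in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

hbm_stack = bandwidth_gb_s(1024, 1.0)
gddr5_chip = bandwidth_gb_s(32, 7.0)
print(f"HBM stack:  {hbm_stack:.0f} GB/s")   # 128 GB/s
print(f"GDDR5 chip: {gddr5_chip:.0f} GB/s")  # 28 GB/s
```

Even at a fraction of the clock rate, the stacked, interposer-connected bus delivers several times the bandwidth of a single GDDR5 chip, which is the core of the design choice the video describes.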