Editor’s Note: This video is no longer available.
“Truly accelerating the pace of deep learning and addressing the broad needs of the datacenter requires a combination of high-performance compute and GPU acceleration optimized for handling massive amounts of data with lots of floating-point computation that can be spread across many cores. Large system designers today also need the ability to design efficient systems with the flexibility and openness to configure systems that meet the challenge of today’s very demanding workloads. AMD is empowering designers with those capabilities, allowing them to raise the bar on achievable compute densities by enabling optimized server designs with higher performance, reduced latencies, and improved efficiencies in an open, flexible environment. With the introduction of new EPYC processor-based servers with Radeon Instinct GPU accelerators, combined with our ROCm open software platform, AMD is ushering in a new era of heterogeneous compute for HPC and deep learning.”
Jay Owen is Director of External Research at AMD.
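To make the kind of workload in the quote concrete, here is a minimal sketch of a data-parallel floating-point kernel written against ROCm’s HIP runtime, the style of computation described above as “spread across many cores.” The kernel name, array sizes, and launch configuration are illustrative and not taken from AMD’s materials.

```cpp
// Minimal HIP sketch: a SAXPY kernel spread across many GPU threads.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Each GPU thread computes one element: y[i] = a * x[i] + y[i].
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    const int n = 1 << 20;                       // 1M elements (illustrative size)
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    // Allocate device buffers and copy the input data over.
    float *dx = nullptr, *dy = nullptr;
    hipMalloc(&dx, n * sizeof(float));
    hipMalloc(&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    hipLaunchKernelGGL(saxpy, dim3(blocks), dim3(threadsPerBlock), 0, 0,
                       n, 2.0f, dx, dy);
    hipDeviceSynchronize();

    // Copy the result back and spot-check one element.
    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);                // Expected: 2*1 + 2 = 4

    hipFree(dx);
    hipFree(dy);
    return 0;
}
```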