AMD Introduces Its MI200 Series GPUs: Exascale-Class GPU Accelerators for Deep Learning

Scientific endeavors such as weather forecasting, climate modeling, analyzing new energy sources, drug discovery, training AI models, or running any form of large-scale simulation require a massive amount of computational power. The next big step in the High-Performance Computing (HPC) field is exascale computing: systems capable of 10^18 FLOPS (floating-point operations per second).
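To put the exascale target in perspective, a back-of-the-envelope estimate shows roughly how many accelerators it takes to reach 10^18 FLOPS. The 95.7 TFLOPS peak FP64 matrix throughput figure for the MI250X used below is an assumption drawn from AMD's published spec sheet, not from this article, and the result is a theoretical-peak estimate only:

```python
# Rough estimate of accelerators needed to reach one exaFLOPS.
# ASSUMPTION: 95.7e12 FLOPS is the MI250X's peak FP64 matrix
# throughput per AMD's spec sheet (not stated in this article).
EXAFLOPS = 10**18                  # 1 exaFLOPS
MI250X_PEAK_FP64_MATRIX = 95.7e12  # FLOPS, theoretical peak

accelerators_needed = EXAFLOPS / MI250X_PEAK_FP64_MATRIX
print(round(accelerators_needed))  # prints 10449
```

Real deployments need more devices than this, since sustained application performance falls well short of theoretical peak.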

Keeping these goals in mind, AMD has recently introduced its MI200 server accelerator series, the Instinct MI250X and MI250 GPUs, which outperform AMD’s previous Instinct MI100 by a considerable margin and compete with Nvidia’s A100 GPUs. The Instinct MI200 GPUs are based on AMD’s latest CDNA 2 architecture, which provides 880 Matrix Cores and new FP64 matrix operations, supporting a wide range of HPC and AI applications.

Along with the GPUs, AMD’s latest release of the Radeon Open Compute platform (aka ROCm) gives developers, scientists, and researchers the flexibility to extract maximum performance from the available computational resources. With the massive advancement in computing power provided by the AMD Instinct MI200 GPUs, the porting of additional APIs and frameworks to ROCm is also expected in the future. AMD has also introduced Infinity Hub, which provides a collection of tuned GPU software containers and deployment guides for HPC and AI applications on AMD’s accelerators.

To conclude, the new AMD CDNA architecture-based Instinct MI200 series data center accelerators deliver up to a 4.9x improvement in HPC performance over the latest competing data center accelerators, according to AMD. This boost in supercomputing power will enable researchers to tackle grand challenges and accelerate scientific discovery. The Instinct MI200 series GPUs are expected to reach cloud data centers by 2022.