AMD Instinct MI200 Speculated to Utilize 110 Compute Units Per MCM GPU

Website Coelacanth’s Dream located a GitHub commit that may signal the configuration of AMD’s upcoming Aldebaran GPU-based Instinct accelerator. The new GPU, codenamed ‘GFX90A’, will utilize the CDNA 2 architecture, a derivative of the GFX9 family (the Vega architecture).

AMD Instinct MI200 Could Feature Two CDNA 2 GPU Dies With 110 Compute Units

There are three codes, GFX906_60, GFX908_120, and GFX90A_110, each tied to a different GPU. GFX906_60 is believed to refer to the Instinct MI60, GFX908_120 to the Instinct MI100, and GFX90A_110 to the newer-generation AMD accelerator. In each code, the number after the underscore refers to the compute unit count.


For instance, the MI60 utilizes 60 compute units and the MI100 uses 120, so the new accelerator is expected to utilize 110 compute units. What is interesting is that the next-gen accelerator from AMD would use fewer compute units than the MI100.

Source: VideoCardz
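The code-name scheme above can be sketched in a few lines. This is a minimal illustration of the article's reading of the commit, not code from the ROCm sources; the helper name `parse_target` and the product mapping are assumptions for demonstration.

```python
# Sketch: split a ROCm-style target string like "GFX90A_110" into the
# GFX ID and the compute-unit count the article infers from it.

def parse_target(target: str) -> tuple[str, int]:
    """Return (gfx_id, compute_units) for a string like 'GFX90A_110'."""
    gfx_id, cu_count = target.split("_")
    return gfx_id, int(cu_count)

# Speculated product mapping per the article (hypothetical labels).
speculated_products = {
    "GFX906": "Instinct MI60",
    "GFX908": "Instinct MI100",
    "GFX90A": "next-gen Instinct (MI200?)",
}

for target in ["GFX906_60", "GFX908_120", "GFX90A_110"]:
    gfx, cus = parse_target(target)
    print(f"{gfx}: {cus} CUs -> {speculated_products[gfx]}")
```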

It is stated that the Aldebaran GPU will feature 128 compute units, which does not match the compute unit count in the new code. However, GPUs typically ship with some compute clusters disabled for yield reasons; if that is the case here, the count would drop to 110 active compute units.

Considering the Shader Engine and CU configurations so far, Aldebaran / MI200 is an MCM configuration with 2 GPU dies, so if the configuration is symmetric per die rather than per Shader Engine, each die would have 4 SEs. Each die could then have 56 CUs, and disabling one CU per die would make a total of 110 CUs.

— Coelacanth’s Dream

Website VideoCardz states,

It is unclear if AMD is planning to double the FP32 core count on the CDNA 2 architecture, but assuming that they do, with a theoretical 1500 MHz GPU clock the accelerator would offer a single-precision compute performance of 42.2 TFLOPs, 1.82x more than MI100. If that isn’t the case, then MI200 would need at least a 1650 MHz clock to reach the same FP32 throughput of 23 TFLOPs.

In the case of HPC accelerators such as MI200, the FP64 performance is far more important. According to previous leaks, MI200 is to feature full-rate FP64 performance, which means either doubling or quadrupling the performance over MI100, depending on the architecture.
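The throughput arithmetic behind VideoCardz's estimate can be checked with a quick sketch. This assumes the standard FP32 formula of cores × 2 ops/clock (FMA) × clock; the 14,080-core figure corresponds to the doubled-FP32 scenario, and 7,040 cores to the non-doubled alternative.

```python
# Sketch of the FP32 TFLOPS estimate: cores x 2 ops/clock (FMA) x clock.

def fp32_tflops(cores: int, clock_mhz: float) -> float:
    """Peak single-precision throughput in TFLOPS."""
    return cores * 2 * clock_mhz * 1e6 / 1e12

# Doubled-FP32 case: 14,080 cores at a theoretical 1500 MHz.
doubled = fp32_tflops(14_080, 1500)
print(f"{doubled:.1f} TFLOPS")  # ~42.2 TFLOPS, as quoted

# Non-doubled case: clock needed for 7,040 cores to match MI100's ~23.1 TFLOPS.
target_tflops = 23.1
clock_needed = target_tflops * 1e12 / (7_040 * 2) / 1e6
print(f"{clock_needed:.0f} MHz")  # ~1641 MHz, in line with the quoted ~1650 MHz
```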

AMD’s MI200 is set to release before the end of 2021. It is a multi-chip graphics processor constructed with two active dies and up to 128 gigabytes of HBM2e memory.


Here’s What To Expect From AMD Instinct MI200 ‘CDNA 2’ GPU Accelerator

Inside the AMD Instinct MI200 is an Aldebaran GPU featuring two dies, a primary and a secondary. Each die consists of 8 Shader Engines, for a total of 16 SEs. Each Shader Engine packs 16 CUs with full-rate FP64, packed FP32, and a 2nd-generation Matrix Engine for FP16 and BF16 operations. Each die is therefore composed of 128 compute units, or 8,192 stream processors, in its full configuration. With 110 active compute units per die, the entire chip would offer 220 compute units, or 14,080 stream processors. The Aldebaran GPU is also powered by a new XGMI interconnect, and each chiplet features a VCN 2.6 engine and the main IO controller.
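The compute unit and stream processor counts above follow from simple arithmetic, sketched below under the assumption of 64 stream processors per compute unit (the usual GCN/CDNA ratio).

```python
# Worked arithmetic for the Aldebaran configuration described above.
SP_PER_CU = 64          # stream processors per CU (GCN/CDNA ratio)
SE_PER_DIE = 8          # Shader Engines per die
CU_PER_SE = 16          # compute units per Shader Engine
DIES = 2

cu_per_die_full = SE_PER_DIE * CU_PER_SE           # 128 CUs per die (full)
sp_per_die_full = cu_per_die_full * SP_PER_CU      # 8,192 SPs per die

cu_per_die_active = 110                            # per the GFX90A_110 reading
total_cu = cu_per_die_active * DIES                # 220 active CUs
total_sp = total_cu * SP_PER_CU                    # 14,080 SPs

print(cu_per_die_full, sp_per_die_full, total_cu, total_sp)
```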

The block diagram of AMD’s CDNA 2 powered Aldebaran GPU which will power the Instinct MI200 HPC accelerator has been visualized. (Image Credits: Locuza)

As for the DRAM, AMD has gone with eight 1024-bit HBM2e interfaces, for an 8192-bit-wide bus. Each interface can support 2 GB HBM2e DRAM modules, which should give up to 16 GB of memory per stack; with eight stacks in total, the total capacity would be a whopping 128 GB. That’s 48 GB more than the A100, which houses 80 GB of HBM2e memory.
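The capacity figure, and the ~2 TB/s bandwidth listed in the table below, can be reproduced with the usual HBM arithmetic. Note the 2.0 Gbps per-pin data rate here is an assumption chosen to match the rumored figure, not a confirmed spec.

```python
# Capacity and bandwidth arithmetic for the 8-stack HBM2e layout.
STACKS = 8
BITS_PER_STACK = 1024       # one 1024-bit interface per stack
CAPACITY_PER_STACK_GB = 16  # 8-Hi stacks of 2 GB modules

bus_width_bits = STACKS * BITS_PER_STACK              # 8192-bit bus
total_capacity_gb = STACKS * CAPACITY_PER_STACK_GB    # 128 GB

# Bandwidth = bus width in bytes x per-pin data rate.
# 2.0 Gbps/pin is an assumed rate that yields the rumored ~2 TB/s.
pin_rate_gbps = 2.0
bandwidth_gbs = bus_width_bits / 8 * pin_rate_gbps    # 2048 GB/s

print(bus_width_bits, total_capacity_gb, bandwidth_gbs)
```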

AMD Radeon Instinct Accelerators 2020

| Accelerator Name | AMD Radeon Instinct MI6 | AMD Radeon Instinct MI8 | AMD Radeon Instinct MI25 | AMD Radeon Instinct MI50 | AMD Radeon Instinct MI60 | AMD Instinct MI100 | AMD Instinct MI200 |
|---|---|---|---|---|---|---|---|
| GPU Architecture | Polaris 10 | Fiji XT | Vega 10 | Vega 20 | Vega 20 | Arcturus (CDNA 1) | Aldebaran (CDNA 2) |
| GPU Process Node | 14nm FinFET | 28nm | 14nm FinFET | 7nm FinFET | 7nm FinFET | 7nm FinFET | Advanced Process Node |
| GPU Cores | 2304 | 4096 | 4096 | 3840 | 4096 | 7680 | 14,080? |
| GPU Clock Speed | 1237 MHz | 1000 MHz | 1500 MHz | 1725 MHz | 1800 MHz | ~1500 MHz | TBA |
| FP16 Compute | 5.7 TFLOPs | 8.2 TFLOPs | 24.6 TFLOPs | 26.5 TFLOPs | 29.5 TFLOPs | 185 TFLOPs | TBA |
| FP32 Compute | 5.7 TFLOPs | 8.2 TFLOPs | 12.3 TFLOPs | 13.3 TFLOPs | 14.7 TFLOPs | 23.1 TFLOPs | TBA |
| FP64 Compute | 384 GFLOPs | 512 GFLOPs | 768 GFLOPs | 6.6 TFLOPs | 7.4 TFLOPs | 11.5 TFLOPs | TBA |
| VRAM | 16 GB GDDR5 | 4 GB HBM1 | 16 GB HBM2 | 16 GB HBM2 | 32 GB HBM2 | 32 GB HBM2 | 64/128 GB HBM2e? |
| Memory Clock | 1750 MHz | 500 MHz | 945 MHz | 1000 MHz | 1000 MHz | 1200 MHz | TBA |
| Memory Bus | 256-bit | 4096-bit | 2048-bit | 4096-bit | 4096-bit | 4096-bit | 8192-bit |
| Memory Bandwidth | 224 GB/s | 512 GB/s | 484 GB/s | 1 TB/s | 1 TB/s | 1.23 TB/s | ~2 TB/s? |
| Form Factor | Single Slot, Full Length | Dual Slot, Half Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length / OAM |
| Cooling | Passive | Passive | Passive | Passive | Passive | Passive | Passive |
| TDP | 150W | 175W | 300W | 300W | 300W | 300W | TBA |

Source: VideoCardz, ROCm Github, Coelacanth’s Dream
