AMD recently introduced the MI300A, its latest accelerated processing unit (APU), whose CPU cores employ the Zen 4 architecture. Positioned to rival Nvidia, the APU caters to surging demand for components that power AI workloads.
Targeting data centers, AI, and high-performance computing (HPC), the MI300A pairs 24 multithreaded Zen 4 CPU cores with a formidable GPU built from 228 CDNA 3 compute units.
Distinguishing itself from earlier generations, this third-generation APU incorporates a unified 128GB pool of high-bandwidth memory, distributed across eight stacks of HBM3. Where previous designs gave the CPU and GPU dedicated memory, both processors now share a single pool, a notable design shift that removes costly data copies between them.
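A quick back-of-the-envelope check makes the memory layout concrete. The figures below come straight from the capacities above; the per-stack split is simple division:

```python
# Unified HBM3 pool on the MI300A, per the figures above:
# 128 GB spread evenly across 8 stacks.
total_hbm_gb = 128
num_stacks = 8

per_stack_gb = total_hbm_gb // num_stacks
print(f"Each HBM3 stack contributes {per_stack_gb} GB")  # 16 GB per stack
```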
The packaging combines 13 chiplets in a 3.5D configuration, making it AMD’s largest chip to date with an impressive 153 billion transistors. Adding to its prowess, the APU incorporates a central 256MB Infinity Cache that optimizes bandwidth and latency for data flowing between the chiplets.
AMD’s MI300 Series and Instinct MI300A Accelerator
AMD’s latest chip, coupled with the MI300X AI accelerator, challenges Nvidia’s dominance. The MI300 series, equipped with HBM3 memory and CDNA 3 GPU chiplets, signals a transformative era.
In the competitive arena against Nvidia’s H100 GPU and GH200 chip, AMD’s Instinct MI300A aims to redefine chip capabilities. AMD’s test results show double the theoretical peak HPC performance of the H100 SXM, and four times the performance in specific workloads. The company also claims twice the peak performance per watt of the GH200, while nearing or matching Nvidia’s H100 in AI performance.
Supported by key partners such as HPE, Eviden, Gigabyte, and Supermicro, AMD’s venture enjoys industry-wide backing. Notably, the MI300A’s integration into the El Capitan supercomputer is poised to mark a milestone: upon activation next year, it is expected to be the world’s first two-exaflop supercomputer.
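For scale, “two exaflops” can be unpacked with simple arithmetic. The 2 EFLOPS figure is the one quoted above; everything else is unit conversion, and the 1 GFLOPS comparison machine is an illustrative assumption:

```python
# One exaflop = 1e18 floating-point operations per second.
target_exaflops = 2
ops_per_second = target_exaflops * 10**18

# Compare against a hypothetical 1 GFLOPS (1e9 ops/s) machine running
# nonstop for a 365-day year.
gigaflop_machine_ops_per_year = 10**9 * 60 * 60 * 24 * 365
years_equivalent = ops_per_second / gigaflop_machine_ops_per_year
print(f"{ops_per_second:.1e} ops/s ≈ {years_equivalent:.0f} years of a 1 GFLOPS machine")
```

In other words, one second of a two-exaflop system matches roughly 63 years of continuous work from a 1 GFLOPS machine.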