AMD Helios finally gets commercial support from HPE

  • HPE will deliver 72-GPU racks globally, equipped with next-generation AMD Instinct accelerators
  • Venice processors paired with the GPUs target exascale AI performance per rack
  • For thermal management, Helios relies on liquid cooling and a double-wide chassis

HPE has announced plans to integrate AMD’s rack-scale Helios AI architecture into its product line starting in 2026.

This partnership gives Helios its first major OEM partner and enables HPE to offer full 72-GPU AI racks based on AMD’s next-generation Instinct MI455X accelerators.

These racks are paired with EPYC Venice processors and use a scalable, Ethernet-based design developed in collaboration with Broadcom.

Rack layout and performance targets

The move creates a clear commercial path for Helios and puts the architecture in direct competition with Nvidia’s rack platforms already in use.

Helios’ reference design is based on Meta’s Open Rack Wide standard.

It uses a double-width liquid-cooled chassis to house the MI450 series GPUs, Venice processors and Pensando networking hardware.

With the MI455X generation, AMD is targeting up to 2.9 exaFLOPS of FP4 compute per rack, along with 31 TB of HBM4 memory.
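To put those rack-level targets in perspective, a back-of-envelope division across the 72 accelerators gives a rough per-GPU share. The sketch below is illustrative only; the per-GPU values are derived here and are not AMD specifications.

```python
# Back-of-envelope split of AMD's published per-rack targets across 72 GPUs.
# The 2.9 exaFLOPS (FP4) and 31 TB HBM4 figures come from the article;
# dividing them per GPU is an illustrative estimate, not an AMD specification.

RACK_FP4_EXAFLOPS = 2.9   # peak FP4 compute per Helios rack
RACK_HBM4_TB = 31         # total HBM4 capacity per rack
GPUS_PER_RACK = 72        # MI455X accelerators per rack

fp4_per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK   # exa -> peta
hbm4_per_gpu_gb = RACK_HBM4_TB * 1000 / GPUS_PER_RACK           # TB -> GB (decimal)

print(f"FP4 per GPU:  ~{fp4_per_gpu_pflops:.0f} PFLOPS")   # ~40 PFLOPS
print(f"HBM4 per GPU: ~{hbm4_per_gpu_gb:.0f} GB")          # ~431 GB
```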

The system presents the rack’s GPUs as a single unified domain, allowing workloads to be distributed across all accelerators without creating local bottlenecks.

A custom-designed HPE Juniper switch that supports Ultra Accelerator Link over Ethernet provides a high-bandwidth GPU connection.

It offers an alternative to Nvidia’s NVLink-focused approach.

The High-Performance Computing Center Stuttgart (HLRS) has chosen HPE’s Cray GX5000 platform for its upcoming flagship system, Herder.

Herder will use MI430X GPUs and Venice CPUs in direct liquid-cooled blades, replacing the current Hunter system in 2027.

HPE said waste heat from the GX5000 installation will be used to heat campus buildings, reflecting both performance goals and environmental considerations.

AMD and HPE plan to make Helios-based systems available globally next year, expanding access to rack-scale AI hardware for research institutions and enterprises.

Helios uses an Ethernet framework to connect GPUs and processors, unlike Nvidia’s NVLink approach.

The use of Ultra Accelerator Link over Ethernet and Ultra Ethernet Consortium-based hardware supports a scalable design built on open standards.

While this approach theoretically enables a GPU count comparable to other high-end AI racks, performance under sustained multi-node workloads remains untested.

Relying on a single Ethernet layer could also introduce latency or bandwidth limitations in real-world workloads.
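As a rough illustration of why fabric bandwidth matters at this scale, the sketch below estimates the time for a ring all-reduce across 72 GPUs. The 200 GB/s per-GPU bandwidth and 100 GB payload are assumed values chosen for the example, not Helios or UALink-over-Ethernet figures.

```python
# Illustrative only: a rough ring all-reduce time estimate showing why
# per-GPU fabric bandwidth matters at rack scale. The 200 GB/s per-GPU
# bandwidth and 100 GB payload are assumptions for this example, not
# Helios or UALink-over-Ethernet specifications.

def ring_allreduce_seconds(payload_gb: float, n_gpus: int, gb_per_s_per_gpu: float) -> float:
    """A classic ring all-reduce moves roughly 2*(n-1)/n of the payload per GPU."""
    gb_moved_per_gpu = 2 * (n_gpus - 1) / n_gpus * payload_gb
    return gb_moved_per_gpu / gb_per_s_per_gpu

# Example: 100 GB of gradients across 72 GPUs at an assumed 200 GB/s per GPU.
t = ring_allreduce_seconds(payload_gb=100, n_gpus=72, gb_per_s_per_gpu=200)
print(f"~{t:.2f} s per all-reduce")  # ~0.99 s under these assumptions
```

Under those assumed numbers, each collective takes close to a second, which is why sustained fabric behavior rather than peak figures will determine real training throughput.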

More broadly, raw specifications do not predict actual performance, which depends on cooling efficiency, network traffic management and software optimization.
