- Meta’s ORW design sets a new course for data center openness
 - AMD pushes open silicon to the rack, though industry neutrality remains uncertain
 - Liquid cooling and Ethernet architecture highlight the system’s serviceability focus
 
At the recent Open Compute Project (OCP) 2025 Global Summit in San Jose, AMD launched its “Helios” rack-scale platform, built on Meta’s recently introduced Open Rack Wide (ORW) standard.
The design was described as a double-width open frame intended to improve the energy efficiency, cooling, and serviceability of artificial intelligence systems.
AMD positions “Helios” as a critical step toward open, interoperable data center infrastructure, but it remains to be seen how far this openness translates into practical industry-wide adoption.
Meta’s role in shaping the new rack design
Meta contributed the ORW specification to the OCP community and described it as the foundation for large-scale AI data centers.
The new form factor was developed to address the growing demand for standardized hardware architectures.
AMD’s “Helios” appears to serve as a test case for this concept, combining Meta’s open rack principles with AMD’s own hardware.
This collaboration signals a move away from proprietary systems, though dependence on major players like Meta raises questions about the neutrality and accessibility of the standard.
The “Helios” system is powered by AMD Instinct MI450 GPUs based on the CDNA architecture, along with EPYC CPUs and Pensando networking.
Each MI450 reportedly offers up to 432 GB of high-bandwidth memory and 19.6 TB/s of memory bandwidth, providing capacity for data-intensive AI workloads.
At full scale, a “Helios” rack equipped with 72 of these GPUs is expected to reach 1.4 exaFLOPS at FP8 and 2.9 exaFLOPS at FP4 precision, backed by 31 TB of HBM4 memory and 1.4 PB/s of total bandwidth.
It also provides up to 260 TB/s of internal interconnect performance and 43 TB/s of Ethernet-based scale-out capability.
AMD estimates up to 17.9 times the performance of its previous generation and roughly 50% more memory capacity and bandwidth than Nvidia’s Vera Rubin system.
AMD claims this represents a leap in AI training and inference capability, but these are engineering estimates, not field results.
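The rack-level figures follow directly from the per-GPU numbers. As a quick sanity check (using only the vendor-claimed specs quoted above, not measured results), multiplying the per-MI450 memory and bandwidth by the 72 GPUs in a rack reproduces the quoted totals:

```python
# Sanity check: derive the quoted rack-level totals from per-GPU claims.
# All inputs are AMD's announced figures, not independent measurements.

GPUS_PER_RACK = 72
HBM_PER_GPU_GB = 432    # claimed HBM4 capacity per MI450
BW_PER_GPU_TBS = 19.6   # claimed memory bandwidth per MI450, TB/s

rack_hbm_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000   # GB -> TB
rack_bw_pbs = GPUS_PER_RACK * BW_PER_GPU_TBS / 1000   # TB/s -> PB/s

print(f"Rack HBM capacity: {rack_hbm_tb:.1f} TB")      # ~31.1 TB, matches the quoted 31 TB
print(f"Rack memory bandwidth: {rack_bw_pbs:.2f} PB/s") # ~1.41 PB/s, matches the quoted 1.4 PB/s
```

Both results round to the rack-level numbers AMD quotes, which suggests those totals are simple aggregates of the per-GPU spec rather than separately measured system figures.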
The open rack design incorporates the OCP DC-MHS, UALink, and Ultra Ethernet Consortium frameworks, allowing for both scalable and expandable deployment.
It also includes liquid cooling and standards-based Ethernet for added reliability.
The “Helios” design extends the company’s push for openness from the chip level to the rack level, and Oracle’s initial commitment to deploy 50,000 AMD GPUs suggests commercial interest.
However, broad ecosystem support will determine whether “Helios” becomes a shared industry standard or remains another branded interpretation of open infrastructure.
As AMD targets 2026 for volume launch, the extent to which competitors and partners adopt ORW will reveal whether openness in AI hardware can move beyond concept to genuine practice.
“Open collaboration is key to scaling AI efficiently,” said Forrest Norrod, executive vice president and general manager of AMD’s Data Center Solutions Group.
“With ‘Helios’, we’re turning open standards into real, deployable systems, combining AMD Instinct GPUs, EPYC CPUs and open frameworks to give the industry a flexible, high-performance platform designed for the next generation of AI workloads.”
 
