
The news site’s bare-metal server has rewritten the rules for extreme Pi computation

  • StorageReview’s bare-metal server computed 314 trillion digits of pi without distributed cloud infrastructure
  • The entire calculation ran continuously for 110 days
  • Power consumption was significantly lower than in previous cluster-based Pi records

The calculation of 314 trillion digits of pi on a single on-premises system set a new benchmark for large-scale numerical computation.

The run was conducted by StorageReview and surpasses previous cloud-based efforts, including Google Cloud’s 100-trillion-digit record set in 2022.

Unlike hyperscale approaches that relied on massively distributed resources, this record was achieved on a single physical server with a tightly controlled hardware and software configuration.

Runtime and system stability

The calculation ran continuously for 110 days, significantly shorter than the approximately 225 days required for the previous large-scale record, even though the earlier attempt produced fewer digits.

The uninterrupted run was attributed to operating system stability and minimal background activity.

It also depended on a balanced NUMA topology and careful tuning of memory and storage, tailored to the behavior of the y-cruncher application.
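A balanced layout of that kind is typically verified before committing to a months-long run. As a minimal sketch, assuming a Linux host with the standard /sys/devices/system/node interface (this snippet is illustrative, not part of the record setup), the following lists each NUMA node's CPUs and local memory; on a balanced dual-socket system, every node reports the same core count and memory share:

```python
from pathlib import Path

# Linux exposes each NUMA node as /sys/devices/system/node/nodeN.
NODE_DIR = Path("/sys/devices/system/node")

nodes = sorted(NODE_DIR.glob("node[0-9]*"), key=lambda p: int(p.name[4:]))
for node in nodes:
    cpus = (node / "cpulist").read_text().strip()
    # meminfo lines look like: "Node 0 MemTotal:  792123456 kB"
    mem_kb = next(
        int(line.split()[3])
        for line in (node / "meminfo").read_text().splitlines()
        if "MemTotal" in line
    )
    print(f"{node.name}: CPUs {cpus}, {mem_kb / 1024 ** 2:.1f} GiB local memory")
```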

The workload was treated less as a demonstration and more as a comprehensive stress test of production-ready systems.

At the heart of the work was a Dell PowerEdge R7725 system equipped with two AMD EPYC 9965 processors, which provide 384 CPU cores and 1.5 TB of DDR5 memory.

Storage consisted of forty 61.44 TB Micron 6550 ION NVMe drives, good for roughly 2.46 PB of raw capacity.

Thirty-four of these drives were assigned to the y-cruncher workload as swap space in a JBOD configuration, while the remaining six formed a software RAID volume to protect the final output.

This setup prioritized performance and energy efficiency over full data resiliency during the computation.
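The capacity split is easy to reconstruct from the figures above. The following worked example is only a sketch of that arithmetic, using the drive size and counts quoted in this article:

```python
# Reconstructing the storage split from the quoted figures.
DRIVE_TB = 61.44           # capacity of one Micron 6550 ION drive, in TB
TOTAL_DRIVES = 40
SWAP_DRIVES = 34           # JBOD tier assigned to y-cruncher

raw_total_tb = DRIVE_TB * TOTAL_DRIVES             # 2457.6 TB ~ 2.46 PB raw
swap_tb = DRIVE_TB * SWAP_DRIVES                   # 2089.0 TB ~ 2.09 PB of swap
raid_tb = DRIVE_TB * (TOTAL_DRIVES - SWAP_DRIVES)  # 368.6 TB for the RAID volume

print(f"raw total: {raw_total_tb / 1000:.2f} PB")
print(f"swap tier: {swap_tb / 1000:.2f} PB")
print(f"RAID tier: {raid_tb / 1000:.2f} PB (before parity/mirroring overhead)")
```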

The digit computation generated significant disk activity during the run, including approximately 132 PB of logical reads and 112 PB of logical writes.

The maximum logical disk usage reached approximately 1.43 PiB, while the largest checkpoint exceeded 774 TiB.

SSD wear measurements showed about 7.3 PB written per drive, for a total of about 249 PB across the swap drives.
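Cross-checking those wear figures against the logical write total also gives a rough sense of the physical-to-logical write ratio. Again, this is just a sketch over the reported numbers:

```python
PER_DRIVE_PB = 7.3        # reported NAND writes per swap drive
SWAP_DRIVES = 34
LOGICAL_WRITES_PB = 112   # reported logical writes for the whole run

total_nand_pb = PER_DRIVE_PB * SWAP_DRIVES    # 248.2 PB, matching the ~249 PB reported
ratio = total_nand_pb / LOGICAL_WRITES_PB     # rough physical-to-logical write ratio

print(f"total NAND writes: {total_nand_pb:.0f} PB")
print(f"physical-to-logical write ratio: {ratio:.1f}x")
```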

Internal benchmarks showed that sequential read and write performance more than doubled compared to the platform used for the previous 202-trillion-digit record.

The configuration drew about 1,600 watts, for a total energy consumption of about 4,305 kWh, or 13.70 kWh per trillion digits.

That figure is well below estimates for the previous cluster-based 300-trillion-digit record, which would have consumed more than 33,000 kWh.
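The efficiency claim follows directly from these figures, as the short calculation below shows; it uses only the numbers quoted in this article:

```python
POWER_W = 1600           # reported sustained draw of the server
RUN_DAYS = 110
DIGITS_T = 314           # digits computed, in trillions
REPORTED_KWH = 4305      # total energy reported for the run

CLUSTER_KWH = 33_000     # estimated consumption of the cluster-based record
CLUSTER_DIGITS_T = 300

energy_kwh = POWER_W * RUN_DAYS * 24 / 1000            # ~4224 kWh, close to the reported total
per_trillion = REPORTED_KWH / DIGITS_T                 # ~13.7 kWh per trillion digits
cluster_per_trillion = CLUSTER_KWH / CLUSTER_DIGITS_T  # 110 kWh per trillion digits

print(f"energy from power draw: {energy_kwh:.0f} kWh")
print(f"this run: {per_trillion:.2f} kWh per trillion digits")
print(f"cluster estimate: {cluster_per_trillion:.0f} kWh per trillion digits")
```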

The results suggest that carefully optimized bare-metal servers, tuned for specific workloads, can surpass the efficiency of cloud infrastructure.

However, this assessment applies strictly to this class of calculation and does not automatically extend to all scientific or commercial use cases.
