Multi-gigawatt AI campuses consume billions of dollars annually

Arvind Krishna
  • Equipping a single 1-gigawatt AI site will cost nearly $80 billion.
  • Planned AI capacity across the sector could reach roughly 100 GW.
  • High-end GPU hardware must be replaced outright about every five years, not merely upgraded.

IBM CEO Arvind Krishna questions whether the current pace and scale of AI-driven data center expansion is financially sustainable under today's assumptions.

He estimates that it now costs about $80 billion to equip a site with 1 GW of AI hardware.

With public and private plans calling for nearly 100 GW of future training capacity, the implied capital exposure is roughly $8 trillion.
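The headline figures reduce to a single multiplication. A minimal sketch using the article's numbers (the function name is illustrative):

```python
# Headline economics cited in the article:
COST_PER_GW_USD = 80e9       # ~$80 billion to equip 1 GW of AI hardware
PLANNED_CAPACITY_GW = 100    # ~100 GW of planned sector-wide capacity

def implied_capital_exposure(cost_per_gw: float, capacity_gw: float) -> float:
    """Total capital needed to build out the planned capacity."""
    return cost_per_gw * capacity_gw

total = implied_capital_exposure(COST_PER_GW_USD, PLANNED_CAPACITY_GW)
print(f"Implied exposure: ${total / 1e12:.0f} trillion")  # → $8 trillion
```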

Economic impact of next-generation AI data centers

Krishna connects this trajectory directly to the upgrade cycle that powers today’s accelerator fleets.

Most of the high-end GPU hardware used in these centers loses its value after about five years.

At the end of this period, operators do not upgrade the equipment but replace it entirely. The result is not a one-time cost but a recurring capital expense that compounds over time.
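Under the article's roughly five-year replacement cycle, the build-out cost becomes a recurring expense. A rough sketch of the annualized figure (the straight-line spread is my simplifying assumption; the dollar and cycle figures are from the article):

```python
COST_PER_GW_USD = 80e9   # ~$80B to equip 1 GW of AI hardware (from the article)
REFRESH_YEARS = 5        # hardware is replaced, not upgraded, ~every 5 years

def annual_refresh_cost(cost_per_gw: float, refresh_years: int, capacity_gw: float) -> float:
    """Recurring annual capex if the fleet is replaced on a fixed cycle,
    spread evenly (straight-line) over that cycle."""
    return cost_per_gw * capacity_gw / refresh_years

# A single 1 GW site implies roughly $16B per year in recurring hardware spend.
print(f"${annual_refresh_cost(COST_PER_GW_USD, REFRESH_YEARS, 1) / 1e9:.0f}B per GW per year")
```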

CPU resources are also part of these deployments, but are no longer the focus of spending decisions.

The balance has shifted to specialized accelerators that handle massively parallel workloads at speeds general-purpose processors cannot match.

This shift has significantly changed the definition of scalability for modern AI deployments and increased capital requirements beyond what traditional enterprise data centers ever needed.

Krishna argues that depreciation is the most misunderstood factor among market participants.

The pace of architectural change means performance leaps are happening faster than financial depreciation can easily absorb them.

Equipment that is still functional becomes financially obsolete long before the end of its physical life.

Investors such as Michael Burry have expressed similar doubts about the cloud giants' practice of extending the accounting lifetimes of their assets even as model sizes and training demands grow.

Economically, the burden no longer lies in energy consumption or land acquisition, but in the forced replacement of increasingly expensive hardware.

Similar refresh dynamics already exist in desktop computing, but the scale is fundamentally different at these large sites.

Krishna estimates that covering the capital costs of these multi-gigawatt campuses would require hundreds of billions of dollars in annual profits just to break even.

This requirement is based on current hardware economics and not on long-term speculative efficiency gains.

These predictions come at a time when major tech companies are announcing increasingly large AI campuses, ranging from megawatts to tens of gigawatts.

Some of these proposals already compete with entire countries’ electricity needs, raising concerns about grid capacity and long-term energy prices.

Krishna suspects that without a fundamental shift in how models integrate knowledge, today's LLMs are highly unlikely to achieve general intelligence, even on next-generation hardware.

That assessment implies the current wave of investment is driven by competitive pressure rather than by confirmed technological inevitability.

One interpretation is hard to avoid: the expansion assumes that future revenue will match record spending.

It is happening even as refresh cycles shorten and power constraints emerge in several regions.

The risk is that these expectations outrun the financial mechanisms required to sustain them over the life cycle of the assets.
