- As cloud providers compete for capacity, Meta is exploring new hardware approaches.
- Google is positioning TPU as a reliable option for large-scale deployments.
- Data center operators are facing rising costs across various categories of hardware components.
Meta is in talks to adopt Google’s custom AI hardware for its future infrastructure, according to people familiar with the matter.
Negotiations are focused on leasing tensor processing units (TPUs) from Google Cloud in 2026, with a transition to direct purchase expected in 2027.
This would be a turning point for both companies: until now, Google has built TPUs primarily for internal use, while Meta has relied on processors and GPUs from a range of manufacturers.
Beyond its strong interest in Google’s TPUs, Meta is also considering other hardware options, notably RISC-V-based Revos processors, reflecting its desire to diversify its computing base.
The prospect of a billion-dollar deal drew an immediate market reaction: Alphabet’s market value climbed to nearly $4 trillion, and Meta’s share price also rose after the news broke.
Nvidia’s share price fell several percent as investors weighed the long-term impact of major cloud players shifting to other architectures.
Some Google Cloud executives estimate that such a deal could give Google a significant share of Nvidia’s data center revenues, which currently exceed $50 billion per quarter.
The huge demand for artificial intelligence devices is creating fierce competition for supply and raising questions about how new hardware alliances will affect the sustainability of the industry.
Even if the deal closes as planned, its near-term market impact will be limited by existing production capacity and a tight rollout schedule.
Data center operators are reporting continued shortages of GPUs and memory modules, and prices are expected to rise next year.
Current trends suggest that the rapid expansion of AI infrastructure will continue to strain supply chains and drive up component demand as companies seek to lock in long-term equipment contracts.
These factors create uncertainty regarding the actual impact of the contract, as general supply constraints may limit production regardless of financial investment.
Analysts warn that the future performance of these competing architectures remains uncertain.
Google maintains an annual TPU release cycle, and Nvidia ships new GPU generations at a similar cadence.
The competitive landscape is therefore likely to keep shifting until the first large-scale hardware deployments for Meta come online.
It also remains to be seen whether alternative architectures can deliver greater operational value than today’s GPUs.
As the use of artificial intelligence continues to evolve rapidly, demand for particular components may shift suddenly. This is why companies keep diversifying their computing strategies and testing multiple architectures.
