
Asked on Thursday about projections that capex will approach the $1 trillion mark in 2026, he said the pace is somewhat surprising. “Last year, I thought it would take at least three years to get to that trillion-dollar mark,” he said. “It seems increases are supported by the result of larger models needed for training infrastructure, and in turn, you need inference as well. You also need a supporting infrastructure in storage, networking, power, and cooling.”
AI, he said, has become “the tide that lifts all boats, meaning that in addition to the core accelerated compute, AI also positively impacts complementary infrastructure, such as storage, networking, and physical infrastructure.”
Fung added that whether the projected spending materializes will depend on whether this growth is sustainable. “It seems like the large hyperscalers have a lot of weight in optimizing cash flow and cost structures,” he pointed out. “They’re trying to get as creative as possible, generally moving towards a more vertically integrated stack with their own custom networking and external financing, which would help [create] more sustainable deployments and operations.”
Enterprises weighing expansions of their own infrastructure can learn from this growth. In a recent article on hyperscaler spending, Greyhound Research chief analyst Sanchit Vir Gogia said capex levels can help pinpoint where the hyperscalers expect bottlenecks, which is useful information for enterprises planning their own cloud strategies across multiple geographies.
These and other factors can help enterprises plan their own execution timelines, he said.
This article originally appeared on CIO.com.
