
Alexander Harrowell, principal analyst for advanced computing at Omdia, said AMD's approach parallels Nvidia's, which still serves the market with air-cooled GPUs and traditional servers through OEM partners, alongside its rack-scale platforms.
Enterprise buying implications
For IT leaders deciding on their next AI investment, these developments suggest a shift in the market.
Analysts note that while Nvidia remains the dominant player, buying criteria are becoming more pragmatic. The focus is shifting beyond peak performance toward practical considerations such as reliable supply, predictable pricing, and easier integration into existing data center environments.
“AMD is positioning itself as a reliable second source at a time when Nvidia faces supply constraints and very high prices,” said Pareekh Jain, CEO at Pareekh Consulting. “AMD chips are typically 20 to 30 percent cheaper, which matters for enterprise buyers. Enterprises are increasingly cautious about putting too much money into today’s AI hardware when depreciation cycles are getting shorter.”
That caution is also shaping where enterprises deploy AI infrastructure, with on-premises environments emerging as a key focus for AMD’s latest offerings.
“MI440X appears positioned as a time-to-value option for enterprises dealing with regulated data, data residency mandates and latency-sensitive inference, where keeping workloads on-prem is a business requirement rather than a technology choice,” said Rachita Rao, senior analyst at Everest Group. “That said, the chip’s dependence on HBM introduces constraints around latency and networking, which could limit performance consistency as deployments scale.”
