
AMD recently held its first financial analyst day, where CEO Lisa Su said the vendor now sees a total addressable AI market of more than $1 trillion by 2030, double last year’s stated target of $500 billion by 2028.
Su told analysts this week that the company is seeing insatiable AI demand, and added that revenue growth could climb to 35% per year over the next three to five years as a result.
In addition, Su said she expects AMD’s data center revenue to increase 60% over the next three to five years, up from $16 billion in 2025, and that she sees the total addressable market for AI data centers reaching $1 trillion over that same period. That number includes all silicon, from GPUs and CPUs to networking equipment.
AMD has scored some big data center customers recently, including a 6-gigawatt deal with OpenAI and a plan to provide Oracle with 50,000 chips. It has also secured agreements to build two more high-performance supercomputers at Oak Ridge National Laboratory.
Focusing on the data center, Dan McNamara, senior vice president of the server CPU business, said that three years ago AMD set out a “bold vision” for its Epyc server CPUs, aiming to disrupt the market with advanced architecture, packaging, and process technologies. Since then, AMD has launched two new generations of Epyc products, expanded its customer and partner ecosystem, and maintained a focus on execution.
These efforts have paid off: AMD now claims the top spot in server CPU market share, hovering around 40%. He didn’t mention that it helps when your chief competitor is self-destructing. McNamara said that while AMD has enjoyed success in HPC, much of that success translates to the enterprise.
“There are very beefy workloads that you must have that performance for to run the enterprise,” he said. “The Fortune 500 mainstream enterprise customers are now … adopting Epyc faster than anyone. We’ve seen a 3x adoption this year. And what that does is drives back to the on-prem enterprise adoption, so that the hybrid multi-cloud is end-to-end on Epyc.”
One of the key focus areas for AMD’s Epyc strategy has been its ecosystem build-out. The company has almost 180 platforms, from racks to blades to towers to edge devices, and 3,000 solutions in the market on top of those platforms.
One of the areas where AMD pushes into the enterprise is what it calls industry or vertical workloads. “These are the workloads that drive the end business. So in semiconductors, that’s telco, it’s the network, and the goal there is to accelerate those workloads and either driving more throughput or drive faster time to market or faster time to results. And we almost double our competition in terms of faster time to results,” said McNamara.
And it’s paying off. McNamara noted that over 60% of the Fortune 100 are using AMD, and that number is growing quarterly. “We track that very, very closely,” he said. The other question is whether AMD is winning new customers, ones adopting Epyc for the first time. “We’ve doubled that year on year.”
AMD didn’t just brag; it laid out a road map for the next two years, and 2026 is going to be a very busy year. That is when new CPUs, both client and server, built on the Zen 6 architecture will begin to appear. On the server side, that means the Venice generation of Epyc server processors.
Zen 6 processors will be built on a 2-nanometer process from (you guessed it) TSMC. Zen 6 CPUs are expected to be socket-compatible with existing AM5 motherboards, ensuring backward compatibility for desktop users, though it’s not yet clear whether the server parts will be similarly backward-compatible.
Zen 6 is expected to use advanced packaging technologies, such as fan-out interconnects and a new Infinity Fabric interconnect method, which could improve throughput. It will also bring improved instructions per cycle (IPC) for higher performance across both desktop and server platforms.
The architecture will feature expanded AI capabilities, building on the AI features introduced in previous generations, though details of that plan are still sketchy.
AMD also took the time to detail its Instinct GPU accelerator plans. It announced the Instinct MI400 series, based on the CDNA 5 architecture. Scheduled for release in 2026, the MI400 aims to double the compute performance of the MI350, offering 20 PFLOPs of FP8 compute and 432 GB of HBM4 memory, up from 288 GB of HBM3 in the previous generation. Memory bandwidth jumps from 8 TB/s to 19.6 TB/s.
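To put those numbers in perspective, the stated MI350-to-MI400 gains can be sanity-checked with a bit of arithmetic (a sketch based only on the figures above; the MI350’s 10 PFLOPs FP8 number is inferred from the “double the compute” claim rather than stated directly):

```python
# Gen-over-gen ratios from the figures quoted above.
# MI350 FP8 throughput is inferred from the "double" claim, not official.
mi350 = {"fp8_pflops": 10.0, "hbm_gb": 288, "bw_tbs": 8.0}   # HBM3
mi400 = {"fp8_pflops": 20.0, "hbm_gb": 432, "bw_tbs": 19.6}  # HBM4

for key, label in [("fp8_pflops", "FP8 compute"),
                   ("hbm_gb", "memory capacity"),
                   ("bw_tbs", "memory bandwidth")]:
    print(f"{label}: {mi400[key] / mi350[key]:.2f}x")
```

That works out to roughly 2x compute, 1.5x memory capacity, and about 2.45x bandwidth generation over generation.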
There will be two variants of the MI400 series to start. The Instinct MI455X is designed for large-scale AI training and cloud deployment, while the MI430X is aimed at high-performance computing and government-focused AI initiatives. The MI430X integrates native FP64 processing units and hybrid CPU+GPU support.
And if that’s not enough, AMD announced that the Instinct MI500 series is already in advanced design and is expected to launch in 2027. Little else is known about the new design.
