
These architectural improvements underpin Cobalt 200’s claimed performance gains, which, according to Stephen Sopko, an analyst at HyperFRAME Research, will reduce total cost of ownership (TCO) compared with its predecessor. As a result, enterprise customers stand to benefit from consolidating workloads onto fewer machines.
“For example, a 1k-instance cluster can see up to 30-40% TCO gains,” Sopko said, adding that this also helps enterprises free up resources to allocate to other workloads or projects.
Moor Insights & Strategy principal analyst Matt Kimball noted that the claimed improvements in throughput-per-watt could benefit compute-intensive workloads such as AI inferencing, microservices, and large-scale data processing.
Some of Microsoft’s customers are already using Cobalt 100 virtual machines (VMs) for large-scale data processing workloads, and the chips are deployed across 32 Azure data centers, the company said.
With Cobalt 200, the company will directly compete with AWS’s Graviton series and Google’s recently announced Axion processors, both of which leverage Arm architecture to deliver better price-performance for cloud workloads.
Microsoft and other hyperscalers have been driven to design their own data center chips by the skyrocketing cost of AI and cloud infrastructure, supply constraints around GPUs, and the need for energy-efficient yet customizable architectures to optimize workloads.
