Microsoft rolls out next-generation AI chips, targets Nvidia’s software edge

Microsoft unveils its Maia 200 AI accelerator, promising triple the performance of rival chips and lower costs for Azure as it joins Google and Amazon in challenging Nvidia’s dominance

Microsoft on Monday unveiled the second generation of its in-house artificial intelligence chip, along with a suite of software tools aimed at eroding one of Nvidia’s biggest competitive advantages: its developer ecosystem.
The new Maia 200 chip is already running workloads at a Microsoft data center near Des Moines, Iowa, with a second deployment planned for a facility near Phoenix, Arizona. It is the successor to the Maia 100 chip Microsoft first introduced in late 2023.
Microsoft's Maia 200 AI chip
(Photo: Microsoft)
The launch comes as major cloud providers, including Microsoft, Google and Amazon Web Services, increasingly design their own AI chips rather than relying solely on Nvidia, even as they remain among its largest customers. The move reflects growing concern over the cost of running large-scale AI systems and the desire for tighter control over performance and pricing.
Microsoft said Maia 200 is its most powerful first-party silicon to date. Built by Taiwan Semiconductor Manufacturing Co. using a 3-nanometer process, the chip contains more than 100 billion transistors and is designed primarily for inference, the process of running AI models after training.
According to Microsoft, Maia 200 delivers three times the performance of Amazon’s latest Trainium chip on certain benchmarks and outperforms Google’s most recent tensor processing unit on others. The company said it also offers about 30 percent better performance per dollar than the latest hardware currently deployed across its data centers.
Scott Guthrie, executive vice president of Microsoft’s cloud and AI group, said the chip can “effortlessly run today’s largest models, with plenty of headroom for even bigger models in the future.”
Maia 200 is already powering OpenAI’s GPT-5.2 models, Microsoft 365 Copilot and internal projects from Microsoft’s Superintelligence team. Microsoft said the chip uses high-bandwidth memory, though from an older generation than Nvidia’s forthcoming flagship processors.
Microsoft CEO Satya Nadella
(Photo: Fabrice Coffrini/ AFP)
Alongside the hardware, Microsoft is rolling out new software tools to make the chip easier to program. Chief among them is support for Triton, an open-source GPU programming language and compiler developed largely by OpenAI, which fills a role similar to that of Nvidia's CUDA platform, widely seen by analysts as Nvidia's strongest moat.
Microsoft also announced a software development kit that will allow external developers, startups and academic researchers to optimize their models for Maia 200. An early preview of the SDK opens Monday.
The push mirrors similar efforts by rivals. Google has spent nearly a decade refining its TPU chips, while Amazon’s Trainium line is now in its third generation, with a fourth already announced. Google has also attracted interest from major Nvidia customers, including Meta Platforms, by narrowing software gaps between its chips and Nvidia’s offerings.
Microsoft acknowledged it entered the custom silicon race later than some competitors, but argued that its close integration of chips, AI models and applications such as Copilot gives it an advantage as demand for AI inference continues to surge.
As AI workloads scale to millions of users, Microsoft and its peers are betting that custom-designed chips tailored to their own platforms will ultimately be cheaper and more efficient than relying exclusively on Nvidia’s increasingly expensive hardware.