WikiBit 2026-01-27 21:52
TLDR: Microsoft unveiled Maia 200, its second-generation AI chip, built on TSMC's 3nm process for inference tasks. The chip features 216GB of HBM3e memory.
Microsoft Corporation, MSFT
TSMC manufactured Maia 200 using its 3nm process technology. The chip packs 216GB of HBM3e memory and 272MB of on-chip SRAM within a 750W thermal envelope.
Performance numbers show the chip can handle over 10 petaFLOPS at 4-bit precision. At 8-bit precision, it delivers more than 5 petaFLOPS of computing power.
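The 2x gap between the 4-bit and 8-bit figures follows the usual rule of thumb for fixed-width datapaths: halving operand precision roughly doubles peak throughput. A minimal sketch of that arithmetic (the petaFLOPS figures come from the article; the linear scaling rule is an assumption, not a Microsoft spec):

```python
# Rule-of-thumb scaling: on the same datapath, peak throughput is roughly
# inversely proportional to operand bit width. Illustrative only.
def peak_pflops(base_pflops_8bit: float, bits: int) -> float:
    """Scale an 8-bit peak-throughput figure to another precision."""
    return base_pflops_8bit * (8 / bits)

reported_8bit = 5.0  # petaFLOPS at 8-bit precision, per the article
print(peak_pflops(reported_8bit, 4))  # 4-bit precision -> 10.0 petaFLOPS
```

Real chips rarely scale perfectly linearly across precisions, but the reported numbers sit exactly on this line.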
Microsoft claims this makes Maia 200 faster than Amazon's Trainium 3 and Google's TPU v7. The chip will run OpenAI's GPT-5.2 models and power Microsoft's Copilot services.
Nvidia's Stock Barely Moves
Nvidia stock dropped 0.64% after Microsoft's announcement. The small decline shows investors aren't panicking about competition yet.
Demand for Nvidia's chips continues to outstrip supply. The company still dominates the market for training large language models and complex AI systems.
Microsoft isn't trying to replace Nvidia entirely. The strategy focuses on building alternatives for specific workloads where custom silicon makes financial sense.
Cloud providers face rising power costs as AI usage grows. Purpose-built chips help control those expenses while maintaining profit margins.
Why Custom Chips Matter for Big Tech
Microsoft joins a growing list of tech giants building their own AI processors. Google has deployed Tensor Processing Units for years across its cloud infrastructure.
Amazon uses Trainium and Inferentia chips throughout AWS for similar purposes. These custom designs handle specific tasks more efficiently than general-purpose hardware.
The approach gives cloud providers more leverage when negotiating with chip suppliers. It also reduces dependence on a single vendor for critical infrastructure components.
Microsoft is releasing a Maia SDK alongside the chip. The toolkit helps developers optimize their models to run better on the new hardware.
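Microsoft has not published the Maia SDK's API, so any concrete call would be invented; instead, here is a generic, self-contained sketch of the kind of transform such inference toolkits apply when targeting low-precision hardware: symmetric round-to-nearest INT4 weight quantization. All names here are illustrative, none are from the Maia SDK:

```python
import numpy as np

# Illustrative only: symmetric INT4 quantization of a weight vector, the sort
# of model-optimization step an inference SDK performs for 4-bit hardware.
def quantize_int4(w: np.ndarray):
    """Map float weights onto integer codes in the signed 4-bit range."""
    scale = np.abs(w).max() / 7  # positive INT4 codes span [1, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from codes and scale."""
    return q.astype(np.float32) * scale

w = np.array([0.9, -0.35, 0.07, -0.7], dtype=np.float32)
q, s = quantize_int4(w)
print(q)                 # integer codes in [-8, 7]
print(dequantize(q, s))  # approximate reconstruction of w
```

Production toolkits layer calibration, per-channel scales, and accuracy recovery on top of this basic idea, but the core trade (4-bit storage and compute for a small reconstruction error) is what makes chips like Maia 200 fast at inference.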
Maia 200 is already processing live workloads in Microsoft's Iowa datacenter. The company plans to expand deployment to additional regions throughout 2026.
The chip handles inference work while Nvidia hardware still powers the training side. This dual approach lets Microsoft optimize costs without sacrificing capability for demanding AI tasks.
Custom silicon won't eliminate the need for Nvidia's products anytime soon. But it gives tech companies more options as they build out massive AI infrastructure investments.
Microsoft's move reflects a broader industry trend toward vertical integration in AI hardware. As computing demands grow, controlling more of the stack becomes increasingly valuable for major cloud providers.
The post Microsoft (MSFT) Stock: New AI Chip Targets Nvidia's Cloud Computing Empire appeared first on Blockonomi.
Disclaimer:
The views in this article only represent the author's personal views, and do not constitute investment advice on this platform. This platform does not guarantee the accuracy, completeness and timeliness of the information in the article, and will not be liable for any loss caused by the use of or reliance on the information in the article.