Microsoft plans to base its artificial intelligence infrastructure primarily on internally developed chips in the long run. The announcement was made Wednesday by the company’s Chief Technology Officer, Kevin Scott, during Italian Tech Week, according to CNBC, cited by News.ro.
This strategy would reduce the American giant’s reliance on suppliers such as Nvidia and AMD, which currently dominate the data center processor market.
Maia and Cobalt chips to power Microsoft’s future data centers
Although Microsoft currently relies largely on Nvidia and AMD GPUs, the company has already started developing and deploying its own hardware. In 2023, it launched the Azure Maia AI accelerator and the Cobalt processor, and it is now working on new generations of chips along with “microfluidic” cooling technologies designed to keep equipment from overheating.
Kevin Scott confirmed that the long-term goal is for Microsoft’s own chips to handle most of the work in its data centers. The strategy covers not only the semiconductors themselves but also the complete design of the systems, from networking to cooling, so that computing power can be tuned to the needs of AI applications.
Like its rivals Google and Amazon, Microsoft is developing custom chips to cut costs, tailor its infrastructure, and better respond to the rapidly growing demand for artificial intelligence.
The company has made significant investments to expand its capacity but acknowledges that the “computing power crisis” persists, as demand continues to exceed even the most ambitious forecasts made since the launch of ChatGPT.


