OpenAI taps Google Cloud TPUs in bid to diversify AI chip supply - The Information

Investing.com -- OpenAI has started using Google’s (NASDAQ:GOOGL) artificial intelligence chips to help power ChatGPT and related services, marking its first significant shift away from exclusive reliance on Nvidia (NASDAQ:NVDA) hardware, according to a report by The Information. The move is part of a broader strategy by the AI company to reduce its dependence on Microsoft (NASDAQ:MSFT)-managed infrastructure.
Through Google Cloud, OpenAI is renting Google’s tensor processing units (TPUs) with the aim of cutting the costs associated with inference computing, the execution of models after training is completed. The decision could offer Google’s TPUs a higher profile as a cost-effective alternative to Nvidia’s widely used graphics processing units (GPUs), which dominate the AI sector.
Previously, OpenAI sourced Nvidia chips primarily via partnerships with Microsoft and Oracle (NYSE:ORCL) to train and deploy its models. While Google is providing some TPU capacity, it is reportedly not offering its most powerful versions to OpenAI, according to sources cited by The Information.
That limitation suggests Google’s most advanced TPUs remain reserved for internal use, including work on its own large language models under the Gemini project. For OpenAI, access to earlier versions of the TPUs still represents a step toward infrastructure diversification amid growing industry demand.
It’s still unclear whether OpenAI will use Google chips for model training or limit them to inference workloads. As competition increases and resource constraints deepen, a hybrid-use infrastructure could provide new flexibility for scaling.
The arrangement highlights the evolving dynamics of the AI hardware landscape, where companies like Google are leveraging years of investment in both software and custom silicon. For OpenAI, the addition of Google as a chip supplier broadens the ecosystem around its technology stack and addresses growing concerns over availability and cost of compute resources.
