OpenAI is gearing up to release its first in-house artificial intelligence (AI) chip next year in partnership with US semiconductor giant Broadcom, according to reports. The move marks a significant step for the ChatGPT maker as it looks to reduce its reliance on Nvidia and strengthen its computing infrastructure.
Why OpenAI Is Building Its Own AI Chip
OpenAI’s generative AI models, including ChatGPT, require massive computing power for training and deployment. Until now, the company has largely depended on Nvidia’s GPUs alongside chips from AMD. However, as demand for AI infrastructure skyrockets, OpenAI has been exploring ways to diversify chip supply and cut costs.
Last year, reports surfaced that OpenAI was collaborating with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC) to design and fabricate its first custom AI silicon. The chip is expected to be used exclusively for OpenAI’s internal operations and not sold to external clients.
Broadcom’s Role and Market Outlook
Broadcom CEO Hock Tan said during the company’s earnings call that AI-related revenue is expected to rise sharply in fiscal 2026. The company recently secured over $10 billion in AI infrastructure orders from an unnamed new customer, which industry observers speculate could be OpenAI.
Tan also said that Broadcom is working closely with multiple companies to develop custom chips, signaling a broader industry trend of leading AI firms moving towards in-house hardware.
The Bigger Picture: AI Firms Designing Custom Chips
OpenAI’s strategy mirrors moves made by Google, Amazon, and Meta, all of which have already developed their own specialized AI processors. Custom silicon allows companies to optimize performance, manage costs, and reduce dependency on third-party chipmakers at a time when demand for GPUs is at an all-time high.
By investing in its own chips, OpenAI is positioning itself for greater control over its infrastructure, potentially boosting efficiency for future large-scale AI models.