How Would A Possible OpenAI Chip Impact The Semiconductor Industry?

OpenAI will release its first AI chip next year. The company has been one of Nvidia’s biggest customers, but demand for computing power has grown so fast that it is now developing silicon of its own.

Sam Altman, chief executive of OpenAI, said last month that the company will double its compute fleet over the next five months to serve GPT-5. This growth has strained supply chains and made it evident that relying on Nvidia alone would not be enough.

Reuters reported earlier this year that the processor design will be finalised in the coming months and built at Taiwan Semiconductor Manufacturing Company (TSMC), the world’s largest contract chipmaker. The chip will be produced on TSMC’s 3-nanometre process and is expected to use a systolic array architecture with high-bandwidth memory and networking, similar to Nvidia’s chips.

Who Are The Partners In The Project?

OpenAI is working closely with Broadcom. On a recent analyst call, Broadcom chief executive Hock Tan said that a mystery client had committed $10bn in orders. Sources later confirmed that this client was OpenAI, though neither company commented publicly.

Broadcom began early work with OpenAI last year, and the scale of the partnership has since grown. Tan told analysts that the new client had lifted the company’s growth outlook, saying chip shipments would begin strongly from next year. Shares in Broadcom have climbed more than 30% this year and rose 9% in pre-market trading after news of the deal.

The chip design initiative is led inside OpenAI by Richard Ho, who previously directed Google’s custom chip work. His team has doubled in size during the Broadcom partnership but remains smaller than the equivalent teams at Amazon and Google, which have been designing their own processors for longer.

How Much Will The Project Cost?

Developing a chip is expensive. Reuters said that one version of a chip design could cost $500m, and that the bill could double once software and supporting systems are added. To run a programme on the scale of Google or Amazon, OpenAI would need hundreds more engineers.

The costs are high, but so is the potential reward. An in-house chip would cut OpenAI’s dependence on outside suppliers and let it run its models without the supply delays that have hit the AI market.

What Does This Mean For Nvidia?

Nvidia remains the market leader in AI hardware. Its processors are in heavy demand from technology companies, even though its growth has slowed compared with the early days of the AI boom.

OpenAI was one of Nvidia’s earliest customers and continues to use large amounts of its hardware. But building its own chip brings OpenAI into line with Google, Amazon and Meta, all of which have designed in-house processors to train and run their AI systems.

What Will This Mean For The Semiconductor Industry?

Three experts have shared what they think an OpenAI chip would mean for the industry at large.

Jonathan Garini, CEO and Enterprise AI strategist at fifthelement, said:

“If OpenAI were to release a chip of its own, it would definitely have a ripple effect on the semiconductor industry, affecting the competitive landscape, pricing and supply chain structure. The most direct effect would likely be competitive pressure for incumbents such as NVIDIA, AMD and even cloud vendors that create custom silicon.

“A specialised chip built solely for AI would also help drive down prices on some types of compute jobs, but more importantly, it would establish new performance standards and accelerate competitors’ innovation cycles. It could be a bit like Apple’s move into silicon with the M1 chip: while we already had a sense of what power efficiency and performance per watt meant, the M1 helped reset those expectations, and I think similarly, an OpenAI chip could be that recalibration for AI infrastructure.

“The broader implications go beyond the technological. A move like this could bring a new level of regulatory scrutiny, especially if a single company emerged as the major force controlling both the software and hardware stack of AI. Export controls and IP laws would apply as well, since advanced chips are so politically sensitive these days.

“I’d say that the strategic takeaway for businesses is simple: diversification will be the name of the game. It is important that they do not get locked into one vendor’s hardware ecosystem, but instead create hybrid strategies that blend proprietary chips and general-purpose GPUs. This will guarantee flexibility and stability in a dynamic environment.”

Colin Cooper, Co-Founder and Co-CEO at Illuminate XR, said:

“OpenAI building its own chip isn’t just a power move, it’s a shot across Nvidia’s bow.

“This is about cutting dependency, taking control of compute, and rewriting the rules of who holds the keys to AI scale.

“What used to be a cloud provider’s game is now shifting into AI labs themselves. It fragments the hardware market, spikes pricing pressure, and will force Nvidia to rethink its stranglehold on supply.

“Expect a domino effect, with more labs following, and we’ll see a sharp rise in custom silicon, tighter vertical stacks, and increased scrutiny from regulators who suddenly realise these labs aren’t just training models; they’re shaping infrastructure.”

Hommer Zhao, Director at Wiringo, said:

“If OpenAI develops its own chip, it will certainly ruffle the semiconductor market big time. Nvidia dominates the market at the moment and dictates a lot of the prices. A new OpenAI chip would bring genuine competition, which would cut prices and enable more companies to take advantage of sophisticated AI tools.

“The problem is production. These chips require the world’s highest-tech factories, which are already highly utilised. That may translate into delays or shortages unless additional capacity is brought online.”