Musk Confirms xAI Used OpenAI To Train Its Models

Elon Musk spent Thursday in a California courtroom defending his lawsuit against OpenAI. During cross-examination, he also made a striking admission about his own artificial intelligence company, xAI.

According to The New York Times, an OpenAI lawyer asked Musk if xAI had ever “distilled” technology from OpenAI. Musk replied, “Generally A.I. companies distill other A.I. companies.” He also said xAI had “partly” used OpenAI technology to train its own AI models.

Reuters reported that Musk later told the court, “It is standard practice to use other AIs to validate your AI.”

The comments landed awkwardly because Musk is suing OpenAI over what he says was a betrayal of its nonprofit mission. Reuters reported that Musk claims OpenAI founders secured his $38 million in donations and personal help with promises that the company would prioritise safe AI development before becoming a profit driven business.

Musk told the court, “I don’t think you should turn a nonprofit into a for-profit. There’s nothing wrong with having a for-profit organization, you just can’t steal a charity.”

OpenAI has accused Musk of trying to strengthen xAI while attacking a company that became more successful after he left its board in 2018. Reuters reported that OpenAI says Musk is motivated by a desire to control the company.

The courtroom exchange added fuel to an already heated legal fight between Musk and OpenAI chief executive Sam Altman, who watched much of the testimony from inside the courtroom.

What Exactly Is AI Distillation?

Distillation sounds technical, but the idea is quite simple: a larger AI model teaches a smaller one through its outputs and responses.

Forbes explained that the process allows smaller models to operate with less computing power and lower development costs. Instead of building a system entirely from scratch, companies can use an advanced model to help train a cheaper and faster alternative.
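In rough terms, the "teaching" step works by having the smaller model imitate the larger model's output probabilities rather than raw right-or-wrong labels. The sketch below is a minimal illustration of that idea, not any company's actual method; the function names, logits and temperature values are made up for the example:

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw model scores (logits) into probabilities.
    A higher temperature softens the distribution, exposing more of
    the teacher's 'dark knowledge' about near-miss answers."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student
    distributions -- the quantity a student model is trained to
    minimise in a basic distillation setup."""
    p = softmax(teacher_logits, temperature)  # teacher's soft labels
    q = softmax(student_logits, temperature)  # student's current guess
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for one 3-class prediction.
teacher = [4.0, 1.0, 0.5]
student = [3.0, 1.5, 0.8]

print(softmax(teacher, temperature=2.0))   # teacher's soft labels
print(distillation_loss(teacher, student)) # loss the student would minimise
```

The loss is zero only when the student reproduces the teacher's distribution exactly, which is why repeated querying of a rival's model can, in effect, transfer much of its behaviour.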

Impact Newswire reported that Musk admitted xAI had “partly” relied on OpenAI systems while building Grok, the chatbot developed through xAI.

The financial gap between building original models and using distillation has become a major issue across the AI sector. Forbes reported that training advanced models such as ChatGPT and Google Gemini can cost more than $100 million.

The report also said Chinese startup DeepSeek claimed it spent only $294,000 training its R1 model, which quickly gained attention after matching the performance of models from far larger AI companies.

That price difference explains why leading AI companies have become increasingly protective of their systems and training methods.

Why Has Distillation Become So Controversial?

The dispute centres on cost, ownership and competition.

OpenAI’s terms of service prohibit using outputs from its systems to train competing AI models. Forbes reported that OpenAI banned accounts earlier this year over suspicions that DeepSeek had used OpenAI technology to train its own systems.

Anthropic also accused DeepSeek, Moonshot AI and MiniMax of running what it described as “industrial-scale campaigns” using roughly 24,000 fraudulent accounts and 16 million exchanges with its Claude model.

Impact Newswire described distillation as a legal and ethical grey area. The practice may not always break the law, but it can violate platform rules if companies use another firm’s AI system without permission.

The concern is especially intense because AI companies spend enormous sums building computing infrastructure, hiring researchers and collecting training data. Distillation allows rivals to catch up far more cheaply.

Anthropic also raised security concerns earlier this year. Forbes reported that the company warned distilled models can lack safeguards designed to prevent cyber attacks or biological weapon research.

Musk himself mocked Anthropic over its own legal troubles after those accusations surfaced. Forbes reported that he referred to Anthropic’s $1.5 billion settlement tied to claims the company trained AI systems using pirated books.

What Could This Mean For OpenAI And xAI?

The trial now carries consequences far beyond Musk’s original lawsuit.

Reuters reported that OpenAI has grown from a nonprofit research lab founded in Greg Brockman’s apartment into a company valued at more than $850 billion. The company is also preparing for a possible public listing.

Musk wants OpenAI returned to nonprofit control and is seeking $150 billion in damages. Reuters reported that he also wants Altman and Brockman removed from leadership positions.

During testimony, Musk said, “The for-profit is overwhelmingly where the value is. The for-profit has taken the super majority of the value of the nonprofit.”

OpenAI pushed back hard during the trial. Reuters reported that the company argued Musk ignored safety concerns while involved with OpenAI and now operates xAI in the same market.

Judge Yvonne Gonzalez Rogers even questioned the contradiction directly. Reuters reported that she told Musk’s legal team, “I think it’s ironic that your client, despite these risks, is creating a company that’s in the exact same space.”

Impact Newswire reported that Musk also described xAI as a relatively small player with only a few hundred employees, even as it tries to compete against OpenAI, Google and Anthropic.

The case could influence future legal rules around AI training and ownership. Courts may eventually decide how far companies can go when using rival systems to build competing products.