OpenAI Blocked by Anthropic Over Alleged Misuse of AI Coding Technology

Amidst the excitement surrounding the impending release of GPT-5 (or so we’ve been told), OpenAI has received a serious slap on the wrist from Anthropic, one of its biggest competitors in the industry.

Indeed, towards the end of last week, executives at Anthropic made a very public statement saying that not only would OpenAI be prevented from using Anthropic’s coding tools and AI models, but that this was happening as a direct result of a violation of its terms of service.

The commercial terms of service in question are pretty simple. They state that users are prohibited from using the software and its tools to “build a competing product or service, including to train competing AI models” or “reverse engineer or duplicate” the services. According to Anthropic, OpenAI employees have been doing exactly this. Christopher Nulty, Anthropic’s official spokesperson, told Wired that “OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5.”

There are a few things to unpack here, including, first and foremost, whether or not the accusation has merit; the timing of the GPT-5 launch; the issue of giving competitors access to APIs (is this just smoke and mirrors?); and, of course, what OpenAI has to say about it all.

 

Two Sides To the Story: What Does OpenAI Have To Say?

 

Anthropic’s move to ban OpenAI from using its AI coding tools – and to do so in a very public manner – is pretty bold, to say the least. The accusation is harsh, and OpenAI’s response hasn’t exactly been to deny it outright; instead, they’ve shifted the focus of the narrative. Whether that was an intentional PR spin or a genuine concern (or both), it raises very important questions about the future of collaboration between companies like OpenAI and Anthropic.

Indeed, OpenAI’s official statement centred on the practice of providing competitors with access to APIs. Hannah Wong, Chief Communications Officer at OpenAI, told media that “it’s industry standard to evaluate other AI systems”. Wong implied that while this access isn’t enforced by regulation, it has been a matter of professional etiquette, encouraging an atmosphere of trust and openness within a competitive landscape.

It’s a great point; there’s no doubt about it. Up until now, this shared access has been a cornerstone of the industry, and these companies have referred to it as both a way to keep everyone in the know about the direction of the technology and a way to protect consumers. As Wong put it, providing access to APIs has allowed industry players to “benchmark progress and improve safety”. The implication, of course, is that with this change, both of those things will be at risk.
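To make the benchmarking practice concrete, here’s a minimal sketch of what cross-vendor evaluation can look like: the same prompts sent to two labs’ public APIs, with the outputs collected side by side for scoring. It uses the publicly documented `openai` and `anthropic` Python SDKs, but the model names, prompts, and truncation are illustrative assumptions rather than anything either company has described.

```python
# A minimal cross-vendor benchmarking sketch using public APIs.
# Model names and prompts are illustrative assumptions; the clients
# are the documented Python SDKs (`pip install openai anthropic`).
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPTS = [
    "Write a Python function that reverses a singly linked list.",
    "Explain the difference between TCP and UDP in two sentences.",
]

def ask_openai(prompt: str) -> str:
    resp = openai_client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    resp = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name, for illustration only
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

for prompt in PROMPTS:
    # A real evaluation would score these outputs (pass rates, rubric
    # grading, safety classifiers); here we simply collect them.
    print(prompt)
    print("openai:   ", ask_openai(prompt)[:200])
    print("anthropic:", ask_anthropic(prompt)[:200])
```

The line Anthropic’s terms draw, as quoted earlier, is between this kind of evaluation and using a competitor’s outputs to “build a competing product or service” or “train competing AI models”.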

But before we go there, it feels like many people have glossed over the initial issue – does Anthropic’s accusation actually have merit?

 

Did OpenAI Employees Actually Use Claude Code to Work On GPT-5?

 

Well, at this point, nobody from OpenAI has officially denied it. Of course, that doesn’t mean that they didn’t do it, nor does it mean that they did. Also, even if they did, and even if their actions did breach Anthropic’s terms of service, employees weren’t necessarily instructed to do so by OpenAI higher-ups. It’s possible that employees who were working on the new GPT-5 model were using Claude as a tool in their work and didn’t intend for it to be used to directly contribute to OpenAI’s newest model.

Now, that doesn’t make it okay, and it’s not a good enough excuse. They should know better, and the separation between the GPT-5 project (or any OpenAI project) and anything else they’re doing should be intentionally clear and defined. Still, although Anthropic didn’t go so far as to come out and say, “OpenAI used Claude to build GPT-5!”, it was implied. And along with that comes the sentiment that if GPT-5 is great and successful (or even better than Claude), Anthropic can say, “but Claude was used to build the model”.

So, is Anthropic merely using this as an opportunity to get in on the GPT-5 hype and potentially dampen its glow a little? Or even to claim some of its ingenuity as its own by proxy? Who knows – it may be a bit of both; it may be a very straightforward reaction to a breach of conduct; or, perhaps, it’s mostly a reaction to the breach, with the hype factor as a bonus. Wong added one comment that, in my opinion, pours extra fuel on the fire, saying that despite all this happening, “it’s disappointing considering our APIs remain available to them”. That certainly adds a little more spice to the conversation, but it also brings the whole question of sharing APIs to the forefront.

 

Sharing APIs In AI

 

Many experts and industry professionals view this practice as an essential part of keeping things clean and the game “fair”. In an industry that holds a lot of unknowns, there’s good reason to try to keep a handle on things rather than simply allow competitors to run wild to the detriment of the industry and, in the most serious hypothetical, the world at large.

Thus, in the past, major players have provided each other with access to their models and APIs for the sake of both safety and keeping tabs on the progress of this very new type of technology. Without this sharing, each company’s frame of reference would be limited to its own innovation, and while that may be normal in other industries, AI is operating in a completely new ecosystem, so allowing this access is generally seen as mutually beneficial.

Well, up until now. The most cynical observers of this Anthropic-OpenAI debacle claim that Anthropic is just using this situation as an excuse to revoke OpenAI’s access to its APIs and AI models.

Why would they do that? Well, if, hypothetically, that was their primary aim, there may be a few reasons:

 

  • Competitive Advantage: Perhaps Anthropic feels as if it’s giving away too much insight into its technical process and, consequently, losing competitive advantage. Thus, cutting off access protects proprietary advancement and encourages product differentiation.

 

  • Security and Intellectual Property: A major reason would be if Anthropic suspects that their APIs are being used for reverse-engineering or model-extraction attacks (a sketch of what that looks like follows this list). If Anthropic suspects OpenAI is doing this – intentionally or otherwise – it would be a strong motive to restrict access to protect their IP. But it’s a bold claim!

 

  • Strategic Signalling: In an increasingly competitive and high-stakes AI arms race, cutting off access could also be a political or strategic move. It signals independence, reinforces product boundaries and asserts control over who gets to use or study Anthropic’s tools. It may also set the tone for future partnerships (or lack thereof) in the industry.

 

  • Misalignment of Ethics: Anthropic has positioned itself as safety-focused and somewhat cautious in its development philosophy. If the company perceives OpenAI’s use of its APIs as inconsistent with its values – for example, using Claude models in high-risk applications or ignoring safety guardrails – Anthropic may view revoking access as a responsible decision and the “right thing to do”.

 

  • No More Mutual Benefit: In the most basic sense, maybe Anthropic has simply decided that this relationship is no longer mutually beneficial. In the early days of AI development, open access and shared APIs allowed organisations to co-evolve and learn from each other. But as the industry matures and players become more secretive, the benefits of that openness shrink. If OpenAI doesn’t provide similar access to its own APIs, Anthropic might see the relationship as unbalanced and opt out.
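
On that reverse-engineering point: for readers wondering what a “model extraction” attack actually looks like, the shape of it is surprisingly mundane – systematically querying a model’s API and logging the prompt/response pairs as training data for another model. The sketch below is purely hypothetical and deliberately simplified; it illustrates the behaviour the terms of service forbid, not anything anyone is confirmed to have done, and the model name and prompts are assumptions.

```python
# Purely hypothetical sketch of a distillation-style "model extraction"
# pipeline -- the behaviour Anthropic's terms prohibit. Nothing here
# implies OpenAI actually did this; model name and prompts are assumptions.
import json
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

seed_prompts = [
    "Refactor this function to be tail-recursive: ...",
    "Find the bug in this SQL query: ...",
]

with open("distillation_data.jsonl", "w") as f:
    for prompt in seed_prompts:
        resp = client.messages.create(
            model="claude-3-5-sonnet-latest",  # assumed model name
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        # Each saved pair could become a supervised fine-tuning example
        # for a competing model -- i.e. "train competing AI models", the
        # exact usage the commercial terms quoted earlier prohibit.
        f.write(json.dumps({"prompt": prompt,
                            "completion": resp.content[0].text}) + "\n")
```

Scaled up to millions of queries, that JSONL file starts to look like a training set, which is why API providers treat this pattern as an IP threat rather than ordinary usage.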

 

Of course, this is all speculation – we have no specific reason to believe that any of these things are necessarily the case. And the other thing to consider is that it’s only OpenAI that Anthropic has imposed this ban on. So, will it stop here, or are other competitors next in line to be banished from the world of free API access, courtesy of Anthropic?