The Clash Of AI Startups: How Are Competitors Backed By The Same Companies?

OpenAI has begun working with Amazon Web Services to make its new open-weight models available to millions of AWS customers through Amazon Bedrock and Amazon SageMaker AI. The move comes just days after Anthropic, a company heavily backed by Amazon, cut OpenAI’s access to its Claude API.

The break came after Anthropic accused OpenAI of violating its terms of service. Anthropic spokesperson Christopher Nulty told Wired that OpenAI staff had been using Claude’s coding tools ahead of the expected GPT-5 launch, in breach of rules that forbid using the service to build competing products. OpenAI said it was disappointed but called such evaluations standard practice in the industry.

Now, despite that bust-up, AWS has moved to work directly with OpenAI. Customers can access the new gpt-oss-120b and gpt-oss-20b models on AWS. These are designed for coding, scientific analysis and other advanced tasks, and AWS says they can be up to 18 times more price-efficient than rival models such as DeepSeek-R1.

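To make that concrete, here is a minimal sketch of how a customer might invoke one of the new models through Amazon Bedrock’s standard Converse API using boto3. The model identifier, region and inference settings shown are illustrative assumptions rather than confirmed values; the exact model ID should be checked in the Bedrock model catalogue.

    # Minimal sketch: calling an OpenAI open-weight model on Amazon Bedrock via boto3.
    # The model ID and region are assumptions for illustration only; confirm the real
    # identifier in the Bedrock model catalogue before running.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")  # assumed region

    response = bedrock.converse(
        modelId="openai.gpt-oss-120b-1:0",  # hypothetical ID for gpt-oss-120b
        messages=[
            {
                "role": "user",
                "content": [{"text": "Summarise the trade-offs of open-weight models."}],
            }
        ],
        inferenceConfig={"maxTokens": 512, "temperature": 0.3},
    )

    # The Converse API returns the assistant message as a list of content blocks.
    print(response["output"]["message"]["content"][0]["text"])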

How Does This Affect Anthropic’s Ties To AWS?

Anthropic has been closely linked to AWS since late 2023, when Amazon announced a $4 billion investment in the company and AWS became its primary cloud provider. In late 2024, the partnership deepened, with Amazon committing a further $4 billion, AWS becoming Anthropic’s main training partner, and its Trainium and Inferentia chips being used for Claude’s development.

This means AWS is now working with two rival AI developers at the same time. On one hand, Anthropic gets exclusive benefits on AWS, such as early access to fine-tuning Claude models with customer data. On the other, OpenAI now has its models running natively on AWS with access to the same security and scalability features.

For AWS, the advantage is not hard to see: the deal increases the range of models available to its customers and keeps both of the most talked-about AI companies in its ecosystem. For Anthropic, the situation may be more complicated, as it could face stronger competition for enterprise clients directly within the AWS platform.

What Could This Mean For Competition In AI?

Tech companies have a long history of restricting competitor access to their platforms. The Claude API ban follows a familiar pattern, from Facebook’s move against Vine to Salesforce limiting competitor access to Slack data. The difference now is that rivals can still end up working with the same corporate partners in other ways.

In this case, both OpenAI and Anthropic are tied to AWS in different capacities. That raises questions about how corporate loyalty and exclusivity work in an industry where partnerships can change quickly. Customers might benefit from having multiple model providers under the same cloud service, but it could also intensify rivalry between those providers.

The presence of competing startups on the same infrastructure could lead to more direct benchmarking, faster product cycles and more aggressive pricing. For AWS, it strengthens its position as a hub for AI development. For OpenAI and Anthropic, it sets the stage for head-to-head competition on the same platform that both depend on.

Lauryn Warnick, Founder & CEO at Villain Branding, said: “When competitors are backed by the same big players, the tech might be different but the money smells the same. That is when brand becomes the only real moat. If AWS is powering both you and your rival, your edge is not in the infrastructure. It’s in the precision of your narrative, the sharpness of your positioning, and the conviction you project into the market.

“Shared investors and overlapping partnerships can make everything feel beige. The companies that break through are the ones who use brand to turn high stakes moments like funding rounds, partnerships, and launches into clear market dominance. In a sea of sameness, which GenAI is driving us further into, the clearest and boldest voice wins.”

What Does Expert Michelle Johnson Say?

We gathered insights from Michelle Johnson, an MIT-certified AI expert, MBA (Distinction) graduate and PMP-certified project leader. Michelle has over 20 years’ experience driving digital transformation and operational excellence in fintech and other high-growth sectors.

As founder of Ideal Intelligence, she helps SMEs and scaling businesses cut through AI hype with practical training and low-cost automation, making advanced technology accessible and actionable for female founders and entrepreneurs.

Here’s what she says…

“Once upon a time, Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever and others got together and created something called OpenAI. Its mission was to ensure that the benefits of artificial general intelligence extended to all humanity. They set up OpenAI as a non-profit. It was collaborative, it had an open research ethos, and it was funded with over a billion dollars in pledged donations, including from Musk.

“At that time, OpenAI published research openly and shared models freely. It operated like the non-profit research organisation it set out to be. A while later, there were conflicts over direction and leadership, and Musk departed, possibly influenced by Tesla’s own AI ambitions.

“A year later, OpenAI moved further from its initial ethos, striking a $1 billion partnership with Microsoft. It adopted a capped-profit model, closed its research loop, and stopped sharing outputs as openly. More recently, it has released some smaller open-weight models that can run on a laptop — notable, but a far cry from the full openness of its early days.

“Not long after the Microsoft deal, Dario Amodei, then a VP at OpenAI, left to found Anthropic, maker of Claude. His departure was driven by concerns over safety priorities, governance, and the direction of travel. The perception was that commercial pressures from major partnerships were pulling OpenAI away from its original mission.

“Anthropic was structured as a public benefit corporation with a goal of building steerable, interpretable, reliable AI.

“The alignments that emerged are now well known. OpenAI and Microsoft are deeply tied, with Microsoft holding exclusive commercial rights to OpenAI models and investing around $13 billion. OpenAI is focused on productisation — ChatGPT in all its forms, API access for business, and enterprise solutions. It is the big consumer-facing brand in AI; everyone knows ChatGPT.

“Anthropic, meanwhile, has teamed up with major cloud providers to deliver and train its models. Whether it’s Claude or GPT-4, all large language models require vast amounts of compute. These systems are extremely resource-hungry, not just in terms of GPU time, but in power, cooling, and physical infrastructure. You can’t run this from a modest in-house lab. The scale and cost mean partnerships with hyperscale data centre operators are almost inevitable.

“It’s worth remembering that the surrounding costs of compute are as significant as the chips themselves. GPUs remain in short supply. Power and cooling costs are rising. Data centres require vast footprints of land. Training a state-of-the-art model such as GPT-4 costs tens of millions of dollars; running ChatGPT costs millions per day.

“OpenAI and Anthropic are no longer lightweight software companies. They are deeply dependent on physical infrastructure. For OpenAI, with a capped-profit model, relying solely on one vendor is risky — particularly when that vendor is competing for the same chips. This has led to diversification: deals with Google Cloud, Oracle, and AI-specialist provider CoreWeave; infrastructure hiring; and participation in major projects such as Stargate, a planned large-scale AI data centre network. The aim is to bring more of the supply chain in-house.

“Relations between the two companies have been strained. Anthropic recently cut OpenAI’s access to Claude, alleging terms-of-service breaches. OpenAI disputed this, saying it was standard benchmarking. No one has alleged source code theft, but trust has clearly frayed.

“Both claim to be committed to safety and trust, but their interpretations diverge. OpenAI feels increasingly product-driven and fast-moving, with a tendency to pass more responsibility for model governance onto end users, particularly with the release of open-weight models. Anthropic emphasises constitutional AI and interpretability, and is more transparent in publishing its alignment work — though it still keeps full control of model weights.

“OpenAI today looks and acts like a commercial SaaS giant. It is releasing products at speed, selling into the enterprise, and positioning for long-term market dominance. Anthropic presents as an academically-minded, safety-oriented player, though its deep commercial partnerships mean it is not immune to market forces. It is a clash of cultures: rapid commercial scale versus cautious, principle-led positioning.

“Regulation is far behind. There is little effective oversight at either the model or infrastructure level, yet it is the combination of those two that will determine market structure and access. The AI arms race continues to accelerate, driving up demand for compute and further consolidating control of the infrastructure that underpins it. This will shape who has access to the most capable AI, at what price, and under whose terms.

“For now, the competitive landscape is being defined less by algorithms and more by who controls the infrastructure. That may prove to be the deciding factor in which players lead the next era of AI.”