The Clash Of AI Startups: How Are Competitors Backed By The Same Companies?

OpenAI has begun working with Amazon Web Services to make its new open-weight models available to millions of AWS customers through Amazon Bedrock and Amazon SageMaker AI. This comes just days after Anthropic, a company backed by AWS, cut OpenAI’s access to its Claude API.

The break came after Anthropic accused OpenAI of violating its terms of service. Anthropic spokesperson Christopher Nulty told news platform Wired that OpenAI staff had been using Claude’s coding tools ahead of the expected GPT-5 launch, which went against rules forbidding use of the service to build competing products. OpenAI said it was disappointed but called such evaluations standard practice in the industry.

Now, despite that bust-up, AWS has moved to work directly with OpenAI. Customers can access the new gpt-oss-120b and gpt-oss-20b models on AWS. These are designed for coding, scientific analysis and other advanced tasks, and AWS says they can be up to 18 times more price-efficient than rival models such as DeepSeek-R1.
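
For readers who want to try the new models, here is a minimal sketch of calling one of them through Amazon Bedrock’s Converse API using the boto3 SDK. The model identifier and region shown are assumptions rather than confirmed values, so check the Bedrock console for what is actually available in your account.

```python
# Minimal sketch: calling an OpenAI open-weight model via Amazon Bedrock.
# The model ID and region are assumptions; confirm them in the Bedrock console
# and make sure model access has been enabled for your AWS account.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")  # assumed region

response = client.converse(
    modelId="openai.gpt-oss-120b-1:0",  # assumed identifier for gpt-oss-120b
    messages=[
        {"role": "user", "content": [{"text": "Explain what an open-weight model is."}]}
    ],
    inferenceConfig={"maxTokens": 256},
)

print(response["output"]["message"]["content"][0]["text"])
```

The same models can also be deployed through Amazon SageMaker AI for teams that want more control over the serving environment.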

How Does This Affect Anthropic’s Ties To AWS?

Anthropic has been closely linked to AWS since 2024, when Amazon invested $4 billion in the company and became its primary cloud provider. In late 2024, the partnership deepened, with AWS becoming Anthropic’s main training partner and providing its Trainium and Inferentia chips for Claude’s development.

This means AWS is now working with two rival AI developers at the same time. On one hand, Anthropic gets exclusive benefits on AWS, such as early access to fine-tuning with customer data on Claude models. On the other hand, OpenAI now has its models running natively on AWS with access to the same security and scalability features.

For AWS, the advantage is not hard to see: it increases the range of models available to its customers and keeps both of the most talked-about AI companies in its ecosystem. For Anthropic, the situation may be more complicated, as it could face stronger competition for enterprise clients directly within the AWS platform.

What Could This Mean For Competition In AI?

Tech companies have a long history of restricting competitor access to their platforms. The Claude API ban follows a similar pattern to Facebook’s move against Vine or Salesforce limiting competitor access to Slack data. The difference now is that rivals can still end up working with the same corporate partners in other ways.

In this case, both OpenAI and Anthropic are tied to AWS in different capacities. That raises questions about how corporate loyalty and exclusivity work in an industry where partnerships can change quickly. Customers might benefit from having multiple model providers under the same cloud service, but it could also intensify rivalry between those providers.

The presence of competing startups on the same infrastructure could lead to more direct benchmarking, faster product cycles and more aggressive pricing. For AWS, it strengthens its position as a hub for AI development. For OpenAI and Anthropic, it sets the stage for head-to-head competition on the same platform that both depend on.

Lauryn Warnick, Founder & CEO at Villain Branding, said, “When competitors are backed by the same big players, the tech might be different but the money smells the same. That is when brand becomes the only real moat. If AWS is powering both you and your rival, your edge is not in the infrastructure. It’s in the precision of your narrative, the sharpness of your positioning, and the conviction you project into the market.

“Shared investors and overlapping partnerships can make everything feel beige. The companies that break through are the ones who use brand to turn high stakes moments like funding rounds, partnerships, and launches into clear market dominance. In a sea of sameness, which GenAI is driving us further into, the clearest and boldest voice wins.”

What Does Expert Michelle Johnson Say?

We gathered insights from Michelle Johnson, an MIT-certified AI expert, MBA (Distinction) graduate and PMP-certified project leader. Michelle has over 20 years’ experience driving digital transformation and operational excellence in fintech and high-growth sectors.

As founder of Ideal Intelligence, she helps SMEs and scaling businesses cut through AI hype with practical training and low-cost automation, making advanced technology accessible and actionable for female founders and entrepreneurs.

Here’s what she says…

“Once upon a time, Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever and others got together and created something called OpenAI. Its mission was to ensure that the benefits of artificial general intelligence extended to all humanity. They set up OpenAI as a non-profit. It was collaborative, it had an open research ethos, and they funded it with over a billion dollars in donations; Musk threw in quite a lot himself.

“At that point, OpenAI was publishing things extremely openly, sharing things, and just kind of operating as the non-profit research organisation that it was. A while later, there were conflicts over direction and leadership that caused people like Musk to depart. Musk’s involvement with Tesla’s AI work may also have played a part in his early exit.

“A year later, OpenAI continued to kind of drift a bit away from its initial ethos and started a partnership with Microsoft worth $1 billion. Moving away from the non-profit model, they shifted to something called a capped-profit model and started to close down their research loop and quit sharing things quite as openly.

“A little while later, Dario Amodei, who used to work at OpenAI, decided to leave and found his own company, which is called Anthropic. Anthropic makes Claude. The cited reason for Amodei to leave OpenAI was safety concerns. He didn’t think OpenAI’s safety priorities, governance, or corporate direction really aligned with the initial ethos.

“And because of that partnership with Microsoft, not saying that Bill Gates is the Great Satan or anything, but commercial pressure, as it does, started to undermine OpenAI’s original kind of altruistic mission.

“So Anthropic was structured as a public benefit corporation and their goal was to build steerable, interpretable, reliable AI, which is interesting. Up until now, there were a couple of alignments: OpenAI and Microsoft. Microsoft provided funding, about $13 billion invested in the partnership, and in return has exclusive rights to OpenAI models. Off the back of that, OpenAI is now largely a closed-source shop.”

Michelle continued, “They’re not sharing their research, they’re not sharing their models openly, except for the news yesterday that they had some downloadable OSS models that could be run on a laptop, and they’re really looking at making AI into a product.

“So that’s ChatGPT in all its forms. They’re also looking at an API that launched around the same time, so businesses could use it without the chat interface. They’ve also got enterprise solutions going on. They are the big name in AI at the moment. Everybody says ChatGPT, nobody says Claude.

“Then we take a look at Anthropic, which has teamed up with Amazon and Google. Now Amazon provides cloud services. Microsoft does too. Amazon’s is called AWS (Amazon Web Services). Microsoft’s is Azure. Google’s is called Google Cloud. Any large language model or AI model needs to have a lot of what’s called compute. They need a lot of compute because artificial intelligence is incredibly greedy in terms of memory use in order to come up with its ideas.

“You’re not going to be able to have all those chips lying around in your research lab. So, it made sense at the time that they were developing these models to outsource that capability to someone like Microsoft or Amazon or Google, people who could grow and scale the data centres that provide the level of computing needed, with no real bricks-and-mortar cost required from OpenAI or Anthropic.

“So that’s kind of where we’re at, with the potted history of things. But let’s look at compute again. It’s not just the fact that GPU (graphics processing unit) time is expensive. You’ve heard of Nvidia and the GPU chips that they’re using and the boom in Nvidia’s shares. The problem is that everything that surrounds that is also expensive.

“They’re not making the chips fast enough. Power costs are rising. Cooling costs are rising because chips that think create a lot of heat. You’ve got to buy the land your data centres are on, and data centres are sprawling all over the place.

“Training a large language model like OpenAI’s GPT-4 costs tens of millions of dollars. Even just keeping ChatGPT running, making it available to the general public and the paid users, probably costs millions daily.

“So, both OpenAI and Anthropic have become companies that are far from lightweight outfits offering a chatbot. They’re tied to really big physical constraints.

“What that means, especially for something like OpenAI with a capped-profit model, is that they need to protect themselves. They can’t afford to have the entire organisation at the mercy of a single vendor like Microsoft, especially when Microsoft is also competing to buy chips and build out their own data centres.

“So OpenAI has diversified. They’ve got deals with Google Cloud. They’ve got deals with Oracle. And they’ve got another cloud computing provider called CoreWeave. That one is actually optimised towards AI, which is interesting. OpenAI is also hiring a lot of people who are specialising in infrastructure. And they’re starting to essentially verticalise the supply chain so it’s all under their own roof. Think about them as the LVMH or Tyson Chicken or ALDI of the artificial intelligence world.

“They’ve also got involved with Stargate, which is a massive, massive, massive AI data centre build. They’re going to need secure chips and compute and things like that to deliver on their product.

“So, tension. Anthropic cut off OpenAI’s access to Claude because they said OpenAI was violating their terms of service. OpenAI said that was absolute bollocks: they were using industry standard practices to benchmark things. Nobody is claiming any kind of source code was taken, but there’s clear broken-down trust there.

“What’s super interesting is that both of these companies claim to be obsessed with safety and trust, but their definitions are really, really diverging. I find ChatGPT and OpenAI reasonably productised, and they seem lately to be shifting a lot of liability, which you’ve seen in that LinkedIn comment I made today about their licensing of open-weight models that can run on a small laptop.

“Claude seems to be trying to take the high road. They’re talking about constitutional AI and high interpretability. And I certainly find them reasonably transparent with the studies and experiments that they run. At the same time, I note that I can’t tune Claude, and I don’t have an opportunity to take a look at the weights, which is interesting.

“So where am I going with this brain dump? OpenAI is acting more and more like a for-profit commercial company, and it feels like they’re going to take the cap off that capped-profit status. They’re releasing things quickly, they’re doing big sales, they’re positioning themselves as an enterprise player. I wouldn’t be surprised if there’s an IPO coming. I wouldn’t be surprised if Sam Altman is the world’s first trillionaire. If there is an IPO, get some stocks.

“Anthropic has academic-ethical branding, which makes it interesting that they’re aligned with Amazon and AWS. It just seems like a cultural clash to me. So, there’s kind of a battle royale here. Culture wars. Commercial versus stability and alignment. I find that very, very interesting.

“What’s happening on the other hand is that regulators are nowhere in it. They are so behind on AI, it’s not even funny. So, what’s shaping the future is this constant AI arms race where these guys are trying to make their models the best, and that’s fuelling demand for more and more and more compute, which means more chips and more data centres and everything along those lines.

“All of a sudden, these things are consolidating. And that’s going to have a real impact on access to AI, how it’s priced, and how things work together. Are we looking at VHS and Betamax? Quite possibly. I don’t think one’s going to swallow the other, but the moat for these products from a product development perspective is how capable the infrastructure they run on is. And it does certainly make sense, if I were in Sam Altman’s shoes or Dario Amodei’s, to really double down on making sure the infrastructure was in my control.

“Now just to talk about the release of open-weight models. DeepSeek released some open-weight models last year and that woke up the industry a bit. Now OpenAI and others have released some too. That suggests maybe not every use case should be locked to a platform. But at the same time, it doesn’t really seem clear to me how open those models are.

“It does feel like OpenAI is just shifting the liability for making those models behave appropriately to the people who actually download them.”
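
To give a sense of what “downloadable models that could be run on a laptop” looks like in practice, here is a rough sketch of loading an open-weight model with the Hugging Face transformers library. The repository name is an assumption, and even the smaller gpt-oss model needs far more memory than most laptops offer, so treat it as an illustration rather than a recipe.

```python
# Rough sketch of running a downloaded open-weight model locally with the
# Hugging Face transformers library. "openai/gpt-oss-20b" is an assumed
# repository name; check the actual listing, licence and hardware requirements
# before trying this, as even the smaller model needs a large amount of memory.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed Hugging Face identifier
    device_map="auto",           # place weights on whatever hardware is available
)

output = generator(
    "In one paragraph, explain the difference between open-weight and open-source models.",
    max_new_tokens=120,
)

print(output[0]["generated_text"])
```

Once the weights sit on a user’s own machine, responsibility for how the model behaves rests largely with whoever runs it, which is the liability shift Michelle describes.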
