The AI funding conversation has spent two years focused almost entirely on the labs: who’s training the biggest model, who has the most compute, who’s closest to AGI. Harvey is a useful corrective to that framing.
The legal AI startup this week confirmed an $11 billion valuation after raising $200 million in a round co-led by Singapore’s GIC and Sequoia, with Andreessen Horowitz and Coatue also participating. Sequoia doubled down: the valuation is up from around $8 billion in late 2025, a rise of nearly 40 per cent in under a year.
Harvey doesn’t train foundation models. Instead, it builds deeply specialised agents and workflows on top of third-party infrastructure, primarily models from OpenAI and Anthropic, to handle legal tasks such as contract review, due diligence, regulatory compliance checks and litigation support.
The entire $11 billion bet is on how Harvey applies AI to legal workflows, not on what model sits underneath it. That distinction is worth taking note of, because it speaks to where the next wave of AI value is actually going to be created.
Why Legal AI Is Such A Compelling Vertical
Law is, in many ways, a perfect test case for vertical AI. The willingness to pay is high because legal work is already billed at significant rates, which means even modest efficiency gains translate into substantial software spend. The cost of error is equally high, which drives demand for specialisation, guardrails and tight workflow integration, all of which create real switching costs once a firm is embedded.
And the distribution is sticky: Harvey already serves over 100 law firms and corporate legal teams, including many of the largest firms in the world, giving it a feedback loop that compounds with every new piece of work the tool handles.
These three factors together – high willingness to pay, high cost of error and deep embedding in institutional workflows – are exactly what investors are now actively looking for in vertical AI bets. The generic model API layer is crowded and commoditising fast. The value is shifting to the companies that have done the hard work of understanding a specific domain, building the right guardrails and getting close enough to the customer that switching away becomes materially disruptive.
Harvey’s valuation is confirmation that institutional capital has understood this shift. The question for founders is whether they have too.
The Model Isn’t The Differentiator, It’s The Workflow
This is the part of the Harvey argument that should matter most to founders building in regulated or high-stakes industries right now.
Harvey’s competitive advantage has nothing to do with having better underlying AI than anyone else. OpenAI and Anthropic models are available to anyone with an API key. What Harvey has built is the legal-engineering layer on top: the domain knowledge, the error-reduction workflows, the trust of over 100 law firms, and the feedback data that makes the product better every time it’s used.
That’s a very different kind of advantage from a model edge, and a great deal harder to replicate. You can’t download Harvey’s institutional relationships or its understanding of what a contract review workflow actually needs to look like in a real law firm. Those things are built through proximity to customers over time, which is why companies like Healx in drug discovery and Tessian in cybersecurity have been able to build defensible positions in their verticals despite not training their own foundation models either.
The overarching trend is clear: in high-value, regulated domains, the application layer is where the money is, and the model is a commodity. The domain expertise, the workflow integration and the customer trust are the product.
What This Means For The Next Wave Of AI Unicorns
Harvey is often cited as the canonical example of a new category: vertical AI companies that reach large-scale valuations not by competing with OpenAI or Anthropic but by building on top of them.
Insurance AI, healthcare AI, fintech AI, HR AI: the same logic applies across every domain where the workflows are complex, the stakes are high and the incumbent software is either old or inadequate. In each case, the founder who wins won’t necessarily be the one with the best model. It’ll be the one who understands the domain deeply enough to build something the customer can’t easily leave.
For UK and European founders, this is a particularly relevant frame. Messy, complex industries often hide the biggest opportunities, and the UK has no shortage of regulated, high-stakes sectors where vertical AI is still early. Legal, healthcare, financial services, property, insurance: all share the characteristics Harvey has exploited in law – high professional fees, high error costs and workflows that haven’t fundamentally changed in decades.
You Don’t Need To Build A Model, You Need To Own A Workflow
The lesson from Harvey’s valuation is that the model race was never the only race.
While the labs compete on capability and compute, there’s an enormous amount of value to be captured by founders who are willing to go deep into a single domain, earn the trust of the people working in it and build something that fits so precisely into their workflow that it becomes part of how they work rather than a tool they occasionally use.
Harvey proved it in law – the same opportunity exists in a dozen other verticals. For founders exploring where to build, the question isn’t ‘can we train a better model?’ It’s ‘which domain do we understand well enough that we can build the layer nobody else has built yet?’ That’s where the next crop of AI unicorns will come from, and Harvey just made it very hard to argue otherwise.