The BBC has formally threatened AI startup Perplexity AI with legal action after accusing it of using BBC content ‘verbatim’ without permission.
In a letter addressed to Perplexity’s CEO, the BBC demanded that Perplexity stop using BBC content, delete any copies it holds and pay the broadcaster for material it has already used, or face legal consequences.
This is the BBC’s first legal move against an AI company, showing the increasingly complicated relationship that is emerging between content creators and AI platforms that scrape their content to generate text.
BBC Accuses Perplexity Of Breaching Copyright Laws
In a letter addressed to Perplexity’s CEO Aravind Srinivas, the BBC accused the company of breaching copyright laws.
“This constitutes copyright infringement in the UK and breach of the BBC’s terms of use,” the letter says.
The BBC also noted that Perplexity AI, along with four other chatbots, was inaccurately summarising news stories, including stories by the BBC.
This not only damages the broadcaster’s reputation, but also falls short of editorial guidelines on accurate news reporting.
“It is therefore highly damaging to the BBC, injuring the BBC’s reputation with audiences – including UK licence fee payers who fund the BBC – and undermining their trust in the BBC,” it added. (Source: BBC)
It also noted that the BBC uses a robots.txt file, which tells automated crawlers, including AI tools, not to scrape its content, a directive it claims Perplexity has not respected.
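The BBC has not published the exact directives involved, but a robots.txt file that asks AI crawlers to stay away typically looks something like this (the bot names below are purely illustrative; real crawlers each publish their own user-agent string):

```text
# Hypothetical example: block named AI crawlers, allow all other visitors
User-agent: ExampleAIBot
Disallow: /

User-agent: AnotherAIScraper
Disallow: /

User-agent: *
Allow: /
```

Crucially, robots.txt is advisory: it only works if the crawler chooses to honour it, which is why disputes like this one turn on whether a company respected the file rather than whether it was technically blocked.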
Perplexity Fights Back
Perplexity, which brands itself as an ‘answer engine’, responded to the letter via the Financial Times, where a spokesperson said the claims were “manipulative and opportunistic” and reflected a “fundamental misunderstanding of technology, the internet and intellectual property law”. (Source: FT)
The company noted that, unlike OpenAI, Google and Meta, it does not build or train its own foundation models, and instead provides users with multiple answers drawn from the web to choose between. It also claimed that its sources are referenced properly, refuting the BBC’s accusations.
The problem? There is a growing grey area around what counts as copyright when using AI.
This is because many models simply rewrite the same content in different words. Because nothing is copied directly, some argue that this isn’t plagiarism, but a growing number of content creators, including the BBC, disagree.
What Does The Law Say?
This dispute between the BBC and Perplexity throws up some important questions around AI data scraping.
According to law firm Taylor Wessing, companies that train or run AI models on scraped content must still comply with data protection law, including the GDPR.
This means that they have to:
- Show that there is ‘legitimate interest’ for data scraping
- Respect robots.txt on websites that do not want their content scraped
- Not collect any private data
- Reference sources correctly, so people know where the information is coming from
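As a rough illustration of the robots.txt requirement above, Python’s standard library can check whether a given crawler is permitted to fetch a URL. This is a minimal sketch: the bot name and rules are hypothetical, and a real scraper would fetch the live robots.txt from the target site.

```python
from urllib import robotparser

# Hypothetical robots.txt rules: one named AI bot is blocked, all others allowed
rules = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A compliant scraper checks permission before fetching each URL
print(rp.can_fetch("ExampleAIBot", "https://example.com/news/story"))      # False: blocked
print(rp.can_fetch("OrdinaryBrowserBot", "https://example.com/news/story"))  # True: allowed
```

In practice a scraper would call `rp.set_url(...)` and `rp.read()` against the site’s real robots.txt rather than parsing a hard-coded string.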
But these laws are developing in real time. The EU’s AI Act, published on 12 July 2024, sets out more clearly the processes developers must follow when training AI models.
All companies will need to comply with this fully by August 2026.
Under this, they will need to make sure data is protected, datasets are accurate and information is properly referenced.
In short, more laws (and clarification on expectations) are coming – so AI companies need to prepare now.
The Future Of AI And Content Use
The BBC’s legal threat signals that content creators are clamping down on unlicensed use of their IP, especially when AI models use it to compete for attention online.
For AI companies, it’s a sign to take copyright laws seriously, and with new laws coming next year, it’s only going to get stricter.
For publishers, it’s a sign to keep defending the value of their work in the face of AI.
Whatever the outcome, this dispute is likely to set a precedent for how content can be used in AI for years to come.