How Is The Tech Industry Managing Deepfake Scams In The UK And EU?

Meta has launched facial recognition tools in the UK and EU to stop scam adverts using celebrity images. This technology scans advertisements flagged as suspicious and checks if the faces match public figures’ official profiles. If it finds a match and the advert is fraudulent, it will be removed.

The system is designed to stop criminals from using well-known faces to deceive people. Meta also plans to use the same technology to help users recover locked accounts. People who lose access to their accounts will be able to submit a short video of themselves to confirm their identity.

David Agranovich, a director at Meta, stated, “Scams and account security are top of mind for people. We’re constantly working on new ways to keep people safe while keeping bad actors out.”

What Recent Scams Have Used Deepfake Technology?

Criminals are increasingly using deepfakes of celebrities to trick people. Just last year, one scam operation used deepfake videos of celebrities to promote fake crypto investments. Over 6,000 victims lost a total of about £27 million.

Cases like this show how criminals are using technology to make scams more believable, and how costly falling for one can be.

How Are Public Figures Reacting To Deepfake Scams?

Actor Tom Hanks recently warned his followers about deepfake ads showing his likeness promoting products without his consent. He urged people to be cautious and not to trust advertisements that appear suspicious.

Polish billionaire Rafal Brzoska has taken a legal route, filing a complaint against Meta for allowing deepfake adverts featuring him and his wife. He identified over 260 ads using their images in deceptive promotions.

The frustration among public figures continues to grow as criminals take advantage of artificial intelligence to make fake endorsements seem real.

What Legal Measures Are Being Taken?

Governments are starting to act against the spread of fake digital content. In the United States, a bill has been introduced to criminalise the distribution of nonconsensual deepfake images, especially those used for scams. Meta and TikTok are among the companies supporting this legislation.

Australia has introduced a fraud prevention system called the Fraud Intelligence Reciprocal Exchange, which works with the Australian Financial Crimes Exchange to track and remove scam content. Since its launch, it has blocked over 8,000 fraudulent pages and 9,000 celebrity-related scams.

Australians reported losing millions to scams on social media in the first eight months of 2024, with fake investment schemes accounting for nearly $30 million of the total.

Authorities are starting to act, but the use of artificial intelligence in scams is evolving quickly, making this an ongoing challenge.

How Can People Protect Themselves From Deepfake Scams?

With deepfake scams becoming harder to spot, online users might want to consider the following:

Double-check endorsements – be skeptical of celebrity promotions for financial investments or products. If an ad seems suspicious, check the official social media accounts of the person featured.

Stay informed about scam tactics – keeping up to date with how scams work can help people recognise warning signs.

Report suspicious content – if something seems fraudulent, reporting it to the platform can help prevent others from being misled.