Can AI Help Detect Fake Reviews In Ecommerce?

The Digital Markets, Competition and Consumers Act's consumer protection rules took effect on 6 April 2025, drawing a line under fake online reviews and hidden costs. The Act bans false reviews outright and requires every retailer to show the full charge from the very start of a purchase. Ministers say families could keep an extra £2.2 billion each year once hidden fees disappear.

Hidden booking or admin charges that used to appear just before checkout must now sit in the headline price. A train ticket, a pizza or a cinema seat will show the same price from basket to payment screen, sparing shoppers a last-minute surprise.

Platforms that host reviews carry new duties as well: they must spot and remove fake feedback or face fines of up to 10% of global turnover. The CMA plans to chase the worst offenders first while keeping paperwork light for small firms.

Government figures show that 9 in 10 shoppers read reviews, and those opinions guided £217 billion in online spending during 2023. Honest businesses deserve a level playing field where fake reviews no longer overpower real voices.

How Does Amazon’s System Spot False Reviews?

Amazon runs machine learning models that watch wording, timing, buyer history and links among accounts. In 2023 those models blocked more than 250 million suspected fake reviews before any customer saw them.

LLMs judge tone, copied phrases and sudden changes in sentiment. Deep graph networks trace connections between sellers and reviewers, catching groups that work together to pump up ratings. When the signals align, the review vanishes and the writer may lose posting rights or end up in court.

Speed matters. A comment that stands out can change buying choices within minutes, so the models run in real time. Borderline cases reach trained investigators who look at extra data and keep genuine feedback online.
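
Amazon has not published its detection code, so any example can only gesture at the approach. The sketch below is a toy illustration in Python, with every name and threshold invented for the example: it fuses three weak signals of the kind described above (near-duplicate wording, a burst of posts in a short window, and repeat reviewer-seller pairings) into a single suspicion score that a borderline case could carry to a human investigator.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from difflib import SequenceMatcher


@dataclass
class Review:
    reviewer_id: str
    seller_id: str
    text: str
    posted_at: datetime


def suspicion_score(review: Review, history: list[Review]) -> float:
    """Fuse three weak signals into one toy score between 0 and 1."""
    peers = [r for r in history if r.seller_id == review.seller_id]

    # Signal 1: near-duplicate wording against other reviews of the same seller.
    dup = max((SequenceMatcher(None, review.text.lower(), r.text.lower()).ratio()
               for r in peers), default=0.0)

    # Signal 2: burst timing -- many posts for this seller within one hour.
    burst = sum(1 for r in peers
                if abs(r.posted_at - review.posted_at) <= timedelta(hours=1))
    burst_signal = min(burst / 10, 1.0)       # saturates at 10 posts per hour

    # Signal 3: repeat pairing -- how often this reviewer reviews this seller.
    repeats = sum(1 for r in peers if r.reviewer_id == review.reviewer_id)
    repeat_signal = min(repeats / 5, 1.0)     # saturates at 5 repeat reviews

    # Equal weights for clarity; a production system would learn them from data.
    return (dup + burst_signal + repeat_signal) / 3


history = [
    Review("u1", "s9", "Great product, fast delivery, five stars!",
           datetime(2025, 4, 6, 10, 0)),
    Review("u2", "s9", "Great product, fast delivery, five stars!!",
           datetime(2025, 4, 6, 10, 5)),
]
incoming = Review("u3", "s9", "Great product fast delivery five stars",
                  datetime(2025, 4, 6, 10, 10))

# Higher scores would be routed to a human investigator rather than auto-removed.
print(f"suspicion score: {suspicion_score(incoming, history):.2f}")
```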

Which Browser Tool Keeps Buyers Safe?

Fakespot, a free extension for Chrome, Firefox, iOS and Android, scans pages on Amazon, eBay, Walmart and other large marketplaces. Over one million users have installed it.

When a shopper opens a product page, Fakespot drops a badge on screen. Green hints at strong trust; red warns of trouble. The colour appears without extra clicks, making the verdict hard to miss.

The engine studies language patterns, purchase timing and seller history against a huge reference set. If the grade looks poor, the plug‑in also hunts for the same item from a trusted seller and shows that link beside the warning.
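
Fakespot's grading model is proprietary, so the snippet below only illustrates the general idea of checking a listing's pattern against a reference baseline. The thresholds, grade bands and baseline five-star share are assumptions made up for this sketch, not Fakespot's actual rules.

```python
# Toy grader: compare one listing's review pattern against a reference baseline.
def grade_listing(ratings: list[int], review_dates: list[str],
                  baseline_five_star_share: float = 0.55) -> str:
    """Return a rough A/C/F grade from two simple checks."""
    five_star_share = sum(1 for r in ratings if r == 5) / len(ratings)

    # Check 1: how far the five-star share sits above a typical baseline.
    inflation = max(0.0, five_star_share - baseline_five_star_share)

    # Check 2: how clustered the posting dates are (many reviews on one day).
    busiest_day = max(review_dates.count(d) for d in set(review_dates))
    clustering = busiest_day / len(ratings)

    penalty = inflation + clustering
    if penalty < 0.3:
        return "A"      # pattern close to the reference set
    if penalty < 0.6:
        return "C"      # some inflation or clustering
    return "F"          # strong signs of manipulation


print(grade_listing(
    ratings=[5, 5, 5, 5, 5, 5, 4, 5],
    review_dates=["2025-05-01"] * 6 + ["2025-05-09", "2025-06-02"],
))
```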

Mobile users get the same protection on handheld screens, closing the gap for those who switch between laptop and phone during a single purchase. Early adopters praise the clarity: no need to scroll through dozens of mixed comments when a single badge does the heavy lifting.

Retailers pay attention as well. A red badge can drain traffic overnight, so honest shops keep an eye on third-party marketing firms and ask verified buyers for balanced stories rather than short bursts of praise.

Where Is Academic Research Heading?

A 2022 paper in the Journal of Business Research by Sami Ben Jabeur, Hossein Ballouk, Wissal Ben Arfi and Jean‑Michel Sahut reviewed more than a decade of fake-review research and grouped it into three themes: price and reputation effects, machine learning filters, and shopper behaviour once fraud enters the mix.

The review finds that mixing text cues with timing, location and emotional markers lifts accuracy. Word choice alone no longer cuts it; the context around a comment adds power to the model.
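
To make that point concrete, here is a minimal sketch of such a feature mix using scikit-learn. The column names, toy data and the specific features (posting delay as a timing cue, exclamation count as a crude emotion marker) are illustrative assumptions, not taken from the paper.

```python
# Sketch: combine text cues with timing and emotion features in one classifier.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

data = pd.DataFrame({
    "text": ["Amazing, best purchase ever!!!", "Decent kettle, lid feels loose",
             "Amazing, best purchase ever!!", "Arrived late but works fine"],
    "hours_since_listing": [2, 300, 3, 480],      # timing cue
    "exclamation_count": [3, 0, 2, 0],            # crude emotion marker
    "label": [1, 0, 1, 0],                        # 1 = fake, 0 = genuine
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),          # word-choice cues
    ("meta", "passthrough", ["hours_since_listing", "exclamation_count"]),
])

model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(data.drop(columns="label"), data["label"])
print(model.predict(data.drop(columns="label")))
```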

Scholars also call for shared, anonymised datasets so peers can test new code. Open data would speed progress and help regulators check that new tools meet the bar set in UK law.