Will Meta’s Plans To Deploy Its Own Moderation Tools Better Regulate AI Scams?

Meta says it will roll out more advanced AI systems across Facebook and Instagram over the next few years to transform how it enforces its rules. The company says these systems will find and remove severe violations such as scams and illegal content more accurately, so people see less of them.

In its update, Meta said the new systems will “reduce the chance that scammers trick people into giving away their login details, ultimately finding and mitigating 5,000 scam attempts per day that no existing review team had caught before.”

Meta also said the systems “identify and prevent more accounts from impersonating celebrities and other high-profile people, which helped us to reduce user reports of the most impersonated celebrities by over 80%.”

It added, “After being tested more broadly, this AI drove down views of ads with scams and other serious violations by 7%, offering promising results and better protections for users and brands.”

AI-driven scams have increased across social platforms. Criminals now use AI to clone voices, generate fake investment ads and build convincing phishing pages. Even on Meta’s own apps, users report more sophisticated fraud attempts. Against that backdrop, Meta is betting that AI can police AI-driven crime faster than human teams could alone.

What Happens To Human Moderators?

The change also affects the people who review content. According to Engadget, Meta plans to “further ‘transform’ its approach by drastically reducing the number of human moderators in favour of AI-based systems”. The transition will take place “over the next few years,” and Meta says the shift will allow it to catch more issues faster than its current methods do.

Meta did not say how many contractors could lose work. The company employs thousands of contract workers worldwide to review posts flagged by AI systems and user reports. Engadget reported that Meta “didn’t say how much of its contract workforce might be cut as it makes this transition”.

At the same time, Meta insists humans will stay involved in high-risk decisions. In its own statement the company said, “Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high-impact decisions. For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement.”

That means account bans and police referrals should not rest on automation alone. It also means regulators will watch closely to see if human oversight is strong enough when mistakes happen.
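Meta has not published how that routing works. As an illustration only, a common pattern for this kind of split is confidence-based triage: the system acts automatically only when the model is confident and the action is low-stakes, and everything else, including every high-stakes action, is queued for human review. A minimal sketch in Python, with every name, threshold and action label hypothetical:

```python
from dataclasses import dataclass

# Hypothetical severity set: actions Meta says humans must decide
# (account disablement, law-enforcement referral) are never automated.
HIGH_STAKES = {"disable_account", "report_to_law_enforcement"}

@dataclass
class Flag:
    content_id: str
    proposed_action: str   # e.g. "remove_post", "disable_account"
    confidence: float      # classifier score in [0, 1]

def route(flag: Flag, auto_threshold: float = 0.97) -> str:
    """Decide whether a flag is enforced automatically or escalated.

    Illustrative threshold only; a real system would tune it per
    violation type and audit the outcomes.
    """
    if flag.proposed_action in HIGH_STAKES:
        return "human_review"      # severity overrides confidence
    if flag.confidence >= auto_threshold:
        return "auto_enforce"      # confident and low-stakes
    return "human_review"          # uncertain cases go to people

# A high-confidence post removal is automated; an account
# disablement is escalated regardless of confidence.
print(route(Flag("p1", "remove_post", 0.99)))       # auto_enforce
print(route(Flag("a7", "disable_account", 0.99)))   # human_review
```

The design choice this sketch captures is that severity, not model confidence, decides whether a person sees the case, which is what Meta’s statement about disablement appeals and law-enforcement reports implies.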

How Will The AI Support Assistant Affect Users And Complaints?

Meta is also launching a Meta AI support assistant inside Facebook and Instagram. It offers 24-hour help with account problems such as password resets, privacy settings and reporting scams or impersonation accounts.

The company said, “When you have an account issue, you need a solution – not just a suggestion. The new Meta AI support assistant is designed to help resolve account problems for you from start to finish.” It added that the assistant “can respond to requests typically in under five seconds, dramatically reducing wait times compared to traditional help centre searches or seeking answers on external websites.”

Users will also be able to see why their content was taken down, review appeal options and track what happens next. Meta said that among people who gave feedback, “the majority report a positive experience with the Meta AI support assistant.” That feedback comes from Meta’s own data.

For regulators, this creates a clearer audit trail. Faster reporting tools and visible appeal tracking could make it easier to measure how scams are handled and how quickly harmful content goes down. At the same time, public trust will depend on accuracy. If AI removes lawful content or fails to catch fraud, complaints will grow.

Meta says its Community Standards are not changing. The rules stay the same, but enforcement becomes more automated and more global. The company says its advanced AI can work in languages spoken by 98% of people online, up from the roughly 80 languages its earlier moderation systems covered. That wider language coverage, according to Meta, allows it to act against scams and fraud across more regions without waiting for local review teams.

Regulators in the UK, EU and elsewhere have pressed platforms to act faster against online fraud. Meta’s bet is that AI systems trained and overseen by human teams can meet that pressure at scale. The next few years will test whether those systems catch more scams, handle appeals fairly and meet legal standards for accountability.