How Is ‘AI Slop’ Directly Funding The Rise Of Cyber Fraud?

One of the biggest threats to online security, both right now and for the year ahead, is AI. The technology makes it easier for criminals to automate scams, fake content and social engineering attacks.

Research connected to the World Economic Forum’s Global Cybersecurity Outlook shows just how widespread cyber fraud has become in daily life. Around 73% of respondents said they or someone they know experienced cyber fraud in 2025. Corporate leaders also rank it as their top cybersecurity threat, with about 77% reporting that incidents increased over the past year.

Generative AI tools let criminals produce convincing messages and media in seconds. Criminals can now create emails, social posts and ads that look legitimate. The World Economic Forum explains, “AI is enabling rapid, tailored content creation, allowing criminals to scale and personalise scams with unprecedented efficiency.”

These capabilities also feed automated advertising fraud systems. Operations such as the campaign known as AutoBait show how generative AI produces large volumes of clickbait articles and images. Each page costs around $2.25 to generate.

The pages appear as slide-based stories packed with advertising slots. Operators place them inside abandoned but once-legitimate websites, a tactic that helps the pages attract traffic and pass automated checks.

This method produces millions of ad impressions, directing advertising money to automated made-for-advertising websites that exist only to collect revenue.
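A back-of-the-envelope calculation shows why this model is attractive to operators. Only the roughly $2.25 per-page generation cost comes from the reporting above; the CPM rate and number of ad slots below are illustrative assumptions, not figures from any source.

```python
# Rough economics of an AI-generated made-for-advertising page.
# PAGE_COST is from the reporting; the other two values are hypothetical.
PAGE_COST = 2.25          # reported generation cost per clickbait page ($)
ASSUMED_CPM = 1.50        # assumed revenue per 1,000 ad impressions ($)
AD_SLOTS_PER_PAGE = 10    # assumed number of ad slots on a slide-based page

# Ad impressions one page must serve before it pays for itself:
break_even_impressions = PAGE_COST / (ASSUMED_CPM / 1000)
print(f"Break-even: {break_even_impressions:.0f} impressions")  # 1500

# With multiple ad slots per page, far fewer page views are needed:
views_needed = break_even_impressions / AD_SLOTS_PER_PAGE
print(f"About {views_needed:.0f} page views to break even")  # 150
```

Under these assumptions, a page that costs a couple of dollars to generate recoups its cost after only a few hundred views, which is why operators can afford to produce such pages by the thousand.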

Where Does The Advertising Money Go?

Digital advertising fraud drains billions from the global economy year after year. Research by Search Engine Lead estimates that $84 billion in global digital ad spending went to fraud in 2023, up to 22% of all online ad spend.

Statistics gathered in the Digital Ad Fraud 101 document, written by the creators of Operation AutoBait, summarise the scale of the problem across multiple research sources. The document explains that estimates differ depending on how fraud is measured, though most studies place the cost in the tens of billions of dollars each year.

Projections suggest the scale will keep growing: PR Newswire reports that losses could exceed $170 billion a year by 2028 if the current trajectory continues. The same document also notes that mobile advertising often experiences higher fraud rates.

MediaPost research mentioned in the document reports that roughly 30% of mobile advertising spending may be affected. The authors explain that these figures depend on definitions used to measure fraud or invalid traffic.

Researchers identify four main destinations for this money. One involves organised cybercrime networks running botnets of malware-infected devices that generate fake clicks and ad impressions.

Another involves made-for-advertising websites filled with scraped or low-quality content created to harvest programmatic advertising revenue. Mobile fraud ecosystems form a third destination through fake app downloads, click injection and spoofed user behaviour.

A fourth category exists inside advertising supply chains where complex intermediary structures absorb revenue through hidden fees and arbitrage. Academic research published on arXiv explains that weak transparency in programmatic advertising makes the activity difficult to trace and allows bad actors to divert revenue or hide fraudulent traffic inside legitimate advertising flows.


How Does AI Supercharge Scams And Fraud?

AI tools allow criminals to create persuasive scams at scale. Synthetic voice technology can replicate a person’s speech patterns and tone. Criminals then use the voices during phishing calls or video meetings.

One well-known case involved a British engineering company that lost $25 million after criminals used an AI-generated voice during a video call. The impersonation convinced employees that the request came from a trusted executive.

Deepfake videos have also entered political and social spaces. One fabricated clip showed an Irish presidential candidate falsely announcing a withdrawal from an election campaign.

Another scam used a manipulated Instagram video appearing to show Indonesia’s president directing citizens to a WhatsApp number to claim financial aid.

The World Economic Forum describes the situation in direct terms. It states that “Fraud has become the connective tissue of cyber risk, affecting households, corporations, and national economies simultaneously.”

The report also explains how small scams can trigger large damage. “One scam email can lead to data breaches that cause a breakdown in a company’s operations, setting off a chain reaction that can ripple through supply chains and across borders, denting not just bottom lines, but trust in digital and international systems.”

What Makes The Problem Difficult To Stop?

AI improves both cyber defence tools and criminal tactics. Machine learning systems detect suspicious transactions in banking and payment systems through pattern analysis that flags irregular behaviour.
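To illustrate the kind of pattern analysis described above, here is a minimal sketch of outlier-based transaction flagging. The function name, threshold and data are hypothetical; real banking systems rely on far richer features and trained models rather than a single statistical check.

```python
# Minimal sketch of pattern-based transaction flagging (illustrative only).
from statistics import mean, stdev

def flag_irregular(amounts, threshold=2.0):
    """Flag transaction amounts that deviate sharply from the account's
    usual spending pattern, using a simple z-score check."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation in spending, nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

history = [42.0, 55.0, 38.0, 61.0, 47.0, 50.0, 44.0, 9800.0]
print(flag_irregular(history))  # prints [9800.0]
```

Production systems apply the same underlying idea, flagging behaviour that deviates from an established baseline, but across many signals at once, such as location, merchant type, timing and device fingerprints.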

At the same time, the technology lowers the cost of deception. Criminals can generate convincing messages, images and voices at extremely low cost. The World Economic Forum explains that “AI’s potential to automate cybercrime may only be matched by its capacity to prevent it.”

The Global Cybersecurity Outlook also identifies structural problems that weaken cyber defences. Regulations differ between countries, intelligence sharing stays limited, and many organisations lack cybersecurity skills.

SMEs feel it the most, with around 46% reporting critical cybersecurity skills shortages, according to the report.

International organisations have begun coordinating a response. Events such as the United Nations and Interpol Global Fraud Summit in Vienna bring governments, technology groups and civil society together to tackle cybercrime across borders.