Cheapfakes are real videos, images or audio edited with everyday tools. They might change a date on an ID card, reuse old footage under a different caption, or stitch together unrelated clips with a voiceover. They require no technical skill, only access to basic software, which makes them an easy way to spread false information.
On YouTube, some cheapfakes have drawn huge audiences. WIRED found 120 channels running AI-assisted celebrity fanfiction videos that fall into this category. One featured Mark Wahlberg in a furious exchange on The View. The scene never happened; Wahlberg has not appeared on the show since 2015. The clip was nothing more than a still image with an AI voice reading a dramatic script, yet it reached 460,000 viewers.
Simon Clark from the University of Bristol said these videos often work because they tap into strong emotions such as outrage, which makes people more likely to share them even if they doubt the content. YouTube has removed 37 channels for not carrying clear disclaimers, but many remain active.
What Makes Cheapfakes Dangerous?
Cheapfakes can be used to support identity fraud. Altered documents have been used to open bank accounts, apply for loans and access healthcare services. In some cases, they have bypassed both human checks and automated systems.
Sandra Wachter from the University of Oxford said the low cost of AI tools makes it easier to fill the internet with attention-grabbing material, regardless of whether it is true. She explained that outrage and drama keep viewers watching longer, which is exactly what the platforms reward.
Reality Defender, a company that detects manipulated media, said even people who are aware of AI trickery sometimes have to double-check with experts before deciding whether a clip is genuine or a cheapfake. This uncertainty makes cheapfakes a lasting problem for online trust and security.
What Is The Difference Between Deepfakes And Cheapfakes?
Data Society has created a spectrum that ranks different types of altered audio and video by how difficult they are to produce and who is able to produce them. At one end are deepfakes, which rely on advanced machine learning, large amounts of computing power and technical knowledge. At the other end are cheapfakes, which can be made with free or inexpensive tools and very little training. The closer a technique sits to the cheapfake end of the spectrum, the more people are able to create it.
Deepfakes use techniques such as AI face swaps, voice cloning and lip-syncing. They are often built on models such as generative adversarial networks or recurrent neural networks, which need a lot of training data to produce convincing results. Data Society gave examples such as Jordan Peele’s Barack Obama public service clip and AI art created by Mario Klingemann. These are intricate projects that can replace a person’s face or voice with a realistic digital version.
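To give a sense of what a generative adversarial network involves, the sketch below is a minimal, toy training loop in Python (PyTorch). It is not any specific deepfake system and is not drawn from the article; the network sizes, the random stand-in data and the training schedule are all placeholder assumptions, included only to show the generator-versus-discriminator idea.

```python
# Minimal, illustrative GAN training loop (PyTorch). Not a deepfake system:
# it only shows a generator learning to fool a discriminator.
# All sizes, data and hyperparameters below are placeholder assumptions.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed toy dimensions

# Generator: maps random noise to a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image vector looks (raw logit).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):                  # placeholder schedule
    real = torch.randn(32, image_dim)     # stand-in for a batch of real training data
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator learns to separate real samples from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to make samples the discriminator labels as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Real deepfake systems scale this adversarial idea up with far larger networks and huge face or voice datasets, which is why they demand the computing power and expertise Data Society describes.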
Cheapfakes use much simpler methods. A clip can be slowed down to change how a person sounds, sped up to alter their behaviour, or relabelled so that it appears to show something else. Some cheapfakes use lookalikes instead of digital edits. Others reuse old footage in a misleading way.
These can be made with basic editing programmes such as Adobe Premiere Pro or Sony Vegas Pro, or even with quick in-camera tricks. Data Society’s spectrum shows that while both deepfakes and cheapfakes can mislead viewers, the skill, time and tools involved are very different.
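As a rough illustration of how little effort a cheapfake edit can take, the sketch below slows a clip to half speed with a single ffmpeg command called from Python. The file names are hypothetical, ffmpeg is assumed to be installed, and this is simply one example of the speed-change trick described above, not a method cited in the article.

```python
# Sketch of a basic cheapfake-style edit: slow a clip to half speed with ffmpeg.
# Assumes ffmpeg is installed; "input.mp4" and "output_slow.mp4" are hypothetical names.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-filter:v", "setpts=2.0*PTS",  # double the video timestamps -> half playback speed
    "-filter:a", "atempo=0.5",      # slow the audio tempo to match (pitch preserved)
    "output_slow.mp4",
], check=True)
```

A one-line command like this is the kind of edit that sits at the accessible end of Data Society’s spectrum: no training data, no specialist hardware, just freely available software.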