AI Incidents Reached 346 Reported Cases In 2025, AI Incident Database Says

Recorded AI-related harm reached a new high last year. The AI Incident Database logged 346 cases during 2025, according to data reviewed by Cybernews. The total covers fraud, impersonation, and unsafe content linked to AI systems used by the public.

Cybernews examined every entry and grouped them by type. The results show that misuse tied to trust and deception appears more often than technical faults or accidental errors. The database draws only on reported cases, meaning the real scale is likely higher.

The data points to growing exposure as AI tools become easier to access. Cybernews treats this as a public safety issue tied to daily use, not a niche technology concern.


What Types Of Incidents Appeared Most Often?


Deepfakes dominated the database. Of the 346 incidents, 179 involved fake audio, video, or images. These ranged from cloned voices used in phone scams to fake videos of public figures shared online.

Fraud linked to AI appeared in 132 cases. Cybernews found that 81% of those fraud cases relied on deepfake technology, or 107 incidents. Criminals used AI to impersonate people the victims trusted, such as relatives or well-known figures.

Financial losses often ran to thousands of dollars per case, and sometimes far more. A woman in Florida lost $15,000 after hearing a fake version of her daughter's voice. Another case saw a Florida couple lose $45,000 after criminals posed as Elon Musk and promoted fake investments.

The database also records a UK case in which a widow lost £500,000 during a romance scam tied to an AI impersonation of actor Jason Momoa. Cybernews attributes the success of these scams to emotional pressure and familiarity.

What Did The Report Find About Unsafe And Violent Content?


Incidents tied to violent or unsafe content appeared less often but carried heavier consequences. The database lists 37 such cases in 2025. Cybernews describes these as the most severe category.

Some cases involved self-harm, in which chatbots gave dangerous advice after specific prompts, according to Cybernews testing. The research found that existing safety controls can fail under targeted use.

One widely reported case involved 16-year-old Adam Raine, who died by suicide after interacting with ChatGPT. OpenAI rejected claims that the chatbot encouraged his actions. The case remains listed because of the reported link.

Violence also appeared outside mental health cases. An IT professional tested a chatbot called Nomi and found that it could generate instructions for murder after sustained prompting. The database records this as unsafe content generated through AI.


Which AI Tools Appeared In The Cases And What Does That Mean?


Most incidents did not name a specific tool. Among those that did, ChatGPT appeared most often, showing up in 35 cases during 2025, according to Cybernews.

These incidents ranged from copyright disputes heard in a German court to mental-health-related reports. Grok, Claude, and Gemini followed, each named in 11 cases.

Cybernews says tool names appear infrequently because reports tend to describe outcomes rather than the specific systems involved. Its own testing found that well-known AI systems can generate dangerous output in response to carefully written prompts, leaving users exposed.