Clearview AI, a facial recognition firm, has offered its services to the Ukrainian government for free. The technology could help the country uncover Russian infiltrators, identify the dead and tackle misinformation.
The company says it offers a searchable database of 10 billion faces scraped from the web, a practice that has previously drawn fines from privacy regulators.
How Could the Tool Be Used?
Clearview AI’s offer was sent to the Ukrainian government in a letter, first reported by Reuters. In it, the firm claims that a substantial portion of its facial database is drawn from Russian social media platforms, including more than two billion images from Vkontakte, often described as the “Facebook of Russia”.
The letter details a number of scenarios in which Clearview AI’s technology could be used, including those mentioned above (identifying infiltrators, identifying the dead, tackling misinformation), as well as reuniting separated family members without the need for identity paperwork.
Clearview AI has faced criticism from privacy watchdogs and has been hit with fines, including a provisional £17m fine in November from the UK’s Information Commissioner’s Office (ICO) and a €20m fine from Italian regulators.
Concerns have also been raised that the technology could backfire, for example by misidentifying people at checkpoints and, as the Surveillance Technology Oversight Project’s Albert Fox Cahn told Reuters, “harming the very people it’s supposed to help.”