Facial Recognition Technology is now in use in London
In February, live facial recognition technology was deployed near Oxford Circus in what was believed to be only the second operational use of the surveillance system by the Metropolitan Police. The van-mounted cameras were seen around the Oxford Circus area on 20th February. The police disclosed via Twitter that the cameras would be used “at key locations in Westminster from 11 am”. The tweet went on to say: “This technology helps keep Londoners safe. We are using it to find people who are wanted for violent and other serious crimes”.
This form of surveillance faces criticism from civil liberties and privacy campaigners. UK civil liberties group Big Brother Watch also took to Twitter, posting a picture of the van alongside the statement: “It’s alarming to see biometric mass surveillance being rolled out in London. Never before have citizens been subjected to identity checks without suspicion, let alone on a mass scale.” Silkie Carlo, the director of Big Brother Watch, added: “We’re appalled that Sadiq Khan has approved such useless, dangerous and authoritarian surveillance technology for London…This undemocratic expansion of the surveillance state must be reversed.”
How Does It Work?
The Metropolitan Police reportedly spent £200,000 on rolling out this controversial technology. Trials took place in locations such as the Westfield shopping centre in Stratford and the West End, and operational use of the cameras began at the Stratford Centre in February. Passersby are scanned by the cameras and checked against a “watchlist” made up of wanted suspects. If a match is found, police officers then approach the suspect.
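The scan-and-match step described above can be sketched in miniature. This is a hypothetical illustration, not the Met's actual system: real deployments reduce each face to an embedding vector with hundreds of dimensions, whereas the toy vectors, names, and threshold below are invented for clarity.

```python
import math

# Hypothetical sketch of watchlist matching: each face is reduced to a
# numeric "embedding" vector, and a live capture is compared against
# every watchlist entry, flagging the best match above a threshold.

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_watchlist(capture, watchlist, threshold=0.9):
    """Return the name of the best watchlist match above threshold, else None."""
    best_name, best_score = None, threshold
    for name, embedding in watchlist.items():
        score = cosine_similarity(capture, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy embeddings; real systems use far higher-dimensional vectors.
watchlist = {
    "suspect_a": [0.9, 0.1, 0.3],
    "suspect_b": [0.2, 0.8, 0.5],
}

print(check_watchlist([0.88, 0.12, 0.31], watchlist))  # close to suspect_a
print(check_watchlist([0.1, 0.1, 0.9], watchlist))     # no confident match
```

The threshold is the operational dial: set it too low and innocent passersby are flagged; set it too high and genuine suspects pass unnoticed.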
The Metropolitan Police state that the cameras have a very high success rate, producing only one false alert in every 1,000 cases. However, researchers from the University of Essex have reported that the cameras are accurate just 80% of the time, and have argued that the scheme could be ruled illegal and should be stopped immediately. A spokesperson for the Mayor of London asserted that Sadiq Khan has sought reassurance that these methods are in accordance with conditions set out by an independent report on the ethical use of the technology.
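The two figures above measure different things, which is easy to miss: the Met's 1-in-1,000 rate is per face scanned, while the Essex figure concerns how often an alert, once raised, is correct. A rough illustration of how each plays out at scale (the crowd and alert counts below are hypothetical assumptions; only the rates come from the reporting):

```python
# Illustrative base-rate arithmetic; crowd sizes are hypothetical,
# the two rates are the ones quoted in the article.

scans_per_day = 10_000              # hypothetical number of faces scanned

# Met's figure: one false alert per 1,000 scans.
false_alerts = scans_per_day // 1_000
print(false_alerts)                 # innocent people flagged per day

# Essex researchers' figure: alerts correct only ~80% of the time,
# i.e. roughly 1 in 5 alerts points at the wrong person.
alerts_raised = 50                  # hypothetical alerts over a deployment
wrong_matches = round(alerts_raised * (1 - 0.8))
print(wrong_matches)                # alerts that misidentify someone
```

Even a per-scan false alert rate that sounds small therefore translates into a steady stream of innocent people being stopped once tens of thousands of faces are scanned.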
Fears Mount Internationally
This is not a problem facing London streets alone. Police forces in the Indian capital New Delhi and the northern state of Uttar Pradesh have been using facial recognition technologies during protests. The protests in these regions began in mid-December in condemnation of new citizenship laws that critics say marginalise Muslims. Indian startup Innefu Labs provides the Delhi Police with its facial recognition software AI Vision, which also includes gait and body analysis. Activists fear the lack of regulation of the new technology, as well as the secrecy surrounding its use on the public. The software's deployment during the Delhi protests only became known when it was revealed by the Indian Express newspaper.
However, this holds no comparison to the extensive and rapidly growing network of facial recognition cameras in use in China, where citizens are subject to near-constant mass identification and surveillance. Facial recognition is now being used in schools to track the behaviour of students. Most critically, human rights campaigners have accused Chinese authorities and related surveillance companies of “exporting authoritarianism” via the technology, which has led to the detention of more than 1.8 million people, predominantly Uighur Muslims.