Live facial recognition technology being used by the Metropolitan and South Wales police forces has been branded “unethical” and possibly illegal in a new report. Researchers from the Minderoo Centre for Technology and Democracy at Cambridge University have called for a halt to its use, declaring “the risks are far greater than any small benefit that might be gained from using it”.
The technology takes footage from a live CCTV camera feed, detects faces and compares their features in real time against a pre-determined watchlist of “people of interest”. When a possible match is found, the system generates an alert that officers can then investigate.
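At its core, this is a one-to-many similarity search: each detected face is converted into a numerical “embedding” and scored against every embedding on the watchlist. The Python sketch below is purely illustrative; the embed_face stand-in, the 64×64 crop size and the 0.6 alert threshold are assumptions made for demonstration, not details of any police system.

```python
import numpy as np

EMBED_DIM = 128
CROP_SHAPE = (64, 64)  # assumed face-crop size
rng = np.random.default_rng(0)
_PROJECTION = rng.standard_normal((EMBED_DIM, CROP_SHAPE[0] * CROP_SHAPE[1]))

def embed_face(face_crop: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model: a fixed random projection,
    used only so the sketch runs end to end. A real deployment would use a
    trained neural network here."""
    vec = _PROJECTION @ face_crop.ravel().astype(float)
    return vec / np.linalg.norm(vec)

def watchlist_alerts(face_crop: np.ndarray,
                     watchlist: np.ndarray,
                     threshold: float = 0.6) -> np.ndarray:
    """One-to-many search: score the probe face against every watchlist
    embedding (cosine similarity, since all vectors are unit-norm) and
    return the indices of any entries above the alert threshold."""
    probe = embed_face(face_crop)
    scores = watchlist @ probe
    return np.flatnonzero(scores >= threshold)

# Toy usage: a three-entry "watchlist" and one detected face from a frame.
watchlist = np.stack([embed_face(rng.standard_normal(CROP_SHAPE)) for _ in range(3)])
frame_crop = rng.standard_normal(CROP_SHAPE)
print("alerts:", watchlist_alerts(frame_crop, watchlist))
```

Every design choice in that loop, from the quality of the embedding model to where the threshold is set, shifts the balance between missed matches and false alerts, which is what the debate over accuracy and safeguards below turns on.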
The problems with police live facial recognition
Researchers at the Minderoo Centre drew up “minimum ethical and legal standards” that they argue should govern any use of facial recognition technology, then tested UK police forces’ use of the technology against them, finding that every deployment failed to meet the minimum.
Professor Gina Neff, executive director of the centre, said her team compiled a list of all the ethical guidelines, legal frameworks and current legislation to create the measures used in the tests. These aren’t legal requirements, but rather what the researchers say should serve as a benchmark: protecting privacy and human rights, ensuring transparency, guarding against bias, and providing accountability and oversight of how personal information is used and stored.
All the current police use cases for live facial recognition failed the test, Professor Neff says. “These are complex technologies, they are hard to use and hard to regulate with the laws we have on the books,” she told Tech Monitor. “The level of accuracy achieved does not warrant the level of invasiveness required to make them work.
“These deployments of facial recognition technologies have not been shown to be effective and have not been shown to be safe. We want documentation on how the technology is used, regulated and monitored and urge police to stop using live facial recognition technology.”
A spokesperson for the Metropolitan Police said there is a firm legal basis for the way it uses live facial recognition technology, which has had significant judicial scrutiny in both the divisional court and the Court of Appeal.
The Court of Appeal recognised that basis in common law policing powers, finding that the legal framework regulating LFR deployments contains safeguards and allows each use to be examined: whether there was a proper law enforcement purpose, and whether the means used were strictly necessary.
The technology has enabled the Met to “locate dangerous individuals and those who pose a serious risk to our communities”, the spokesperson explained. Officers deploy it with a primary focus on the most serious crimes, and on locating people wanted for violent offences or those with outstanding warrants who are proving hard to find, they said.
“Operational deployments of LFR technology have been in support of longer-term violence reduction initiatives and have resulted in a number of arrests for serious offences including conspiracy to supply class A drugs, assault on emergency service workers, possession with intent to supply class A & B drugs, grievous bodily harm and being unlawfully at large having escaped from prison,” the spokesperson added.
Current police uses of facial recognition fall short of standards
Professor Neff says the study measured police deployments against best practices, existing principles and the guidelines already in place, and found them wanting. “By the best principles and practices and laws on hand today, these deployments are not meeting that standard,” she says.
“That is why we say the technologies are not fit for purpose. They are not meeting the standards for safe operation of large-scale data systems. For example, what safeguards were in place during the procurement process? This might not be covered by the rule of law used in a court case, but it is something we have guidelines on for use in public agencies.”
Deputy information commissioner Stephen Bonner told Tech Monitor: “We continue to remind police forces of their responsibilities under data protection law when using live facial recognition technology in public places. This includes making sure that deployments are strictly necessary, proportionate and the public is kept informed.”
Imogen Parker, associate director for policy at the Ada Lovelace Institute, which recently carried out an extensive legal review of the governance of biometric data in England and Wales, said this new research highlights the ethical, legal and societal concerns around biometric technology.
The Ryder Review into biometric data regulation found that existing legal protections were fragmented, unclear and “not fit for purpose”.
“The fact that all three of the police deployments audited failed to meet minimum ethical and legal standards continues to demonstrate how governance failings are leading to harmful and legally questionable deployments in practice,” said Parker in an emailed statement.
“The challenges presented by biometric technologies are not limited to facial recognition. Attempts to use biometrics to make inferences about emotion, characteristics or abilities – often without a scientific basis – raise serious questions about responsible and legitimate use, something recently highlighted by the ICO.”
New legislation required to govern facial recognition
Parker argues there is an urgent need for new, comprehensive legislation addressing the governance of all biometric technologies, not just facial recognition or its use by police. “This should be overseen and enforced by a new regulatory function, to require minimum standards of accuracy, reliability and validity, as well as an assessment of proportionality, before these technologies are deployed in real-world contexts,” she says.
Without it, the legal basis for live facial recognition will remain unclear, she says, adding that “the risk of harm persists. There must be a moratorium on all uses of one-to-many facial identification in public spaces until such legislation is passed.”
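Parker’s focus on one-to-many identification matters because matching one probe face against a large watchlist compounds whatever error rate a single comparison carries. A back-of-the-envelope illustration, using an assumed rather than measured per-comparison false match rate:

```python
# With a per-comparison false match rate p and a watchlist of n faces,
# the chance that one passer-by triggers at least one false alert is
# 1 - (1 - p)**n. The rate below is an assumption for illustration only.
p = 0.0001  # assumed 0.01% false match rate per comparison
for n in (100, 1_000, 10_000):
    print(f"watchlist of {n:>6,}: {1 - (1 - p) ** n:.2%} false-alert chance per face scanned")
```

Even a seemingly tiny per-comparison error rate accumulates quickly as watchlists grow and thousands of faces pass a camera, which is why critics single out one-to-many searches in public spaces.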
A Home Office spokesperson told Tech Monitor: “Facial recognition plays a crucial role in helping the police tackle serious offences including knife crime, rape, child sexual exploitation and terrorism. It is right that we back and empower the police to use it but we are clear it must be done in a fair and proportionate way to maintain public trust.”
Professor Neff says the message is simple: “Don’t deploy this technology as the risks are far greater than any small benefit that might be gained from using it. Don’t do it.”
Read more: Facial recognition needs a stronger case in law enforcement
Topics in this article: AI, Police, Regulation