Police use of live facial recognition “unethical” and possibly illegal

2022-11-03

Live facial recognition technology being used by the Metropolitan and South Wales police forces has been branded “unethical” and possibly illegal in a new report. Researchers from the Minderoo Centre for Technology and Democracy at Cambridge University have called for a halt to its use, declaring “the risks are far greater than any small benefit that might be gained from using it”.

Police forces are trialling live facial recognition technology using CCTV cameras and watchlists. (Photo: Narin Nonthamand/Shutterstock)

The technology takes footage from a live CCTV camera feed, looks for faces and compares their features in real time against a pre-determined watchlist of “people of interest”. When a match is found it generates an alert that officers can then investigate.
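The sketch below illustrates that detect-and-compare pipeline in rough outline, using the open-source opencv-python and face_recognition libraries. It is a minimal illustration only, not the software deployed by any police force; the watchlist file names, camera source and matching tolerance are assumptions made for the example.

```python
# Illustrative sketch of a live facial recognition pipeline.
# Assumes the open-source `opencv-python` and `face_recognition` packages;
# this is not the system used by the Met or South Wales Police.
import cv2
import face_recognition

# Hypothetical watchlist: one reference photo per "person of interest".
watchlist_images = {"person_a": "person_a.jpg", "person_b": "person_b.jpg"}
watchlist_names = []
watchlist_encodings = []
for name, path in watchlist_images.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip reference photos where no face was detected
        watchlist_names.append(name)
        watchlist_encodings.append(encodings[0])

capture = cv2.VideoCapture(0)  # stand-in for a live CCTV feed
while True:
    ok, frame = capture.read()
    if not ok:  # stop when the feed ends or cannot be read
        break
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Detect faces in the frame and compute a 128-dimensional encoding for each.
    locations = face_recognition.face_locations(rgb_frame)
    encodings = face_recognition.face_encodings(rgb_frame, locations)
    for encoding in encodings:
        # Compare against every watchlist encoding; the tolerance trades off
        # false matches against missed matches.
        matches = face_recognition.compare_faces(
            watchlist_encodings, encoding, tolerance=0.6)
        for name, matched in zip(watchlist_names, matches):
            if matched:
                # In a deployment this would raise an alert for an officer to review.
                print(f"ALERT: possible match for {name}")
capture.release()
```

In a real deployment the match threshold, the quality of the watchlist images and the camera feed all affect how many alerts are raised and how many of them are correct.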

The problems with police live facial recognition

Researchers at the Minderoo Centre drew up “minimum ethical and legal standards” that should govern any use of facial recognition technology and tested how UK police forces are using it against those standards, finding that all of the deployments failed to meet the minimum.

Professor Gina Neff, executive director of the centre, said her team compiled a list of all the ethical guidelines, legal frameworks and current legislation to create the measures used in the tests. These aren’t legal requirements, but rather what the researchers say should be used as a benchmark to protect privacy, human rights, transparency and bias requirements, as well as ensure accountability and provide oversight on the use and storage of personal information.

All the current police use cases for live facial recognition failed the test, Professor Neff says. “These are complex technologies, they are hard to use and hard to regulate with the laws we have on the books,” she told Tech Monitor. “The level of accuracy achieved does not warrant the level of invasiveness required to make them work.

“These deployments of facial recognition technologies have not been shown to be effective and have not been shown to be safe. We want documentation on how the technology is used, regulated and monitored and urge police to stop using live facial recognition technology.”

A spokesperson for the Metropolitan Police said there was a firm legal basis for the way it uses live facial recognition (LFR) technology, explaining that the approach has had significant judicial scrutiny in both the divisional court and the Court of Appeal.

The Court of Appeal recognised this legal basis in common law policing powers, finding that the framework regulating LFR deployments contains safeguards and allows for scrutiny of whether each use serves a proper law enforcement purpose and whether the means used are strictly necessary.

The technology has enabled the Met to “locate dangerous individuals and those who pose a serious risk to our communities”, the spokesperson explained. They claim officers deploy it with a primary focus on the most serious crimes and on locating people wanted for violent offences, or with outstanding warrants, who have proved hard to find.

“Operational deployments of LFR technology have been in support of longer-term violence reduction initiatives and have resulted in a number of arrests for serious offences including conspiracy to supply class A drugs, assault on emergency service workers, possession with intent to supply class A & B drugs, grievous bodily harm and being unlawfully at large having escaped from prison,” the spokesperson added.

Current facial recognition uses by police not meeting standards

Professor Neff says the study looked at best practice and existing principles, measured the deployments against guidelines already in place, and found them lacking. “By the best principles and practices and laws on hand today, these deployments are not meeting that standard,” she says.

“That is why we say the technologies are not fit for purpose. They are not meeting the standards for safe operation of large-scale data systems. For example, what safeguards were in place during the procurement process? This might not be covered by the rule of law used in a court case, but it is something we have guidelines on for use in public agencies.”

Deputy information commissioner Stephen Bonner told Tech Monitor: “We continue to remind police forces of their responsibilities under data protection law when using live facial recognition technology in public places. This includes making sure that deployments are strictly necessary, proportionate and the public is kept informed.”

Imogen Parker, associate director for policy at the Ada Lovelace Institute, which recently carried out an extensive legal review of the governance of biometric data in England and Wales, said this new research highlights the ethical, legal and societal concerns around biometric technology.

The Ryder Review into biometric data regulation found that existing legal protections were fragmented, unclear and “not fit for purpose”.

“The fact that all three of the police deployments audited failed to meet minimum ethical and legal standards continues to demonstrate how governance failings are leading to harmful and legally questionable deployments in practice,” said Parker in an emailed statement.

“The challenges presented by biometric technologies are not limited to facial recognition. Attempts to use biometrics to make inferences about emotion, characteristics or abilities – often without a scientific basis – raise serious questions about responsible and legitimate use, something recently highlighted by the ICO.”

New legislation required to govern facial recognition

Parker argues there is an urgent need for new, comprehensive legislation addressing the governance of all biometric technologies, not just facial recognition and police use. “This should be overseen and enforced by a new regulatory function, to require minimum standards of accuracy, reliability and validity, as well as an assessment of proportionality, before these technologies are deployed in real-world contexts,” she says.

Without it the legal basis for live facial recognition will remain unclear, she says, adding “the risk of harm persists. There must be a moratorium on all uses of one-to-many facial identification in public spaces until such legislation is passed.”

A Home Office spokesperson told Tech Monitor: “Facial recognition plays a crucial role in helping the police tackle serious offences including knife crime, rape, child sexual exploitation and terrorism. It is right that we back and empower the police to use it but we are clear it must be done in a fair and proportionate way to maintain public trust.”

Professor Neff says the message is simple: “Don’t deploy this technology as the risks are far greater than any small benefit that might be gained from using it. Don’t do it.”

Read more: Facial recognition needs a stronger case in law enforcement

Topics in this article: AI, Police, Regulation
