Don’t buy emotion-analysing AI, ICO warns tech leaders

2022-10-30

The Information Commissioner’s Office (ICO) has warned companies to avoid buying emotion analysing artificial intelligence tools as it is unlikely the technology will ever work and could lead to bias and discrimination. Businesses that do deploy the technology could face swift action from the data regulator unless they can prove its effectiveness.

Emotional AI can be used to monitor the health of workers via wearable devices. But the technology is otherwise unproven, the ICO says (Photo by LDprod/Shutterstock)

Emotional analysis technologies take in a number of biometric data points, including gaze tracking, sentiment analysis, facial movements, gait analysis, heartbeats, facial expressions and skin moisture levels, and attempt to use these to determine or predict someone’s emotional state.
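To make that data flow concrete, the sketch below shows, purely schematically, how such a system might combine several biometric readings into a single "emotion" label. Everything in it (the BiometricReading fields, the predict_emotion function, the weights and the threshold) is hypothetical and not drawn from any real product; it is included only to illustrate the kind of inference pipeline the ICO describes, and why its output rests on arbitrary assumptions about what the signals mean.

```python
# Conceptual sketch only: a toy stand-in for the kind of pipeline described above,
# in which several biometric signals are combined into an inferred "emotional state".
# All names and numbers here are hypothetical; no real product or library is implied.
from dataclasses import dataclass


@dataclass
class BiometricReading:
    gaze_offset: float        # gaze-tracking signal (0-1)
    facial_movement: float    # facial movement/expression score (0-1)
    gait_irregularity: float  # gait-analysis score (0-1)
    heart_rate: float         # beats per minute
    skin_moisture: float      # skin-conductance proxy (0-1)


def predict_emotion(reading: BiometricReading) -> str:
    """Toy rule-based stand-in for a trained model.

    A real system would feed these features into a statistical model, but the
    arbitrary weights and threshold below make the underlying problem visible:
    the same readings can mean very different things for different people and
    contexts, which is exactly the ICO's concern.
    """
    stress_score = (
        0.25 * reading.heart_rate / 100
        + 0.25 * reading.skin_moisture
        + 0.20 * reading.facial_movement
        + 0.15 * reading.gait_irregularity
        + 0.15 * reading.gaze_offset
    )
    return "stressed" if stress_score > 0.7 else "calm"


if __name__ == "__main__":
    sample = BiometricReading(
        gaze_offset=0.1,
        facial_movement=0.6,
        gait_irregularity=0.4,
        heart_rate=95,
        skin_moisture=0.8,
    )
    # Prints a label, not a verified emotional state.
    print(predict_emotion(sample))
```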

The problem, says deputy information commissioner Stephen Bonner, is that “there is no evidence this actually works and a lot of evidence it will never work,” warning that it is more likely to lead to false results that could cause harm if a company relies on the findings.

He told Tech Monitor that, because these warnings have now been issued, the bar for investigating a company that implements emotional analysis AI will be “very low”.

“There are times where new technologies are being rolled out and we’re like, ‘let’s wait and see and gain a sense of understanding from both sides’ and for other legitimate biometrics we are absolutely doing that,” Bonner says. But in the case of emotional AI, he adds that there is “no legitimate evidence this technology can work.”

“We will be paying extremely close attention and be comfortable moving to robust action more swiftly,” he says. “The onus is on those who choose to use this to prove to everybody that it’s worthwhile because the benefit of the doubt does not seem at all supported by the science.”

AI emotional analysis is useful in some cases

There are some examples of how this technology has been applied or suggested as a use case, Bonner says, including monitoring the physical health of workers through wearable devices, using the various data points collected to keep records and make predictions about potential health issues.

The ICO warns that algorithms which haven’t been sufficiently developed to detect emotional cues will lead to a risk of systematic bias, inaccuracy and discrimination, adding that the technology relies on collecting, storing and processing a large amount of personal data including subconscious behavioural or emotional responses.

“This kind of data use is far more risky than traditional biometric technologies that are used to verify or identify a person,” the organisation warned, reiterating the lack of any evidence it actually works in creating a real, verifiable and accurate output.

Bonner says the ICO isn’t banning the use of this type of technology, just warning that its implementation will be under scrutiny due to the risks involved. He told Tech Monitor it is fine to use as a gimmick or entertainment tool as long as it is clearly branded as such.

“There is a little bit of a distinction between biometric measurements and inferring things about the outcome intent,” he says. “I think there is reasonable science that you can detect the level of stress on an individual through things in their voice. But from that, determining that they are a fraudster, for example, goes too far.

“We would not ban the idea of determining who seems upset [using AI] – you could even provide them extra support. But recognising that some people are upset and inferring that they are trying to commit fraud from their biometrics is certainly something you shouldn’t be doing.”

Cross-industry impact of biometrics

Biometrics are expected to have a significant impact across industries, from financial services companies verifying human identity through facial recognition, to voice recognition for accessing services instead of using a password.

The ICO is working on new biometrics guidance with the Ada Lovelace Institute and the British Youth Council. The guidance will “have people at its core” and is expected to be published in the spring.

Dr Mhairi Aitken, ethics research fellow at the Alan Turing Institute, welcomed the warning from the ICO but says it is also important to look at the development side of these systems and ensure developers take an ethical approach, creating tools where there is a genuine need rather than just for the sake of it.

“The ethical approach to developing technologies or new applications has to begin with something about who might be the impacted communities and engaging them in the process to see whether this is really going to be appropriate in the context where it’s deployed,” she says, adding that this process gives us the opportunity to become aware of any harms that may not have been anticipated.

Emotion-detecting AI – a ‘real risk of harm’

The harm that could be caused by such AI models is significant, especially for people who might not fit the ‘mould’ developed when building the predictive models, Dr Aitken says. “It is such a complex area to begin to think about how we would automate something like that and to be able to take account of cultural differences and neurodivergence,” she adds.

AI systems could find it difficult to determine what is an appropriate emotional response in different contexts, Dr Aitken says. “We display our emotions very differently depending on who we’re with and what the context is,” she says. “And then there are also considerations around whether these systems could ever fully take account of how emotions might be displayed differently by people.”

Unlike Bonner, who says there is minimal harm in using emotional AI tools in entertainment, Dr Aitken warns that this use case comes with its own set of risks, including people becoming accustomed to the technology and thinking it actually works. “It needs to be clearly labelled as entertainment,” she warns.

When it comes to emotional AI, the problem is that there are too many data points and too much variation from one human to the next to develop a reliable model, Bonner adds. This is something that has been shown in multiple research papers on the technology.

“If someone comes up to us and says, ‘we’ve solved the problem and can make accurate predictions’, I’ll be back here eating humble pie and they’ll be winning all of the awards but I don’t think that is going to happen,” he says.

Read more: The EU wants to make it easier to sue over harms caused by AI

Topics in this article: AI, Regulation
