The Information Commissioner’s Office (ICO) has warned companies to avoid buying emotion-analysing artificial intelligence tools, as it is unlikely the technology will ever work and it could lead to bias and discrimination. Businesses that do deploy the technology could face swift action from the data regulator unless they can prove its effectiveness.
Emotional analysis technologies take in a number of biometric data points, including gaze tracking, sentiment analysis, facial movements, gait analysis, heartbeats, facial expressions and skin moisture levels, and attempt to use these to determine or predict someone’s emotional state.
The problem, says deputy information commissioner Stephen Bonner, is that “there is no evidence this actually works and a lot of evidence it will never work,” warning that it is more likely to lead to false results that could cause harm if a company relies on the findings.
He told Tech Monitor that, because these warnings are being issued now, the bar for investigating a company that does implement emotional analysis AI will be “very low”.
“There are times where new technologies are being rolled out and we’re like, ‘let’s wait and see and gain a sense of understanding from both sides’ and for other legitimate biometrics we are absolutely doing that,” Bonner says. But in the case of emotional AI, he adds that there is “no legitimate evidence this technology can work.”
“We will be paying extremely close attention and be comfortable moving to robust action more swiftly,” he says. “The onus is on those who choose to use this to prove to everybody that it’s worthwhile because the benefit of the doubt does not seem at all supported by the science.”
AI emotional analysis is useful in some cases
There are some examples of how this technology has been applied or suggested as a use case, Bonner says, including monitoring the physical health of workers through wearable tools that use the various data points to keep records and make predictions about potential health issues.
The ICO warns that algorithms which haven’t been sufficiently developed to detect emotional cues will lead to a risk of systematic bias, inaccuracy and discrimination, adding that the technology relies on collecting, storing and processing a large amount of personal data including subconscious behavioural or emotional responses.
“This kind of data use is far more risky than traditional biometric technologies that are used to verify or identify a person,” the organisation warned, reiterating the lack of any evidence it actually works in creating a real, verifiable and accurate output.
Bonner says the ICO isn’t banning the use of this type of technology, just warning that its implementation will be under scrutiny due to the risks involved. He told Tech Monitor it is fine to use as a gimmick or entertainment tool as long as it is clearly branded as such.
“There is a little bit of a distinction between biometric measurements and inferring things about the outcome intent,” he says. “I think there is reasonable science that you can detect the level of stress on an individual through things in their voice. But from that, determining that they are a fraudster, for example, goes too far.
“We would not ban the idea of determining who seems upset [using AI] – you could even provide them extra support. But recognising that some people are upset and inferring that they are trying to commit fraud from their biometrics is certainly something you shouldn’t be doing.”
Cross-industry impact of biometrics
Biometrics are expected to have a significant impact across industries, from financial services companies verifying human identity through facial recognition, to voice recognition for accessing services instead of using a password.
The ICO is working on new biometrics guidance with the Ada Lovelace Institute and the British Youth Council. The guidance will “have people at its core” and is expected to be published in the spring.
Dr Mhairi Aitken, ethics research fellow at the Alan Turing Institute, welcomed the warning from the ICO but says it is also important to look at the development side of these systems and make sure developers are taking an ethical approach, creating tools where there is a need and not just for the sake of it.
“The ethical approach to developing technologies or new applications has to begin with something about who might be the impacted communities and engaging them in the process to see whether this is really going to be appropriate in the context where it’s deployed,” she says, adding that this process gives us the opportunity to become aware of any harms that may not have been anticipated.
Emotion-detecting AI – a ‘real risk of harm’
The harm that could be caused by such AI models is significant, especially for people who might not fit the ‘mould’ developed when building the predictive models, Dr Aitken says. “It is such a complex area to begin to think about how we would automate something like that and to be able to take account of cultural differences and neurodivergence,” she adds.
AI systems could find it difficult to determine what is an appropriate emotional response in different contexts, Dr Aitken says. “We display our emotions very differently depending on who we’re with and what the context is,” she says. “And then there are also considerations around whether these systems could ever fully take account of how emotions might be displayed differently by people.”
Unlike Bonner, who says there is minimal harm in using emotional AI tools in entertainment, Dr Aitken warns that this use case comes with its own set of risks, including people becoming accustomed to the technology and thinking it actually works. “It needs to be clearly labelled as entertainment,” she warns.
When it comes to emotional AI, the problem is there are too many data points and differences from one human to the next to develop a model, Bonner adds. This is something that has been shown in multiple research papers on the technology.
“If someone comes up to us and says, ‘we’ve solved the problem and can make accurate predictions’, I’ll be back here eating humble pie and they’ll be winning all of the awards but I don’t think that is going to happen,” he says.
Read more: The EU wants to make it easier to sue over harms caused by AI