Imagine you are in a job interview. As you answer the recruiter’s questions, an artificial intelligence (AI) system scans your face, scoring you for nervousness, empathy and dependability. It may sound like science fiction, but these systems are increasingly used, often without people’s knowledge or consent.

Emotion recognition technology (ERT) is in fact a burgeoning multi-billion-dollar industry that aims to use AI to detect emotions from facial expressions. Yet the science behind emotion recognition systems is controversial: there are biases built into the systems.

Many companies use ERT to test customer reactions to their products, from cereal to video games. But it can also be used in situations with much higher stakes, such as in hiring, in airport security to flag faces as revealing deception or fear, in border control, in policing to identify “dangerous people”, or in education to monitor students’ engagement with their homework.

Shaky scientific ground

Fortunately, facial recognition technology is receiving public attention. The award-winning film Coded Bias, recently released on Netflix, documents the discovery that many facial recognition technologies do not accurately detect darker-skinned faces. And the research team managing ImageNet, one of the largest and most important datasets used to train facial recognition, was recently forced to blur 1.5 million images in response to privacy concerns.

Revelations about algorithmic bias and discriminatory datasets in facial recognition technology have led large technology companies, including Microsoft, Amazon and IBM, to halt sales. And the technology faces legal challenges regarding its use in policing in the UK. In the EU, a coalition of more than 40 civil society organisations has called for an outright ban on facial recognition technology.

Like other forms of facial recognition, ERT raises questions about bias, privacy and mass surveillance. But ERT raises another concern: the science of emotion behind it is controversial. Most ERT is based on the theory of “basic emotions”, which holds that emotions are biologically hard-wired and expressed in the same way by people everywhere.
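
To make that assumption concrete, here is a minimal, hypothetical sketch (in Python) of the pipeline shape such systems share: a face crop goes in, and a score for each of a fixed list of “basic emotions” comes out. The label list, the score_face function and the random stand-in classifier are all illustrative assumptions for this article, not any vendor’s real code.

```python
"""A minimal sketch of the structure most ERT systems share, assuming the
"basic emotions" framing described above. The classifier is a random
stand-in, not a real model; only the shape of the pipeline (face crop in,
fixed emotion labels out) reflects how such systems are typically built."""
import numpy as np

# The fixed label set inherited from "basic emotions" theory.
BASIC_EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def score_face(face_crop: np.ndarray, rng: np.random.Generator) -> dict[str, float]:
    """Map one face crop to a probability per basic emotion.

    A real system would run a trained classifier on the pixels; random
    logits stand in here purely to show the output format.
    """
    logits = rng.normal(size=len(BASIC_EMOTIONS))  # stand-in for model output
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the fixed labels
    return {label: round(float(p), 3) for label, p in zip(BASIC_EMOTIONS, probs)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.random((48, 48))  # grayscale crop, a size emotion datasets commonly use
    print(score_face(face, rng))
```

Note how the softmax step forces every face into one of the fixed labels, whatever the context. That built-in assumption is exactly what the research below challenges.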

This is increasingly being challenged, however. Research in anthropology shows that emotions are expressed differently across cultures and societies. In 2019, the Association for Psychological Science conducted a review of the evidence, concluding that there is no scientific support for the common assumption that a person’s emotional state can be readily inferred from their facial movements. In short, ERT is built on shaky scientific ground.

Also, like other forms of facial recognition technology, ERT is encoded with racial bias. A study has shown that systems consistently read black people’s faces as angrier than white people’s faces, regardless of the person’s expression. Although research on racial bias in ERT is still limited, racial bias in other forms of facial recognition is well documented.
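
To illustrate how such a disparity can be measured, here is a small, hypothetical audit sketch: it compares average “anger” scores across two groups for faces annotated with the same expression. The records, group names and numbers are invented for illustration; they are not data from the study cited above.

```python
"""A hypothetical audit sketch: given per-face "anger" scores from some
ERT system, compare average scores across demographic groups for faces
annotated with the same expression. All values are made up."""
import statistics

# (group, annotated_expression, anger_score) -- illustrative records only.
records = [
    ("group_a", "neutral", 0.21), ("group_a", "neutral", 0.18),
    ("group_a", "smiling", 0.05), ("group_a", "smiling", 0.07),
    ("group_b", "neutral", 0.34), ("group_b", "neutral", 0.41),
    ("group_b", "smiling", 0.12), ("group_b", "smiling", 0.15),
]

def mean_anger(group: str, expression: str) -> float:
    """Average anger score for one group on one annotated expression."""
    scores = [s for g, e, s in records if g == group and e == expression]
    return statistics.mean(scores)

for expression in ("neutral", "smiling"):
    gap = mean_anger("group_b", expression) - mean_anger("group_a", expression)
    print(f"{expression}: anger-score gap (b - a) = {gap:+.2f}")
```

A persistent gap on matched expressions, as in these invented numbers, is the kind of pattern the study reported: the system reads one group as angrier even when the faces are doing the same thing.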

There are two ways that this technology can hurt people, says AI researcher Deborah Raji in an interview with MIT Technology Review: “One way is by not working: by virtue of having higher error rates for people of color, it puts them at greater risk. The second situation is when it does work — where you have the perfect facial recognition system, but it’s easily weaponized against communities to harass them.”

So even if facial recognition technology can be de-biased and accurate for all people, it still may not be fair or just. We see these disparate effects when facial recognition technology is used in policing and judicial systems that are already discriminatory and harmful to people of colour. Technologies can be dangerous when they don’t work as they should. And they can also be dangerous when they work perfectly in an imperfect world.

The challenges raised by facial recognition technologies – including ERT – do not have easy or clear answers. Solving the problems presented by ERT requires moving from AI ethics centred on abstract principles to AI ethics centred on practice and effects on people’s lives.

When it comes to ERT, we need to collectively examine the controversial science of emotion built into these systems and analyse their potential for racial bias. And we need to ask ourselves: even if ERT could be engineered to accurately read everyone’s inner feelings, do we want such intimate surveillance in our lives? These are questions that require everyone’s deliberation, input and action.

Citizen science project

ERT has the potential to affect the lives of millions of people, yet there has been little public deliberation about how – and if – it should be used. This is why we have developed a citizen science project.

On our interactive website (which works best on a laptop, not a phone) you can try out a private and secure ERT for yourself, to see how it scans your face and interprets your emotions. You can also play games comparing human versus AI skills in emotion recognition and learn about the controversial science of emotion behind ERT.

Most importantly, you can contribute your perspectives and ideas to generate new knowledge about the potential impacts of ERT. As the computer scientist and digital activist Joy Buolamwini says: “If you have a face, you have a place in the conversation.”

This article by Alexa Hagerty, Research Associate of Anthropology, University of Cambridge, and Alexandra Albert, Research Fellow in Citizen Social Science, UCL, is republished from The Conversation under a Creative Commons license. Read the original article.

