Emotion analytics: a dystopian technology

Major corporations want to use your facial expressions to determine your employability.

Norman Lewis

Emotion analytics – using AI to analyse facial expressions and infer feelings – is set to become a $25 billion business by 2023. But the very notion of emotion analytics is emotionally bankrupt and dangerous.

The next time you go for a job interview and there is a video-interview component, you had better be prepared to manage your facial expressions. An AI system could be cross-referencing your face and your body language with traits that it considers to be correlated with job success.

This might sound ridiculous, but it is already happening. In the US and South Korea, AI-assisted recruitment has grown increasingly popular. Large corporations like Unilever are already deploying such systems: an AI system developed by HireVue apparently saved the consumer-goods giant 100,000 hours of recruitment time last year. Huge tech companies like Microsoft and Apple are also pouring money into emotion-analytics research.

The idea is that an AI system can use algorithms to generate an employability score. Apparently, by looking at a face, a computer can assess an individual’s internal emotions and experience. This so-called ‘emotion recognition’ can supposedly tell employers everything from whether a candidate is resolute to whether he or she is a good team player.

There’s only one tiny flaw: it is utter nonsense. Emotion recognition is snake oil, not science. AI systems, used correctly, could be extremely useful, supplementing human activity and speeding up progress. But sadly, so much AI research is driven by a diminished view of human consciousness and emotions.

When humans navigate feelings, they use an immense amount of information: from facial expressions and body language to cultural references, context, moods and more. But the AI systems trying to do the same thing tend to focus mainly on the face. This is a big flaw, according to Lisa Feldman Barrett, a psychologist at Northeastern University and co-author of a damning study on the claims being made about ‘emotion recognition’.

Of course, people do smile when they are happy and frown when they are sad, Barrett says, ‘but the correlation is weak’. Facial expressions do not always mean the same thing – smiles, for instance, can be wry or ironic. And there is all manner of expressions, beyond smiling, that people might make when they are happy or sad. Barrett was one of five scientists brought together by the Association for Psychological Science, who spent two years reviewing more than 1,000 papers on emotion detection. Their conclusion was that it is very hard to tell accurately, from facial expressions alone, how someone is feeling.

Human emotions are complex. Reducing them to algorithmic certainties denigrates human experience. This is very dangerous. These AI-driven emotion-recognition systems cannot actually assess an individual’s internal emotions. At best, they can only estimate how that individual’s emotions might be perceived by others.

Nevertheless, career consultants are cashing in by training new graduates and job-seekers on how to interview with an algorithm. There is also a huge appetite for using emotion detection in more and more areas of daily life. Emotion recognition has already been deployed in children’s classrooms in China. In the US, studies have tried using AI to detect deception in the courtroom. That bastion of anti-democracy, the EU, is reportedly trialling software which purportedly detects deception by analysing micro-expressions. The aim is to use this to bolster security on its external border. It is not an overreaction to say that this technology has the potential to be deployed for dark, authoritarian purposes.

If emotion recognition becomes widespread, it will encourage a debased and diminished idea of human experience. And it will inculcate a culture that thrives either on dishonesty – we will have to learn how to circumvent or play the system if we want to be employed – or on mass conformity. The dystopian implications are very real.

Norman Lewis works on innovation networks and is a co-author of Big Potatoes: The London Manifesto for Innovation.
