Emotion analytics: a dystopian technology

Major corporations want to use your facial expressions to determine your employability.

Norman Lewis


Emotion analytics – using AI to analyse facial expressions and infer feelings – is set to become a $25 billion business by 2023. But the very notion of emotion analytics is emotionally bankrupt and dangerous.

The next time you go for a job interview and there is a video-interview component, you had better be prepared to manage your facial expressions. An AI system could be cross-referencing your face and your body language with traits that it considers to be correlated with job success.

This might sound ridiculous, but it is already happening. In the US and South Korea, AI-assisted recruitment has grown increasingly popular. Large corporations like Unilever are already deploying such systems: an AI system developed by HireVue apparently saved the consumer-goods giant 100,000 hours of recruitment time last year. Huge tech companies like Microsoft and Apple are also pouring money into emotion-analytics research.

The idea is that an AI system can use algorithms to generate an employability score. Apparently, by looking at a face, a computer can assess an individual’s internal emotions and experience. What it calls ‘emotion recognition’ can apparently tell employers everything from whether a candidate is resolute to whether he or she would make a good team player.
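To make the reduction concrete, here is a deliberately crude sketch, in Python, of the kind of pipeline such claims imply: a classifier’s guesses about perceived expressions are collapsed into a single number by fixed weights. Every label, weight and function name here is invented for illustration; this is not any vendor’s actual model, and the point it makes is the article’s own, that the inputs are estimates of how a face looks to an observer, not measurements of what the candidate feels.

```python
# Hypothetical sketch only: labels, weights and the scoring formula are
# invented for illustration, not taken from any real emotion-analytics system.

PERCEIVED_EXPRESSION_WEIGHTS = {
    "smile": 0.4,          # assumed to signal a 'positive attitude'
    "neutral": 0.1,
    "brow_furrow": -0.3,   # assumed to signal 'stress' or 'hostility'
    "gaze_aversion": -0.2,
}

def employability_score(expression_probs: dict[str, float]) -> float:
    """Collapse perceived-expression probabilities into one 0-1 'score'.

    Note the flaw the article describes: the inputs are only estimates of
    how a face appears, not of what the person actually feels.
    """
    weighted = sum(
        PERCEIVED_EXPRESSION_WEIGHTS.get(label, 0.0) * prob
        for label, prob in expression_probs.items()
    )
    return max(0.0, min(1.0, 0.5 + weighted))  # clamp to the 0-1 range

if __name__ == "__main__":
    # A wry or ironic smile produces the same input as a genuinely happy one,
    # so it produces the same score: exactly the ambiguity Barrett notes.
    print(employability_score({"smile": 0.8, "brow_furrow": 0.1}))
```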

There’s only one tiny flaw: it is utter nonsense. Emotion recognition is snake oil, not science. AI systems, used correctly, could be extremely useful, supplementing human activity and speeding up progress. But sadly, so much AI research is driven by a diminished view of human consciousness and emotions.

When humans navigate feelings, they use an immense amount of information: from facial expressions and body language to cultural references, context, moods and more. But the AI systems trying to do the same thing tend to focus mainly on the face. This is a big flaw, according to Lisa Feldman Barrett, a psychologist at Northeastern University and co-author of a damning study on the claims being made about ‘emotion recognition’.

Of course, people do smile when they are happy and frown when they are sad, Barrett says, but the correlation is weak. Facial expressions don’t always mean the same thing – smiles, for instance, can be wry or ironic. And there are all manner of expressions, beyond smiling, which people might make when they are happy or sad. Barrett was one of five scientists brought together by the Association for Psychological Science, who spent two years reviewing more than 1,000 papers on emotion detection. Their conclusion was that it is very hard to use facial expressions alone to tell, accurately, how someone is feeling.

Human emotions are complex. Reducing them to algorithmic certainties denigrates human experience. This is very dangerous. These AI-driven emotion-recognition systems cannot actually assess an individual’s internal emotions. At best, they can only estimate how that individual’s emotions might be perceived by others.

Nevertheless, career consultants are cashing in by training new graduates and job-seekers on how to interview with an algorithm. There is also a huge appetite for using emotion detection in more and more areas of daily life. Emotion recognition has already been deployed in children’s classrooms in China. In the US, studies have tried using AI to detect deception in the courtroom. That bastion of anti-democracy, the EU, is reportedly trialling software which can purportedly detect deception through an analysis of micro-expressions. The aim is to use this to bolster security on its external border. It is not an overreaction to say that this technology has the potential to be deployed for dark, authoritarian purposes.

If emotion recognition becomes widespread, it will encourage a debased and diminished idea of human experience. And it will inculcate a culture that either thrives on dishonesty – we will have to learn how to circumvent or play the system if we want to be employed – or on mass conformity. The dystopian implications are very real.

Norman Lewis works on innovation networks and is a co-author of Big Potatoes: The London Manifesto for Innovation.


Comments

steven brook

22nd February 2020 at 2:47 pm

I once went on a human-communication course, a major component of which was body language. While getting organised I turned my chair around (think Christine Keeler). The lecturer was delighted to highlight me later on and proceeded to “read” my body language: I was obviously defensive, reluctant or nervous to be in the group, and possibly introverted. The group then chipped in with their impressions, and finally I was allowed to say something. I pointed out that I was in a great deal of pain and was due to have a multilevel spinal fusion the following week; consequently I had turned the chair around to provide some kind of support. As you would expect, there was no apology for their half-baked deductions; they probably thought I was in denial. These would be the same kind of people who bought into the Satanic-abuse nonsense.

Connor Trent

22nd February 2020 at 11:10 am

This reminds me of the so-called science of graphology, which, to the uninitiated, is claimed to be able to discern numerous character traits and features of someone through analysis of their handwriting. It is, of course, utter nonsense.

Gareth Edward KING

21st February 2020 at 2:10 pm

Skype interviews are considerably more difficult than face-to-face ones, as there is far less to go on in terms of the information either side can glean. How does one interpret a handshake? And what about those who look straight at you all of the time? I find the former reassuring if they’re firm, but, on the other hand, ‘glarers’ can be disconcerting. When Skype interviews are unsuccessful, which they often are, I wonder how many offers I could have got if the interview had been face-to-face.

jessica christon

21st February 2020 at 11:56 am

Job interviews have always been about conformity and dishonesty, so I don’t see this use of AI as any real departure from the status quo. The important thing is transparency and access to the information it gathers, rather than the tech itself.

Jim Lawrie

21st February 2020 at 10:40 am

This technology is for weak management who cannot face making judgement calls. The IF decisions in the code only enact the judgements and prejudices of others. It failed in football, where some managers retreated into what could be measured. “Paralysis by analysis” is what it was called: judgement calls rejected because the Optima stats said otherwise. David Beckham used to run up and down like a headless chicken towards the end of games already won, so as to up his stats for ground covered and passes completed. If he could beat it, who can’t?

If human face-to-face contact can be delegated to machines, then all those expensive business trips, meetings and offices can be done away with. We can all stay at home and face the cameras in our jimjams. Then we would have to be judged on our choice of that particular attire. I would be wary of anyone who wears a onesie, especially if it were bought in the merchandise shop of the (new) Rangers Football Club. The swines won last night from 2-0 down. Who could have predicted that?
