Several companies have built AIs designed to figure out when you are angry, sad or excited. But there are serious questions about their accuracy, and about the extent to which they should be used in public life.
18 November 2020
RANA EL KALIOUBY was alone in her flat, messaging her husband. “How are you doing?” he typed. “I’m fine,” she typed back. Except that wasn’t true. The couple had been apart for weeks and she was feeling miserable. Had he been in the room, he could have read the emotions on her face at a glance. But he was miles away.
It is a scene that could easily have played out during a coronavirus lockdown, when colleagues, friends and even families were cut off from one another. But it actually took place 20 years ago, soon after el Kaliouby had moved from Egypt to the UK to study, leaving her husband behind.
It was in that moment, she says, that she realised how blind technology was to human emotions. Ever since, el Kaliouby has dreamed of building an emotionally intelligent computer – or, as she puts it, "a mind-reading machine". With so many relationships mediated by text or video call these days, the technology couldn't be more relevant.
Today, the company el Kaliouby co-founded, Affectiva, and others like it claim to have systems capable of detecting human emotions. The promises they make about the potential of this emotion artificial intelligence (AI) are staggering. Computers, they say, will know if we are distracted while driving, angrily typing an email we may regret, or beginning to slump into poor mental health. In fact, systems like this already exist. But do they live up to their billing? And do …