Heads up – the computer knows what you’re thinking!
Actually, according to many experts, that’s not really true. But new reports show how emotion recognition technology is gaining popularity across a range of fields as a way to automate the age-old process of human judgment, even though critics argue that these algorithm-based systems simply don’t work very well.
“(Emotion detection software) claims to read, if you will, our inner-emotional states by interpreting the micro-expressions on our face, the tone of our voice or even the way that we walk,” says AI Now co-founder Prof Kate Crawford in an article out today by Leo Kelion at BBC News. “It’s being used everywhere, from how do you hire the perfect employee through to assessing patient pain, through to tracking which students seem to be paying attention in class. At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks.”
One of the biggest applications of this type of software is in human resources, where programs screen job applicants by their facial expressions. Some contend that this kind of automated guidance removes human bias from the hiring process. Others worry that candidates will be discriminated against based on how their faces look, especially when the software isn’t held to a sufficiently rigorous standard.
“One needs to understand the context in which the emotional expression is being made,” Charles Nduka explained in the same BBC report.
Other journalists have also found a deep vein of skepticism in the expert community about the real efficacy of emotion detection. In a July story for The Verge, James Vincent interviewed Lisa Feldman Barrett, a professor of psychology at Northeastern University, who explains in detail just how low the accuracy of these tools really is, and why that’s frightening.
“People, on average, the data show, scowl less than 30 percent of the time when they’re angry,” says Barrett. “So scowls are not the expression of anger; they’re an expression of anger — one among many. That means that more than 70 percent of the time, people do not scowl when they’re angry. And on top of that, they scowl often when they’re not angry …Would you really want outcomes being determined on this basis? Would you want that in a court of law, or a hiring situation, or a medical diagnosis, or at the airport … where an algorithm is accurate only 30 percent of the time?”
If we don’t really trust computer vision and image processing enough to use it to identify common images, why would we use these tools to make important decisions about human intent based on facial expressions?
It’s part of the larger debate about walking the line between rapid innovation and AI’s errors. We want the next big thing, but we want it to be a net positive for society, not the other way around.