One argument for greater integration of artificial intelligence (AI) and machine learning (ML) into our daily lives is that machines don’t make mistakes. Mechanical breakdowns and software glitches are always possible, but when an AI-powered device works, it won’t make math errors, get lost, or otherwise become confused.

Software doesn’t get tired or distracted, so AI – whether used in autonomous vehicles or as part of a security system – could improve safety. Unlike humans, AI should be nearly perfect at certain jobs.

The idea is captured in the ground-breaking science fiction movie The Terminator, where the killing machine sent from the future is described as something that “can’t be bargained with, it can’t be reasoned with, it doesn’t feel pity or remorse or fear.” Such a system might seem ideal for, say, airport security, but it would obviously fall short wherever common ground is needed.

Misreading Human Emotions

However, there may be another shortcoming with AI: it can misread human emotion. That could be a problem in situations where it is necessary to tell the difference between fear and anger; a security system could see a threat where someone is actually in danger.

Kate Crawford, research professor at USC Annenberg, explained some of the shortcomings of AI in reading – or rather misreading – human emotion in a recent article for DefenseOne. She noted that AI giants including Amazon, IBM, and Microsoft have all designed systems for emotion detection, but argued that there is no good evidence that facial expressions actually reveal a person’s feelings.

It may not be possible to infer happiness from a smile or anger from a scowl – even if the tech companies would like to suggest otherwise. Crawford is only the latest researcher to delve into the subject.

A Harvard Business Review paper from November 2019 offered a similar take, noting that “AI is often also not sophisticated enough to understand cultural differences in expressing and reading emotions, making it harder to draw accurate conclusions.”

If AI can’t read emotions, can it be trusted to do the job it is tasked with? Some governments may not be willing to take the chance: the European Union has already proposed rules that could restrict – and in some cases ban – AI that poses a threat to the safety, livelihoods, and rights of people. Misread emotions could be exactly such a threat.

“Understanding emotions or affects through textual, auditory, visual, biological, or other channels has been a topic of deep interest and investigation for decades,” explained Dr. Chirag Shah, associate professor in the Information School at the University of Washington.

“While a lot of progress has been made in this area, there are many issues with using such technology in real-life or mission-critical applications,” Shah told ClearanceJobs.

Not That Simple

Shah warned that two problems still remain.

“The first is that most of the AI models for recognizing or classifying human emotions from visual clues (e.g., pictures or videos) are built using biased data — primarily using Caucasian participants,” said Shah. “Those models don’t do very well with participants of different ethnicity as we also found in our own research. Often these AI systems claim to have high accuracy, but the traditional ways of measuring effectiveness of such systems may miss the imbalance in data. For example, if the data has 90% of race-X and 10% of race-Y represented, it is possible to achieve an overall accuracy of up to 90% by doing well on race-X and completely missing race-Y.”
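Shah’s arithmetic can be sketched with a toy calculation (this is an illustration, not his actual experiment; the group sizes and model behavior are hypothetical): a classifier that does well only on the majority group can still report high overall accuracy, because pooled accuracy hides the minority group’s failures.

```python
# Toy illustration of Shah's point: overall accuracy can mask a model
# that completely fails on an underrepresented group.

def overall_accuracy(correct_x, total_x, correct_y, total_y):
    """Accuracy pooled across both groups."""
    return (correct_x + correct_y) / (total_x + total_y)

# Hypothetical dataset: 90% race-X, 10% race-Y (900 vs. 100 samples).
total_x, total_y = 900, 100

# Suppose the model gets every race-X sample right and every
# race-Y sample wrong.
acc = overall_accuracy(correct_x=900, total_x=total_x,
                       correct_y=0, total_y=total_y)

print(f"Overall accuracy: {acc:.0%}")       # 90%
print(f"Race-Y accuracy:  {0 / total_y:.0%}")  # 0%
```

This is why per-group (or "balanced") accuracy is a better check than a single pooled number when the data is imbalanced.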

The other problem, Shah added, is that there is often a disconnect between recognizing an emotion and connecting it to behavior or intention.

“In our research, we have used Q-sensors, which are small bracelets that measure one’s electrodermal activities as a way to give us some signals about their emotions in real time,” Shah noted. “But it would give the same signal for someone being angry and being excited. In other words, it does not give us enough nuance to be able to make judgments solely based on that signal. More importantly, everyone has a slightly different baseline for their emotions, so to truly capture their emotions, we would need to know that baseline first. In many real-life applications where such emotion capturing and interpretations are done, we don’t have that possibility.”
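Shah’s baseline point can be illustrated with a small hypothetical calculation (the readings, baselines, and names below are invented for illustration): the same raw electrodermal reading can mean a strong response for one person and an ordinary one for another, so interpretation requires knowing each person’s resting baseline first.

```python
# Hypothetical sketch: a raw electrodermal reading (in microsiemens)
# is only meaningful relative to a person's own resting baseline --
# which real-world systems rarely have.

def arousal_above_baseline(reading, baseline):
    """Fractional increase of a reading over a person's resting baseline."""
    return (reading - baseline) / baseline

# The same raw reading of 6.0 uS from two different (made-up) people:
alice_baseline = 2.0  # rests low; 6.0 is a strong response
bob_baseline = 5.0    # rests high; 6.0 is barely above normal

print(arousal_above_baseline(6.0, alice_baseline))  # 2.0 (200% above baseline)
print(arousal_above_baseline(6.0, bob_baseline))    # 0.2 (20% above baseline)
```

And even with a good baseline, the signal only indicates arousal, not whether the person is angry or excited – the nuance Shah describes as missing.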

Context of Communications

A point to remember is that a machine can’t be reasoned with, and it doesn’t feel any emotion – pity, remorse, fear, or anything else. It also works only with the data it is given.

Whether it is a security system at an airport or a patrol drone near a distant battlefield, such a system analyzes potential threats based on that data. The data should include people’s emotions – but if AI can’t accurately read those emotions, the data can’t be trusted.

“It presents a potential challenge for AI that it cannot detect human emotions because it often removes context from any given exchange – and depending on the data and information that AI is looking to extract, context can often weigh considerably on that information, and sometimes context can be everything,” explained futurist Scott Steinberg.

“Likewise, when attempting to process and make sense of data, a lack of background and surrounding information provided by human emotion – which can often be non-verbal and sometimes even subjective in nature – can make it more difficult to accurately catalogue and track important details surrounding an interaction,” Steinberg told ClearanceJobs. “If you think about it, people communicate in many different ways including body language, sly references, a subtle glance, etc. that are hard to pick up in real life. Trying to do so without a sense of background or broader reference can be even harder for a computer – leading to miscalculations, errors, overlooked items, and potential discrepancies or errant insights.”

This isn’t to say that AI couldn’t learn to read emotions, but we’re not there yet, even if progress is already being made.

“As a way to be cautious, I want to point out that many AI systems detect affects, and not emotions,” said Shah. “Affect typically goes from negative to positive with gradation, whereas emotions are more complex.”


Peter Suciu is a freelance writer who covers business technology and cyber security. He currently lives in Michigan. You can follow him on Twitter: @PeterSuciu.