Robots can read your p-p-p-poker face

Emotion-sensing technology could be the next frontier of personalization.

This article is an installment of The Future Explored, a weekly guide to world-changing technology. You can get stories like this one straight to your inbox every Thursday morning by subscribing here.

Robots are learning to interpret our emotions. They’re not that great at it yet, but with rapid advances in sensor technology and A.I., they’re getting better.

Why this matters: Artificial emotional intelligence is new, but it’s already being used in ways that could affect your life, like whether you get that job or promotion. The industry has an estimated value of $20 billion and is growing fast. We should be weighing the implications of this technology now, before it fully arrives.

What it is: This technology, referred to as emotion A.I., infers our internal state from outward signals and uses that data to decide how to respond.

How it works: Emotion A.I. analyzes your facial expressions, eye movements, body movements, voice, biometric data… even how you walk! Our bodies give off unconscious cues about how we’re feeling all the time, some barely noticeable to other humans, and the right technology could pick them up. When we’re stressed or confused, our pupils dilate. When we sweat more, that may indicate excitement or frustration. We also tend to exhale more when we’re scared. Advanced emotion recognition systems could combine these data streams in a machine learning algorithm and reveal things like how hard our brains are working (or not!), whether we’re under stress or engaged in the conversation, or whether we’re feeling lonely, excited, or surprised.
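To make that data fusion concrete, here is a deliberately toy sketch in Python (using NumPy and scikit-learn) of the general idea: features from several sensing modalities are stacked into one vector and fed to a classifier. The feature groups, labels, and random data are hypothetical illustrations, not any vendor’s actual system.

```python
# Toy sketch of multi-signal "emotion" classification on synthetic data.
# Everything here is hypothetical; real systems use far richer signal processing.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 500

# Hypothetical per-sample features from different sensing modalities.
facial = rng.normal(size=(n_samples, 4))   # e.g., smile, brow, eye-opening, gaze scores
voice = rng.normal(size=(n_samples, 3))    # e.g., pitch, energy, speaking rate
physio = rng.normal(size=(n_samples, 3))   # e.g., pupil dilation, sweat, breathing rate

X = np.hstack([facial, voice, physio])     # fuse the modalities into one feature vector
y = rng.integers(0, 3, size=n_samples)     # toy labels: 0 = calm, 1 = stressed, 2 = engaged

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("toy accuracy:", model.score(X_test, y_test))  # near chance, since the data is random
```

Because the data above is random noise, the classifier hovers around chance; the point is only the shape of the pipeline (many signals fused into one prediction), not its accuracy.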

How it’s being used: Some companies are using emotion A.I. to analyze a job candidate’s employability, and some automakers want to install it in their vehicles, pitching it as a safety feature that could, for example, warn drivers when their mental state (distracted, angry, etc.) becomes dangerous. Some researchers are even looking at using it in the classroom, claiming it could help improve instruction by telling teachers how engaged students are.

The next frontier of personalization: The falling cost of these sensors could make it feasible to deck out a space with thermal sensors that track blood flow, CO2 monitors that detect breathing rate, and cameras and microphones that pick up facial expressions and voice patterns. Poppy Crum, chief scientist at Dolby Labs and professor at Stanford University, says this will allow spaces to perceive how we are feeling and respond by changing the temperature, sound, music, lighting, and color to help us achieve whatever we are trying to accomplish.
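Here is an equally hypothetical sketch of what such a responsive room’s control loop might look like: poll a few sensors, make a rough guess at the occupant’s state, and nudge the environment. Every sensor reading and actuator below is a stand-in stub, not a real product’s API.

```python
# Hypothetical "responsive room" loop with stubbed-out sensors and actuators.
import random
import time

def read_sensors():
    # Stand-ins for thermal, CO2, and audio sensors; real hardware would go here.
    return {
        "skin_temp_c": random.uniform(33.0, 36.0),
        "breaths_per_min": random.uniform(10, 22),
        "voice_energy": random.uniform(0.0, 1.0),
    }

def infer_state(reading):
    # Crude illustrative heuristic: fast breathing plus a loud voice reads as "stressed".
    if reading["breaths_per_min"] > 18 and reading["voice_energy"] > 0.7:
        return "stressed"
    return "calm"

def adjust_room(state):
    # A real system would drive lighting, HVAC, and audio; here we just print the decision.
    if state == "stressed":
        print("dim the lights, soften the music, cool the room slightly")
    else:
        print("keep current settings")

for _ in range(3):  # a few polling cycles instead of an endless loop
    adjust_room(infer_state(read_sensors()))
    time.sleep(0.1)
```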

Deeper well-being: The tech could also lead to more personalized healthcare, argues Crum. She tells Forbes that the consumer technology we have in our homes may eventually be “a better indicator of our mental and physical wellness than most of our clinical visits” because of how much time we spend with the technology.

Accuracy and bias: Brace yourself, here comes the cold water. Facial expressions alone do not reveal a person’s internal state — but, unfortunately, many of the commercial products already on the market (such as those claiming the ability to analyze job applicants) mostly rely on analyzing the face. This has led critics to claim that the field is overhyped and can’t be trusted, raising discrimination concerns similar to those around predictive sentencing.

Privacy questions unanswered: It’s hard to get more personal than data about your emotions. Who gets access to this data? Will we get a say in how it’s used? Will we be able to opt out of this technology? (According to Wired, Crum believes that will soon be impossible.)

The science of emotions itself is extremely complex. For example, are emotions discrete states that can be measured independently, or do they fall somewhere on a spectrum? And the connection between physical expressions and emotions is nuanced: you can scowl or shout if you’re angry, but you can also cry or laugh at a perceived injustice, or stew silently. We’re not yet at a point where technology can reliably pick up on these gradations, and to be fair, that’s hard even for humans!

Bottom line: While allowing robots to read our emotions and influence our opportunities sounds incredibly dystopian, it doesn’t have to be all Black Mirror. Like any technology, emotion A.I. probably won’t be “all good” or “all bad”; the impact on humanity will depend on the specifics of how we use it.
