How to spot a deepfake? It’s all in the eyes.

Researchers have created a tool capable of spotting deepfakes with 94% accuracy — given the right conditions.

Spotting deepfakes is no easy task — just ask Tom Cruise. (There’s a Mission: Impossible joke in here somewhere…)

Because deepfakes are built with sophisticated deep-learning algorithms, computer scientists expect them to keep getting more convincing, making spotting deepfakes ever more difficult — and valuable.

“We all assume that there will be a point where there’s no way to tell the difference. I mean, for visual effects, I think you can get pretty close already. It’s just the question of how much effort you put into it,” USC associate professor and Pinscreen founder Hao Li told PBS NewsHour.

“But in terms of content that (…) can be created by anyone, I think it’s getting very close to the point.”

Deepfakes aren’t all bad: they can be abused to produce porn and political chaos, but a deepfake tool can also bring the past to life or star in TV comedies. Regardless of intent, though, spotting deepfakes is still essential if we don’t want to lose our grip on reality.

Fortunately, researchers at the University at Buffalo have developed a tool capable of spotting deepfakes with 94% accuracy, the university announced in a release.

The Secret’s in the Eyes

Spotting deepfakes is a game of nuance.

“The cornea is almost like a perfect semisphere and is very reflective,” UB computer science professor Siwei Lyu said in a statement. Because of that reflectivity, the eyes in a photograph will reflect back anything emitting light.

“The two eyes should have very similar reflective patterns because they’re seeing the same thing. It’s something that we typically don’t notice when we look at a face,” Lyu said.

Deepfakes, for some reason, often fail to nail these reflections — perhaps because they use composite images to craft their creations. Lyu and his team’s tool takes advantage of this weakness.

The tool — accepted for presentation at the IEEE International Conference on Acoustics, Speech and Signal Processing this June in Toronto, and published as a pre-print — begins by mapping the face. Then it turns its attention to the eyes, then the actual eyeballs themselves, before drilling down further and focusing, pixel-by-pixel, on the light reflected in the corneas.

If there are oddities or deviations in the shape, intensity, or other features of the reflections, you’re (likely) looking at a deepfake.
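
To make the idea concrete, here is a rough sketch of that recipe, not the authors’ code: it uses dlib’s 68-point face landmarks to find the two eyes, keeps only the brightest pixels in each eye as a crude stand-in for the corneal reflections, and scores how well the two patterns agree. The landmark model file, the image name, the 95th-percentile brightness threshold, and the similarity cutoff are all illustrative assumptions.

```python
# Minimal sketch of the reflection-comparison idea (illustrative only).
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

LEFT_EYE = range(36, 42)   # landmark indices for the left eye (68-point model)
RIGHT_EYE = range(42, 48)  # landmark indices for the right eye

def eye_crop(gray, pts):
    """Crop a tight bounding box around one eye from its landmark points."""
    xs, ys = pts[:, 0], pts[:, 1]
    return gray[ys.min():ys.max(), xs.min():xs.max()]

def highlight_mask(eye, percentile=95):
    """Keep only the brightest pixels -- a rough stand-in for corneal reflections."""
    if eye.size == 0:
        return None
    thresh = np.percentile(eye, percentile)
    mask = (eye >= thresh).astype(np.uint8)
    return cv2.resize(mask, (32, 32), interpolation=cv2.INTER_NEAREST)

def reflection_similarity(image_path):
    """Return IoU of the two eyes' highlight masks, or None if the eyes can't be found."""
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()])
    masks = []
    for idx in (LEFT_EYE, RIGHT_EYE):
        m = highlight_mask(eye_crop(gray, pts[list(idx)]))
        if m is None:
            return None
        masks.append(m.astype(bool))
    inter = np.logical_and(*masks).sum()
    union = np.logical_or(*masks).sum()
    return inter / union if union else 0.0

score = reflection_similarity("portrait.jpg")          # hypothetical input image
if score is not None and score < 0.5:                  # illustrative cutoff, not the paper's
    print(f"Reflections disagree (IoU={score:.2f}): possible deepfake")
```

The real system is more careful about segmenting the cornea and comparing the reflections’ shape and intensity, but the intuition is the same: two real eyes looking at the same scene should agree.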

To train their model, the researchers used a database of AI-generated faces from www.thispersondoesnotexist.com, as well as images of real people from Flickr-Faces-HQ.

All of the images, though, were what the researchers describe as being in a “portrait setting” — well-lit, facing the camera, and at 1,024 x 1,024 pixels.

Lyu’s model has some other limitations, too. Since you’re examining the light reflected in the eyes, you need a source of light to be reflected. And you need two eyes, at that — if one is missing or obscured, the tool has nothing to compare.

It also doesn’t examine the shape of the eyes, or what is being reflected in them, per UB; it’s only looking at the pixels, and those pixels could be made to match each other by a sufficiently dedicated and detail-oriented deepfaker.

The Eyes Have It

The need for spotting deepfakes runs from the intensely intimate to the geopolitical. 

“Unfortunately, a big chunk of these kinds of fake videos were created for pornographic purposes, and that (caused) a lot of … psychological damage to the victims,” Lyu said. “There’s also the potential political impact, the fake video showing politicians saying something or doing something that they’re not supposed to do. That’s bad.”

And the deepfakes are proliferating. According to a report from CB Insights, the number of deepfakes identified online jumped from around 24,000 in January 2020 to over 49,000 by June.

As they do, so will the ranks of researchers looking to spot them. 

We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [email protected].
