
A new weapon against deepfakes

How can you spot fake videos created by artificial intelligence? With an artificially intelligent system.

Deepfakes are hardly distinguishable from the real thing. When fake videos were still new, making one was not easy. Deepfake technology has since advanced considerably and is no longer reserved for whiz kids. Therein, of course, lies a danger. Computer scientists from the University at Buffalo are fighting back: they developed a smart tool that correctly labeled portraits as deepfakes in 94 percent of cases.


Deepfakes

Deepfakes are created by combining artificial intelligence with machine learning: if you feed a computer mountains of photos and videos of someone, it learns that person’s facial expressions. If you do that with the faces of two people, you can then easily transfer the expressions from one face to the other.

But that technology falls short on one important point, say the researchers: the reflection in the eyes is incorrect. “The cornea is almost a perfect hemisphere and is highly reflective,” says researcher Siwei Lyu. When we look at something in real life, its reflection appears in both eyes. But that’s not the case with deepfakes, possibly because of the many different images used to create them.

Studying reflections

The University at Buffalo team used this knowledge to develop a new tool that can recognize deepfakes. The artificially intelligent system first locates the face and then looks for minute anomalies in the eyes’ reflections in photos and videos. It analyzes the differences in shape, position, and intensity of the reflected light source between the two eyes.
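The article does not spell out how the two reflections are compared; the core idea can be illustrated with a minimal sketch. Assume we already have two small, equally sized grayscale crops of the eyes (in practice these would come from a face-landmark detector), isolate the bright specular highlight with a simple intensity threshold, and score how well the two highlight masks overlap with intersection-over-union (IoU). The function name and threshold here are illustrative assumptions, not the researchers’ actual code:

```python
import numpy as np

def reflection_iou(eye_left, eye_right, threshold=200):
    """Toy comparison of the corneal highlights in two eye crops.

    eye_left / eye_right: 2D uint8 grayscale arrays of the same shape,
    assumed to be pre-aligned eye regions. The bright highlight is
    isolated by thresholding; the two masks are scored with
    intersection-over-union. A score near 1 means near-identical
    reflections (consistent with a real photo); a score near 0 means
    the reflections do not match, which is suspicious.
    """
    mask_l = eye_left >= threshold   # highlight pixels in the left eye
    mask_r = eye_right >= threshold  # highlight pixels in the right eye
    union = np.logical_or(mask_l, mask_r).sum()
    if union == 0:
        return 0.0  # no highlight detected in either eye
    inter = np.logical_and(mask_l, mask_r).sum()
    return inter / union
```

A real photo with matching reflections would score close to 1.0, while a deepfake whose eyes reflect different scenes would score much lower; a classifier would then flag images below some cutoff.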

The researchers tested the algorithm by showing it pictures of real people alongside fake images (taken from the website thispersondoesnotexist.com). All photos were portraits, with the person looking straight into the camera. In this setup, the system correctly identified which images were not real in 94 percent of cases.

Blinking

Yet the tool still has some limitations, the scientists admit. For example, it only works by comparing the reflections in both eyes, which must match. The tool therefore cannot handle people with a glass eye or with only one eye. And if someone’s hair falls in front of the eyes, the algorithm also becomes less reliable.

Fortunately, deepfakes can also be recognized in other ways. For example, the people in fake videos usually don’t blink. In addition, the breathing often does not match the spoken sentences, say scientists at the State University of New York. Something to remember.

Sources: Arxiv.org, University at Buffalo, New Atlas

Picture: Buzzfeed
