The arrests of Donald Trump and Boris Johnson, and the pope in a puffer coat: images created with artificial intelligence are becoming almost indistinguishable from real photos. But a trained eye can still go a long way.
On social media, you came across it often last week: a photo of Pope Francis on the street, in a trendy white puffer jacket with a large chain and a silver cross over it. In his right hand, a coffee to go.
On Twitter, the image was viewed twenty million times. The social network later added a label stating that the image is not real and was made with the art generator Midjourney.
That program recently received an update, after which it can produce much more realistic images. It also yielded so-called photos of the arrest of former US President Donald Trump and of former British Prime Minister Boris Johnson.
AI art developments are booming
“It’s getting better and more convincing,” says AI expert Mark Evenblij of DuckDuckGoose, a Dutch company working on ways to recognize deepfakes. “Developments are moving very fast; in six months, the results could be much better than they are now.”
That makes it more difficult to spot fake images. But for now, there are a few things the software struggles with, and these can indicate that an image was created by artificial intelligence.
Sometimes it’s obvious. “Glasses that dissolve into thin air, that’s not possible,” says Evenblij. “Reflections in the eyes are often a good indication, too. In deepfakes, the reflections in the left and right eye often differ. If you see that, it can point to a fake image.”
But anomalous reflections are not definitive proof on their own. There may simply be a lamp that reflects in one eye and not in the other. “It is often the sum of deviations,” says the AI expert.
Melting patterns and gibberish
“You can also see it in patterns that merge,” explains Evenblij. “A hat that blends into the hair or skin of a face. AI models also have trouble with letters, which means that texts on clothing, for example, are incorrect.”
This is visible in the arrest pictures of Trump and Johnson: gibberish is written on clothing and caps. The computer also often has trouble with hands. More than once, people have six fingers, and sometimes a hand is attached to the wrong arm.
This happens, for example, in the picture below. It seems lifelike, until you realize that the thumb is on the wrong side. In the pictures of the pope and of Johnson’s arrest, you can also tell from the hands that they are fake.
Fake videos and sounds
Increasingly realistic deepfakes are also being made of video and audio. Programs allow you to superimpose a celebrity’s face on your own, for example. The fake face then follows your movements without delay. In video calls, things can be convincingly faked this way.
But if you know what to look for, you may be able to spot it. “When someone blinks, you often see the fake face become slightly smaller for a moment,” says Evenblij. “There are also limits to the mask. If a person turns his head too far, distortion occurs. So you could ask someone to turn left and right.”
If recordings of a voice exist, it doesn’t take much to get someone to say things they’ve never said.
These kinds of deepfakes are more difficult to expose. “You can often still hear it in the intonation, which is just not human,” says Evenblij. “There is also software that can detect whether something is real or not. The better the sound quality, the more information there is to check.”
The risk of fraud lurks
Artificial intelligence also brings positive developments. It can help with grief therapy and trauma processing, for example, or give a voice to people who can’t talk. But danger lurks as well.
“It is already possible to imitate a voice and fake phone calls,” says Evenblij. “In the United States, we have already seen examples of hyper-realistic voice deepfakes used to commit fraud. A criminal can pose as a grandmother’s grandson and call her saying he needs money quickly. People are going to lose serious money through this kind of thing.”
Governments are working on regulations to safely guide the development of artificial intelligence. But organizations and companies themselves should also think about the ethical use of AI, says Evenblij. “It’s not going away, so we have to find ways to deal with it.”