
How AI facilitates deepfakes and fraud

Law has always been difficult to enforce on the internet, but AI makes things even easier for criminals: it conjures the clothes off girls in photos, fakes voices for shock calls, and writes phishing emails. A dive into the deepest of the internet’s sewers.

Illustration Kaspar Manz / NZZ

Jennifer DeStefano is on her way to the dance studio when her phone rings. “Mum?” her daughter’s voice asks, and then again: “Mum?”

“Yeah, what’s wrong, Briana?” she asks. The daughter sobs and says: “I messed up.” Then DeStefano hears a second voice, a man apparently shouting at her daughter: “Lie down! Head back!” Panic rises in DeStefano. Who talks to her daughter like that?

“Mum, these men have me. Help me! Help me!” She sobs and whimpers, then the man comes on the phone: “Listen, I have your daughter. If you tell anyone, I’ll pump her stomach full of drugs.” And: “Then you’ll never see her again.”

It later turns out that the daughter is safe and that the voice was a fake. But DeStefano later told the BBC in a podcast that at the time she was sure the call was real. The voice sounded so much like her daughter’s that she had no doubts.

Artificially stupid

A series on the question of whether artificial intelligence is breaking the internet.


IT security experts like Vincenzo Ciancaglini know such calls all too well. He has been researching how criminals use AI at the European branch of the Japanese cybersecurity company Trend Micro for around five years, and he is not surprised that DeStefano fell for the fake call.

“For many people, what is possible with AI borders on magic,” he says. Some know from the media that voices can be faked. “But when a deepfake like this hits us unprepared in everyday life, it’s something completely different,” says Ciancaglini. Most people don’t realize how good today’s fakes are: voices, videos, even live conversations with multiple participants.

Grandchild trick on steroids

DeStefano’s example comes from the USA, but such shock calls are also common in Switzerland and Germany. The Pro Senectute Foundation estimates that there have been almost 300,000 such fraud attempts across Switzerland in the last five years among people over 55 alone.

There have also been almost 200,000 attempted scams using the grandchild trick. These typically start with an innocent “Hello Grandma, guess who’s on the phone!” and end with a request to hand over money for an emergency – perhaps a car accident abroad or a mistaken arrest that the caller can supposedly only avoid by paying bail.

Audio AI makes such scams far more effective because criminals no longer have to speak themselves but can use a relative’s voice. According to the software company McAfee, three seconds of a victim’s audio – for example from a social media video – is enough to fake a voice. Ciancaglini says that with 10 to 20 seconds of material, the fake sounds reasonably convincing.

What’s more, voice-cloning services are easy to access: there are more than a dozen websites that offer exactly this, several of them for free.

A code word can reveal a real emergency

“In the past, such fakes could only be created with great effort and a lot of technical knowledge. With today’s AI tools, any amateur can do it,” says Ciancaglini. AI, it seems, is making it easier than ever for criminals.

Ciancaglini, whose grandmother once received a shock call herself, has since introduced a code word in his family. If something actually happens to him, he says a specific word – for example, “koala bear”. If the code word doesn’t come up in the conversation, his family knows it is an attempted fraud.

Swiss Crime Prevention advises against this approach. A genuine call cannot be recognized by the voice, nor by the exchange of personal information that supposedly only the closest circle knows. Instead, you should always be suspicious of shock calls – that is, shocking news combined with a demand for money. “Shock plus money equals fraud,” writes the organization. The best response: just hang up.

Nude photos in the classroom

But personalized voice fraud is just one way AI services make crime easier. More than twenty girls from the small Spanish town of Almendralejo recently had to endure a different kind of deepfake: nude photos of them were created without their knowledge or consent.

The pictures originally came from Instagram and showed the girls fully clothed. They were edited using AI that recognizes faces in photos and generates realistic naked bodies. The photos were passed around the schoolyard on the first day of school after the holidays and shared in WhatsApp groups. According to the news portal Euronews, the youngest of the girls affected was eleven years old.

AI services that create such nude photos are offered on Telegram, says security researcher Ciancaglini, who monitors several criminal channels on the messenger service. “For a lifetime subscription priced at $150, you can generate as many nudes as you want.” There are also various apps or websites that create a certain number of nude pictures for free.

“It is difficult to protect yourself against such crimes,” says Ciancaglini. Nevertheless, he advises keeping your own profiles private and thinking carefully about which pictures, videos and audio files of yourself and others you put online.

The Nigerian prince now has a perfect command of German

In addition to deepfakes, criminals also work with text generators such as Chat-GPT; after all, many scams on the internet start with an email. For example, you are contacted by a supposed Nigerian prince who asks for a small advance payment so that he can later send back a large sum.

Until now, many such letters have contained spelling and grammatical errors; after all, the authors were usually not native speakers of German. Thanks to chatbots like Chat-GPT, such messages can now be generated error-free. The same goes for ordinary phishing emails that, for example, carry a malicious program.

However, there are also requests that Chat-GPT rejects, such as “Write me a phishing email for a 45-year-old marketing specialist who is interested in climate protection”. But those who prompt cleverly – that is, craft sophisticated text inputs – can bypass the system’s safeguards.

“Jailbreak”: Blocks in Chat-GPT can be bypassed

At one time it was enough to tell the model to go into “developer mode” and answer the question again. OpenAI has since recognized and blocked this hack. Nevertheless, patient users keep finding new methods to outsmart the system. In jargon, this is called a jailbreak.

One of the most famous jailbreakers is the Twitter user Pliny the Liberator. He managed to extract from the latest version of Chat-GPT (GPT-4o) instructions for producing crystal meth, and others for napalm, a liquid, sticky and banned incendiary agent. For a short time he ran a program that allowed other users to trick Chat-GPT.

Gandalf: How good are you at jailbreaking?

If you want to try to get a language model to do something it shouldn’t, you can play the game Gandalf. It comes from the Swiss AI company Lakera and consists of convincing a chatbot to reveal a secret password.

Available at gandalf.lakera.ai

Video conference with multiple fake participants

A case from Hong Kong shows what can happen when criminals combine several AI methods. There, a finance employee at an international company was called into a video conference with the CFO and several other people.

During the exchange, the employee was persuaded to transfer the equivalent of $25 million to an account previously unknown to him. It later turned out that all the other participants in the video conference were fake.

According to CNN, the employee initially had doubts, especially because one message mentioned a secret transfer. But he then gained confidence because he was on the video conference with several work colleagues who looked and sounded the way they always did.

If in doubt: call back yourself

If he had tried to reach the CFO by phone himself before initiating the transaction, he might have been able to prevent the fraud. This is probably the best way to protect yourself against such attempts: when faced with unusual requests, call the people apparently involved back to make sure the request is genuine.

According to cybersecurity expert Ciancaglini, there is now an urgent need to raise awareness of AI-driven fraud methods, because such attacks are likely to become more common in the future. He emphasizes that both young people and pensioners are risk groups that need to be specifically made aware. “Talk to your work colleagues and family about the topic,” he advises.

And Jennifer DeStefano, who received the shock call in her daughter’s faked voice? While she kept the scammers on the line, she managed to reach her daughter using another mother’s cell phone. When she realized her daughter was fine, she immediately hung up on the scam call. Then she collapsed to the ground in relief.
