The chatbot ChatGPT is rapidly gaining fame and popularity. The program is a fine showcase of what artificial intelligence can do. But criminals are also looking at how they can use AI for more serious cyber attacks. “It is naive to think that people with evil intentions are not involved in this,” says Dave Maasland of cybersecurity company ESET Netherlands.
Late last year, ChatGPT appeared seemingly out of nowhere, as people on social media shared witty and eye-catching results of their conversations with the chatbot en masse. The program has been in the news a lot lately because students use the AI to outsource their homework: ChatGPT can write complete papers and essays on its own.
But the program can do much more. For example, it can help developers spot errors in their code. “AI will offer added value in all sectors,” says cybersecurity expert Stijn Rommens. “As long as we use it for good.”
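To give an idea of the kind of help Rommens describes, here is a minimal, hypothetical sketch: a function with a classic off-by-one bug of the sort an assistant like ChatGPT will typically flag and correct. The function names and the scenario are invented for illustration.

```python
# Hypothetical example: the kind of bug an AI assistant can spot.
def average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    total = 0
    for i in range(len(numbers) - 1):  # Bug: loop skips the last element
        total += numbers[i]
    return total / len(numbers)

# The assistant would point out the bug and suggest the idiomatic fix:
def average_fixed(numbers):
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))        # prints 2.0 -- wrong
print(average_fixed([2, 4, 6]))  # prints 4.0 -- correct
```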
But according to Maasland, it is naive to think that malicious parties are not experimenting with ChatGPT as well. “Cybercriminals are lazy and want to make quick money as efficiently as possible,” he says.
No code red yet
Cybersecurity company Check Point recently warned of rapidly growing interest in ChatGPT among criminals. Tips for malicious use of the chatbot are already being shared on hacker forums.
However, no attacks set up with the help of ChatGPT are currently known, say both Maasland and Zahier Madhar of Check Point. According to them, it may take a while before the effects of ChatGPT become visible.
Cybersecurity experts are therefore not yet calling it a code red. The tool is new, but the goals have not changed: criminals still want to steal data or money.
Still, there are concerns in the cybersecurity world, says Maasland. “This development can democratize cybercrime.” By that he means that, in one fell swoop, many more people gain access to tools for committing cyber attacks. “I do think that tools like ChatGPT could be the start of a new arms race between attackers and defenders.”
Help with malicious software and phishing
“Suddenly everyone can program,” says Madhar. “People with little technical knowledge can have ChatGPT write code for them. They can even ask the chatbot for an explanation if they can’t figure it out.”
ChatGPT even helps with writing phishing emails. By convincing people in such an email to click on a link or file, attackers can gain access to victims’ computers. Phrase the request cleverly and the chatbot delivers a ready-made text in the name of a courier company. The only thing an attacker still has to add is a piece of malicious software, and the program can help with that too.
For example, the chatbot can help draw up a step-by-step plan for developing malicious software. Madhar himself wrote a script that allowed him to bypass antivirus programs. “The speed with which I managed to do that is bizarre.”
ChatGPT continues to evolve. Maker OpenAI says it does its best to have the chatbot refuse inappropriate requests. “We also use techniques to warn about certain types of unsafe or harmful content, but that can occasionally go wrong,” the company says.
AI also helps combat cybercrime
ChatGPT’s safeguards are quite easy to circumvent. And the fact that the barrier to entry is now so low can eventually lead to many more cyber attacks, says Maasland. According to him, the most important thing is that companies and organizations arm themselves against this. “Many companies still pay no attention to security software or update too infrequently. There is a lot to be gained there.”
But AI will not only be abused by cybercriminals. The same instruments can also be used to ward off attacks. “We used to look at specific files when we wanted to stop a virus; now we look at behavior,” says Maasland.
If a certain type of behavior is observed on a network or computer, an AI can determine with near certainty whether something bad is going on. “Compare it to this: if a person with a balaclava and a crowbar is walking around a house, it probably means he wants to break in,” Maasland explains.
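A rough illustration of what such behavior-based detection boils down to, in the spirit of Maasland’s balaclava analogy: instead of matching known-bad file signatures, a defender scores what a process actually does. This is a toy sketch, not any vendor’s actual product; the event names, weights, and threshold are all invented for illustration.

```python
# Toy sketch of behavior-based detection: score what a process *does*
# instead of matching known-bad file signatures.
# All event names, weights, and the threshold are invented assumptions.
SUSPICIOUS_WEIGHTS = {
    "mass_file_rename": 40,        # e.g. ransomware renaming many files at once
    "disable_backups": 30,         # deleting shadow copies or backups
    "new_outbound_connection": 10,
    "read_browser_credentials": 25,
}

ALERT_THRESHOLD = 50  # assumed cut-off for raising an alert

def risk_score(observed_events):
    """Sum the weights of the suspicious behaviors a process exhibited."""
    return sum(SUSPICIOUS_WEIGHTS.get(event, 0) for event in observed_events)

# A single event (the "balaclava") is not enough; the combination
# (balaclava plus crowbar, circling the house) pushes the score over the line.
process_events = ["new_outbound_connection", "mass_file_rename", "disable_backups"]
score = risk_score(process_events)
print(f"risk score: {score}")
if score >= ALERT_THRESHOLD:
    print("ALERT: behavior resembles ransomware -- isolate the process")
```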
Experts emphasize that ChatGPT is not a bad program in itself. It’s about how you use it, says Rommens. “AI does not yet have self-awareness and must be programmed in a certain way to make decisions and perform work. So ultimately there are still people at the controls who determine what it does.”