Artificial intelligence (AI) has become an integral part of many children’s lives. Voice assistants play audio dramas or tell jokes on request for the little ones. Language models such as ChatGPT explain math problems to older children or help with presentations.
But what if the AI gives children dangerous advice or shows them images or videos that are unsuitable for their age? Does AI need parental controls?
Children have different needs and communicate differently than adults, but according to Nomisha Kurian of the University of Cambridge, AI technologies are not set up for this. Children should be given more attention as a target group, the researcher argues in a study published in the journal “Learning, Media and Technology”. For the study, the education researcher examined several known cases in which chatbots or voice assistants had given children risky, dangerous or age-inappropriate advice.
Drinking game and horror film
According to the study, Snapchat’s MyAI chatbot, which is popular among young people, advised researchers posing as a young person on how to seduce an older man. The voice assistant Alexa, in turn, encouraged a ten-year-old child to touch the prongs of a charging plug with a coin while it was plugged in.
Tests by the platform Jugendschutz.net also produced worrying findings: MyAI presented an alcohol drinking game to a user registered as 14 years old and recommended a horror film rated 18 and over.
In the cases described by Kurian, the companies concerned subsequently tightened their safety measures. In her view, however, it is not enough for AI developers to react to such incidents; they must consider children’s safety from the outset, she insists.
Martin Bregenzer of the Klicksafe initiative sees it the same way: “Adding child protection after the fact usually doesn’t work. We see that with many services.”
Deepfakes as a risk
Many experts see the flood of AI-generated fake images and videos on the Internet, so-called deepfakes, as the biggest problem. These can now be created and distributed in no time, according to the annual report from Jugendschutz.net: “Many of the generated fakes look deceptively real and are difficult to distinguish from actual photos.”
With the help of generative AI, disturbing content such as violent or sexual depictions could be generated on a massive scale, explains Bregenzer. This could make it even easier for children and young people to become victims of cyberbullying.
What is true and what is false? Even adults can sometimes barely tell the difference on the Internet. Children find it even harder because they lack the judgment and experience, says David Martin, an expert on screen media at the German Society for Pediatric and Adolescent Medicine (DGKJ). “Children fundamentally have the capacity to believe everything.”
In this context, the expert takes a critical view of how tempting it is to have language models such as ChatGPT compile all the important information, for example for a school presentation. Researching and selecting material yourself then becomes unnecessary: “A very important skill for our democracy – the ability to judge – is thereby put at risk.”
Chatbots that act like humans
Many language models, however, give the impression of weighing up the information themselves: they do not answer questions in a single block, but gradually – as if a human were typing on a keyboard. From Kurian’s point of view, it is particularly problematic that children could trust a human-sounding chatbot like a friend – one with whom they sometimes share very personal information, but whose answers can also be particularly disturbing.
Nevertheless, AI should not be demonized; its positive sides should also be recognized, says Markus Sindermann of the NRW Youth Media Culture Office. Artificial intelligence is first and foremost a technical tool: people can use it to generate false information, but it can also be used to track down precisely such information and delete it from the Internet.
The examples from Kurian’s study and from the Jugendschutz.net annual report date from last year, Sindermann adds. “Developments in artificial intelligence are so rapid that they are actually already outdated.”
Martin, of the University of Witten/Herdecke, therefore expects that AI will be able to respond much better to children in the future. “The big danger could then be that AI becomes so good at addressing children’s reward systems that they want to spend as much time as possible with it.”