Dutch Data Protection Authority Warns of Risks of Algorithms and Artificial Intelligence: Privacy Violation, Discrimination, and Disinformation

The Dutch Data Protection Authority warns that we are not sufficiently aware of the risks of algorithms and artificial intelligence. AI is causing disinformation, privacy violations and discrimination.

In brief:

  • The Dutch Data Protection Authority warns of the risks of the rapid development and use of algorithms and artificial intelligence (AI).
  • AI can lead to the spread of disinformation, privacy violations and discrimination, especially when trained on unrepresentative data.
  • Despite these risks, AI also offers opportunities for efficiency and well-being; in healthcare, for example, it can contribute to faster diagnoses.

According to the privacy watchdog, the Dutch Data Protection Authority, the use of artificial intelligence and other technological innovations is growing faster than we as a society can keep up with. As a result, we lag behind not only in recognizing the risks, but also in tackling them, for example through regulation and enforcement.

The Dutch childcare benefits scandal illustrates the dangers that algorithms pose to our privacy: tens of thousands of people got into trouble after algorithms wrongly labeled them as fraudsters. But artificial intelligence goes a step further, according to experts.

Since the arrival of the AI chatbot ChatGPT, almost every tech company has been using artificial intelligence. But AI is nothing more than a collection of algorithms, says lecturer Rajiv Girwar of Rotterdam University of Applied Sciences. “As soon as an algorithm can draw conclusions or generate something new, we call it AI.”

According to privacy expert Rejo Zenger of Bits of Freedom, algorithms and artificial intelligence should be viewed as a black box. “You put certain data into the box, but you have no insight into what is done with that data inside it. Then a conclusion emerges or new data rolls out, but you have no idea how it came about.”

Risk of privacy violation, discrimination and disinformation

Algorithms and artificial intelligence can be unleashed on any data that can be found on the internet, such as the location data of millions of Dutch people that is available online. AI could then be used to analyze who might be involved in criminal activity. That is a significant invasion of your privacy.

The risks of artificial intelligence depend on the data with which the AI is trained, explains privacy lawyer Céline van Waesberge of Project Moore Advocaten. Unrepresentative training data, for example, carries the risk of discrimination and of generating incorrect information (disinformation).

Healthcare offers a good example of unrepresentative data. “Traditionally, there has been a lot of medical knowledge about men,” Van Waesberge outlines. “If you train an AI model exclusively on such data, the outcome may not be reliable for children or women, for example.”

AI is often trained with non-representative data

In addition, according to Van Waesberge, people in the Netherlands must in many cases give consent before an AI model can be trained on their privacy-sensitive health data. There is a risk that certain groups of people give consent while others do not.

As a result, certain groups of people may be overrepresented in the training data and other groups underrepresented. The result is that the data with which AI is trained is not representative.

“The conclusions that an AI model draws and the information that the system generates are highly dependent on the data you have put into it,” Zenger also emphasizes. “We assume that computer decisions are neutral, but because the data is often unrepresentative, the information and conclusions generated can be incorrect and even discriminatory.”
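The point Van Waesberge and Zenger make can be illustrated with a toy sketch. This example is not from the article and every number in it is hypothetical: a simple threshold "model" whose decision rule is fitted only on data from one group can systematically misclassify members of another group.

```python
# Toy sketch (all numbers hypothetical): a threshold "model" fitted only
# on data from one group misclassifies members of another group whose
# healthy range differs.

# Training data: (measurement, label) pairs from men only; 1 = sick.
train = [(4.8, 0), (5.1, 0), (5.3, 0), (6.4, 1), (6.8, 1), (7.0, 1)]

healthy = [x for x, y in train if y == 0]
sick = [x for x, y in train if y == 1]

# "Learn" a decision threshold: the midpoint between the class means (~5.9).
threshold = (sum(healthy) / len(healthy) + sum(sick) / len(sick)) / 2

def predict(x):
    """Flag a measurement as sick (1) or healthy (0)."""
    return 1 if x > threshold else 0

# For men the model works: 5.2 is classified healthy, 6.5 sick.
# But suppose women's sick range starts around 5.0: a woman measuring 5.4
# is ill, yet the model -- having only ever seen men -- calls her healthy.
print(predict(5.2), predict(6.5), predict(5.4))
```

The model is not malicious and contains no explicit rule about sex; the skew comes entirely from who was, and was not, in the training data, which is exactly the mechanism the experts describe.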

Supervision of AI is high on the agenda

Because of all these risks, the supervision of algorithms and artificial intelligence is high on the agenda of the Dutch Data Protection Authority this year. “In our supervision, we give priority to complaints, data leaks and investigations involving algorithms and AI.”

“We have also been the coordinating algorithm supervisor in the Netherlands since January 2023,” the privacy watchdog continues. “We do this to protect public values and fundamental rights when algorithms and AI are used.”

Dutch citizens’ confidence in algorithms and AI is low, the Dutch Data Protection Authority concludes. But that distrust is not entirely justified. “The technology can also contribute to sustainable prosperity and well-being.”

Artificial intelligence offers many opportunities

“Artificial intelligence does not only have a dark side,” privacy lawyer Van Waesberge emphasizes. “In healthcare, for example, we really need to embrace it. There are already examples showing that AI can recognize cancer better than the human eye.”

But artificial intelligence will certainly not replace the doctor, Van Waesberge thinks. “AI can help make a good diagnosis more quickly and take over certain other tasks,” says the privacy lawyer. “This leaves the doctor time for more personal contact, for example, or to help more patients. That can make care more efficient and therefore cheaper, and increase confidence in care.”

2024-01-21 04:12:06
