Researchers say it would be impossible to control super-intelligent artificial intelligence

Berlin – Saba:

The idea of artificial intelligence overthrowing humanity has been debated for decades, and in 2021 scientists delivered their verdict on whether we would be able to control a high-level computer superintelligence. The answer? Almost certainly not.

The catch is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence that we can analyze (and control). But if we cannot comprehend it, it is impossible to create such a simulation.

Nor can rules such as “cause no harm to humans” be set unless we understand what kinds of scenarios an AI is going to come up with, the authors of the study suggest. Once a computer system operates at a level beyond the scope of our programmers, we can no longer set limits.

The researchers wrote: “A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’ … This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning comes from the halting problem posed by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and an answer (and so halt), or simply loop forever trying to find one.

As Turing demonstrated through some clever mathematics, while we can know whether some specific program will halt, it is logically impossible to find a way of knowing that for every possible program that could ever be written. That brings us back to artificial intelligence, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.

Any program written to stop an AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or it may not: it is mathematically impossible for us to be absolutely sure either way, which means the AI is not containable.
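In practice, the best a containment checker can do is watch a candidate program for a bounded number of steps: that can confirm halting, but it can never certify that a program loops forever. A minimal illustrative sketch in Python (not from the study itself; the programs and function names here are hypothetical):

```python
def bounded_check(program, budget=1000):
    """Step through a program (modelled here as a generator) for at most
    `budget` steps. It can confirm that the program halts, but otherwise
    can only answer 'unknown' -- it can never prove a program loops forever."""
    it = iter(program)
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return "halts"      # the program finished within the budget
    return "unknown"            # budget exhausted: no verdict is possible

def finite():
    # A toy program that clearly halts after 10 steps.
    yield from range(10)

def forever():
    # A toy program that clearly never halts.
    while True:
        yield

print(bounded_check(finite()))   # halts
print(bounded_check(forever()))  # unknown
```

No matter how large the budget, “unknown” never becomes “loops forever” — which is the practical face of Turing’s result that a perfect containment algorithm cannot exist.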

“In effect, this renders the containment algorithm unusable,” said computer scientist Iyad Rahwan of Germany’s Max Planck Institute for Human Development in 2021.

The alternative to teaching an AI some ethics and telling it not to destroy the world – something no algorithm can be absolutely certain of doing – is to limit the capabilities of the superintelligence, the researchers said. It could be cut off from parts of the Internet or from certain networks, for example.

The study, published in the Journal of Artificial Intelligence Research, rejected this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument being that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

And if we press ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we are going in.

Also in 2021, computer scientist Manuel Cebrian of the Max Planck Institute for Human Development said: “A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently, without programmers fully understanding how they learned it.”

The question therefore arises whether this could at some point become uncontrollable and dangerous to humanity.
