
The Dark Side of Artificial Intelligence: The Threat of Weapons and Terrorism

Warnings from leaders of artificial intelligence companies that the technology could be used to develop and manufacture weapons threatening the fate of humanity have added another dimension to its growing risks if it is not restrained by law.

Two experts in artificial intelligence and information technology tell Sky News Arabia how real these fears are and how soon they could materialize.

The “terrorism” of artificial intelligence

On Tuesday, three influential artificial intelligence leaders warned in testimony at a US congressional hearing that the rapid acceleration of AI development could cause serious harm within the next few years, on a scale comparable to terrorists using the technology to manufacture biological weapons.

The hearing took place days after AI companies made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer.

Justified concerns

Information technology expert Salloum Al-Dahdah speaks of a “dark and dangerous side” of artificial intelligence, because these systems are capable of “self-learning and self-development.”

Artificial intelligence can be used to develop “lethal and destructive” weapons, depending on the programming a person feeds into these systems before they begin “self-development,” he explains.

He asks: “What will happen if artificial intelligence reaches people with an interest in doing evil or developing destructive weapons?”

Such a scenario, the technology expert believes, could “destroy everything,” turning these smart systems into “a means that threatens humanity and develops military equipment and weapons with unlimited capabilities.”

‘Unlimited’ risk

For his part, information technology expert Islam Ghanem believes that the development of artificial intelligence “has no limits.” For that reason, the United States and other Western countries are pressing to ensure that giant companies do not work in the field alone, so that AI does not become a tool that threatens these countries or tampers with the security of humanity.

Generative artificial intelligence, which uses data to create new human-like content, as ChatGPT does, has stunned the world with its capabilities, and parliaments and governments, including the European Parliament, have begun drafting legislation to rein it in and place it under oversight and control.


Two types of AI companies

According to Salloum Al-Dahdah, the greatest risks of artificial intelligence relate to “the ability of these systems to learn on their own, and to do things that humans do not want.”

“We support the development of artificial intelligence, but it must be determined what these systems can learn, and what they should not know or engage in,” he explains.

He points to other risks related to feeding these systems “malformed data or incorrect applications, or their being hacked by parties that may misuse artificial intelligence capabilities.”

For his part, Islam Ghanem distinguishes two types of companies developing artificial intelligence systems. The first type responds to these concerns and works to “adjust the performance of smart systems.”

But he fears a second type of company that “does not respond to these concerns or deems them unjustified” and rushes to produce highly intelligent systems “without controls or rules.”


2023-07-29 08:25:09
