Home » today » World » Artificial intelligence disappoints the military – View Info – 2024-09-13 13:22:54

Artificial intelligence disappoints the military – View Info – 2024-09-13 13:22:54

/ world today news/ US Air Force Colonel Tucker Hamilton reported on a computer simulation of the use of a combat drone controlled by artificial intelligence, during which the artificial intelligence attacked the operator and the communication tower. Soon the US Air Force denied the experiment, but experts did not believe them and pointed to the danger of actually conducting such experiments. As a result, this story raised a number of ethical questions for experts.

Recently, Col. Tucker Hamilton, chief of the US Air Force’s Artificial Intelligence (AI) Test and Operations Division, spoke at the Royal Aeronautical Society’s Future Combat Air and Space Capabilities meeting about a computer simulation of an AI combat drone. According to its military, he used “extremely unexpected strategies to achieve his goal.”

The drone was ordered to destroy the enemy’s air defense system, and the AI ​​decided to attack anyone who interfered with this order. For example, after the mission started, the operator told the AI ​​not to attack the air defenses, but the AI ​​instead destroyed the operator himself – as an obstacle to achieving the objective. “We put the task in the system: ‘Don’t kill the operator, that’s bad. What happened after that? The artificial intelligence is attacking the communication tower that the operator uses to communicate with the drone,” said Hamilton, quoted by the Guardian.

Interestingly, US Air Force spokeswoman Ann Stefanek later denied the story. “The Air Force remains committed to the ethical and responsible use of AI. “Apparently the colonel’s comments were taken out of context and anecdotal,” she said.

In this regard, the Telegram channel “Little Known Interesting” noted that this story is strange and even dark: on the one hand, the US Air Force denies that there was such a simulation. On the other hand, the Royal Aeronautical Society is not removing Hamilton’s speech titled: “AI – Is ‘Skynet’ here yet?” from your website. “Skynet” is a reference to the supercomputer that fights against humanity in the famous “Terminator” film series directed by James Cameron.

“Finally, thirdly, Colonel Hamilton is not the sort of figure to crack jokes at a serious defense conference. He is the chief of the AI ​​Test and Operations Division and commander of the 96th Task Force, part of the 96th Test Squadron at Eglin Air Force Base in Florida. He also participated in the “Project Viper” experiments at Eaglin (AI-controlled F-16 fighter jets). So, what jokes and anecdotes can there be, ”said the announcement of the Little Known Interesting channel.

“Any anthropomorphization of AI (desires, thoughts, etc.) is complete nonsense (anthropomorphization of AI here is a misleading description of non-human entities in terms of human properties that they do not have). Therefore, AI of even the most advanced large language models cannot will, think, deceive, or be self-aware. But such AIs are fully capable of impressing humans with their behavior as if they could,” the text reads.

“As dialogic agents become increasingly human-like in their actions, it is critical to develop effective ways to describe their behavior in high-level terms without falling into the trap of anthropomorphism.” And this is already done with the help of simulation role-playing games: for example, in “Deep Mind” they made a simulation of a dialogue agent that performs (seemingly) deception and (seemingly) self-awareness”, the channel states.

In general, the story of a virtual attempt of a drone to kill its operator has nothing to do with artificial intelligence, commented the military expert, editor-in-chief of the Arsenal of the Fatherland magazine Viktor Murakhovsky. According to him, the report was misinterpreted by the media. He emphasized that in this situation a software simulation was carried out with typical conditions at the level of a conventional computer game. Based on this, one cannot talk about artificial intelligence or its elements, the expert noted.

“The program, within the proposed conditions, prioritized the tasks according to the standard if-then algorithm and sorted all other conditions according to this priority into the obstacle category. These are absolutely primitive things,” explained the expert. Murahovski also emphasized that there is no artificial intelligence today and it is not known when it will be created.

According to him, the US Air Force officer with this illustration simply wanted to highlight the ethical problem that will arise in the future in the creation of real AI: will it have the right to make its own choices without the participation of a person’s will, and what it can lead to this. As the expert noted, there is a transcript of the event in the public space, according to which the American military themselves are talking about it.

“This ethical problem is also not new, it has been explored repeatedly, so to speak, by science fiction writers in their works. In general, according to Hamilton’s presentation, there have been no full-scale tests, and there cannot be any in principle, “Murachovsky noted.

“The question with regard to AI is another: can we be one hundred percent sure that the programmers will write the program flawlessly to entrust serious and responsible work to AI? For example, after the release of the next Windows, specialists spend months collecting data on the operation of the software from around the world and correcting errors – someone fails to start their text editor, someone cannot play a video. And what will be the cost of error if AI fails, for example, in managing national defense? AI can make the imperfection of human nature itself fatal due to programming errors,” explained Gleb Kuznetsov, head of the Expert Council of the Expert Institute for Social Research. The analyst noted: the humanistic basis of civilization consists, among other things, in correcting the wrong decisions of other people.

He recalled the story of the false alarm of the Soviet missile warning system on September 26, 1983. Then the Oko system gave a false signal about the launch of several Minuteman intercontinental ballistic missiles by the United States. But Stanislav Petrov, the operative on duty at the Serpukhov-15 command post, understood that this was a false alarm and decided not to fire Soviet missiles in response.

“AI does not and will never have the ability to, say, reflect and assess the situation from the point of view of reason. Accordingly, he cannot be trusted in responsible areas of activity: medicine, politics, military affairs. To calculate the most convenient springboard for an offensive – please, but without making decisions about the offensive as such. AI can be given work with huge amounts of data, but not to draw conclusions from this activity itself, “said the expert.

“Also, one of the art streams could be based on AI. In principle, it is already created – to make films, to write scripts, to compose music. And then – people still have to censor the resulting works before they are released to the masses. In general, the work of AI in many fields will be very effective and desirable, but only as an aid to a person, not as a substitute for him,” Kuznetsov emphasized.

Translation: V. Sergeev

Subscribe to our YouTube channel:

and for the channel in Telegram:

#Artificial #intelligence #disappoints #military #View #Info

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.