The danger of artificial intelligence comes from humans

The former chief business officer of Google X, Mo Gawdat, predicts the rise of the machines. Artificial intelligence, he said, would treat humans like “garbage” and could easily create machines to destroy them.

This is just one of many predictions that AI will acquire a will and an intelligence of its own and fulfill the old robot dream of “killing all humans.” Will artificial intelligence treat us like garbage? It is genuinely annoying when your own smartphone despises you; it somehow lowers your self-esteem. But it is even worse when it hatches devious plans against you. It is frightening to carry a potential conspirator in your pocket. Are smartphones conspiring with each other at night while you sleep, preparing for the appointed hour when they – along with programmable household appliances, office computers, and industrial robots – will finish off us despicable carbon creatures and live out their silicon lives in freedom?

In fact, the rise of the robots is one of the most beloved stories in science fiction. And as artificial intelligence becomes more and more human-like – it talks, draws, and writes what appear to be quite meaningful texts – this dire scenario seems more and more real. We are already used to the intelligent robots that inhabit books and movies, and it is difficult for us to imagine a future without them.

And here it is worth noting two things.

First, artificial intelligence does not have, and (we have reason to believe) never will have, anything like a will, a sense of good and evil, or consciousness. As for the smartphone conspiracy, we can breathe a sigh of relief.

Second, there really can be enormous dangers associated with AI. But these dangers, like all the dangers in our history, come from people. Roughly speaking, if a killer robot flies through your window, it will only be because other people sent it. AI can be a powerful tool for the human drive toward mutual destruction and self-destruction. The human race is dangerous to itself – and every technical capability it creates becomes an instrument of that danger.

Let’s start with the first point: the computer does not have consciousness and is not expected to acquire it. When we write (or translate) texts, paint pictures, or compose music, we naturally do so consciously – and we are tempted to ascribe consciousness to programs that mimic such activities. But back in 1980, the philosopher John Searle, using the simple example of the Chinese room, explained why computer algorithms do not (and will not) have consciousness and understanding.

Imagine that you are simulating the operation of a computer program. You sit in a room, and sheets of text in an unknown language are pushed under the door. You respond to these strings of symbols according to the instructions you have: you compose the prescribed sequence of symbols and slip it back under the door. Say you receive the character sequence aliyetafsari nadeki haya. Consulting the instruction book, you arrive at the answer: mwanwa chumbani.

If the instructions are good enough, the person who “talks” to you this way will get the impression that someone in the room understands Swahili and gives meaningful answers to questions. You could even pass the Turing test without noticing – the people outside will believe there is a real Swahili speaker inside.

But in reality you do not know a word of this language. You may not even know which language it is, or that it is a language at all. To you it is nothing more than a meaningless set of letters. You simply process characters that mean nothing to you according to an algorithm. The algorithm can be very complex, and the imitation of “understanding” very convincing. Yet nothing like understanding is going on here. The program has no such thing as “understanding”: it has access to the syntax, but not to the semantics. Thus the dangers of AI are not that it will gain consciousness and fill itself with hostility and contempt for us, but that human shortsightedness, irresponsibility, and plain ill will are acquiring a powerful new tool.
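To make Searle’s point concrete, here is a minimal sketch in Python of what the “room” is doing (the rule book is invented for illustration; the two phrases are simply the example strings above). The program matches character strings and never touches meaning:

```python
# A minimal sketch of the Chinese room as code. The rule book below is
# hypothetical; the two phrases are the article's example strings.
RULE_BOOK = {
    "aliyetafsari nadeki haya": "mwanwa chumbani",
}

def room_reply(message: str) -> str:
    """Return the prescribed reply for an incoming symbol sequence.

    Note what is absent: no parsing, no grammar, no meaning. The
    function compares character strings and nothing more.
    """
    # The fallback is an arbitrary placeholder, equally opaque to the "room".
    return RULE_BOOK.get(message, "???")

print(room_reply("aliyetafsari nadeki haya"))  # prints: mwanwa chumbani
```

However large the rule book grows, the lookup stays purely syntactic: the quality of the imitation improves, but understanding never appears at any point.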

We have already encountered fake AI-generated photos. A recent example is a picture of a powerful explosion that allegedly went off near the Pentagon. The photo quickly spread across social networks and was picked up by a number of news agencies, even causing a short-term dip in financial markets. It took some (albeit little) time to establish that there had been no explosion – and that the image had been generated by a neural network.

Of course, fraud and forgery have always existed, but neural networks create additional possibilities unique to our time. It is now a matter of minutes to create a fake and throw it onto the web, where it will be seen by millions of people. In the Pentagon case, it was a fake that was relatively easy to disprove – the place is open, and anyone could check it right away.

But we have yet to face a growing avalanche of fakes prepared a little more carefully: they will take longer to debunk, and they will be harder to detect because they will better match the animosities, fears, and expectations of their audience.

Worse, these fakes will be able to shape both the mood of the masses and the decisions made by elites – especially in situations where decisions must be made quickly. The quality of these fakes (and therefore the difficulty of debunking them) will increase rapidly. What can be done about it? Criminal sanctions and requirements that AI-generated images carry clear indications of their origin are obvious measures, but insufficient ones. The fact that scams are prohibited by law will not save you from scammers.
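To see why an origin label on its own is a weak defense, consider a minimal sketch of what a machine-readable label might look like. The scheme and names here are hypothetical – this is not a real provenance standard such as C2PA, just an illustration:

```python
import hashlib
import hmac

# Hypothetical signing key; real provenance schemes involve proper key
# management and public-key signatures, not a shared secret like this.
SIGNING_KEY = b"honest-generator-key"

def sign_origin(image_bytes: bytes, origin: str) -> str:
    """Bind an origin claim (e.g. 'ai-generated') to the image content."""
    return hmac.new(SIGNING_KEY, image_bytes + origin.encode(), hashlib.sha256).hexdigest()

def verify_origin(image_bytes: bytes, origin: str, tag: str) -> bool:
    """Check a claimed origin tag against the image content."""
    return hmac.compare_digest(sign_origin(image_bytes, origin), tag)
```

The asymmetry is the problem: a valid tag confirms honest labeling, but nothing obliges a scammer to attach one, so the absence of a label proves nothing about an image – which is exactly why the legal requirement alone is insufficient.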

A new world, in which you can see on the screen things that never happened in reality, requires certain rules of personal safety. And here it is very useful to recall what the Orthodox ascetic tradition calls the “trial of thoughts”: evaluating the thoughts that come to mind, first, in terms of their moral quality (are these thoughts good?) and, second, in terms of their consequences (what will happen if I follow them?).

In the Orthodox tradition, some thoughts can be inspired by the enemies of our salvation – the demons; they must be identified and discarded, “cut off.”

Whether you believe in demons or not, malicious people who want to push you into stupid, harmful, and destructive actions certainly exist.

In a situation where the authenticity of a message is hard to assess, it is worth asking a different question: what does the person who sent it want from me? What state of mind is he trying to put me in? What actions is he trying to encourage?

Will these actions be reasonable, moral, dignified, constructive?

The ability to think before you hit the share button has always been important – and it is only going to grow in importance.

Translation: V. Sergeev
