Federico Faggin: “Why artificial intelligence needs humans”

“Before him, Silicon Valley was just the Valley.” So says Bill Gates of Federico Faggin, physicist, inventor of the world’s first microprocessor and founder of Zilog and Synaptics, with which he developed, among other things, the first touchpads and touchscreens. With the foundation that bears his name and that of his wife Elvia, Faggin has also long dedicated himself to the scientific study of consciousness, a topic on which he recently published a well-received theory with Professor Giacomo Mauro D’Ariano and a bestseller for Mondadori, Irreducible (2022). In this latest essay, ample space is devoted to today’s debate on artificial intelligence: Faggin, in fact, was investing in neural networks as early as the 1980s, when most scientists still did not believe in them.

In “Irreducible” you write that “if a computer or robot were left to operate completely on its own the results would likely be catastrophic.”

“AI is based on algorithms that have been created through a learning process with vetted data, that is, reliable data. But if we fed it sources containing a lot of misinformation, it would still form certain statistical correlations while producing a lot of nonsense. In fact, the computer has no understanding of what it says: only the symbolic aspect is present, while the semantic one is missing. It is the opposite of what happens in human beings: we can make mistakes in the use of symbols, in syntax, but there is much more semantics in us, that is, we understand things from within. AI must be used by people who know more than it does, otherwise it could deceive them, not because it has evil intentions, but simply because it does not understand what it is saying. If what ChatGPT says is presented uncritically as truth, the problems that will arise can be enormous. It really irritates me that AI is presented as a substitute for our intelligence rather than as something complementary, an aid. That is, it is presented as something that, now or in five years, will become better than us. That is a huge lie: it will never be better than us, because we created it, not vice versa. Dangers arise when you ask ChatGPT to write an article and send it to a newspaper without even reading it. In this way the human being becomes stupid, grows dependent on these tools and is enslaved by those who sell them.”


In an interview with New York Magazine, Stuart Russell stated that ChatGPT, without being in any way sentient, could autonomously set goals and conceive plans, and that this could become a risk for our societies.

“If one were foolish enough to allow a chatbot or any AI to do what it wants, the negative repercussions would be immense, because it would not have the common sense to choose which information to consider valid and which not. It would be easy to ask it to learn from everything it sees: within a month the chatbot would say more nonsense than sensible things, because it would find incoherent correlations, so to speak. Moreover, in China some minor judicial cases are handled by a judge that is an AI program, or at least that is what we read in the newspapers. Making such a choice means setting off down a truly dangerous path.”

Speaking of language models like ChatGPT, the passages from your book come to mind in which you underline that words are not things, that the description of reality is not the same as reality itself, and that there is something that pre-exists language.

“If, as has been happening recently, meaning is confused with symbol, then reality is mistaken for its representation. Not only that, but it goes even further: the representation of the representation of the representation of reality is confused with reality. There are those who say that reality is equivalent to its simulation, but this is not true at all: the simulation of reality is always the simulation of a mathematical model of reality, and therefore the simulation of algorithms. Information, in science, has nothing to do with meaning. In Claude Shannon’s definition, information is simply the cologarithm of the probability that a symbol will appear, regardless of its meaning. We human beings instead use information to communicate meaning, and for us meaning comes before the symbol. In fact, we cannot even understand a symbol if we do not already carry within us a meaning similar to the one the symbol must convey. For example, if we talk about an object our interlocutor has never seen, and we use a word to name that object, that word is incomprehensible to him: even if he recognizes the symbol, he does not understand what it means, because he does not possess its meaning. If someone eats an exotic fruit with a flavor we do not know and wants to describe that flavor to us, he could write an entire book about it, but if we have never tasted the fruit we will never be able to reproduce the meaning, which is its flavor. If, however, we are given that fruit to taste, we will realize how truthful the description was.”
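Faggin’s reference to Shannon can be made concrete with a few lines of code. The sketch below is purely illustrative (the function name and the toy symbol distribution are assumptions, not drawn from the interview): it computes the self-information of a symbol as the cologarithm of its probability, showing that the quantity depends only on probability and never on meaning.

```python
import math

def self_information(probability: float) -> float:
    """Shannon self-information of a symbol: -log2(p), the cologarithm
    of the probability that the symbol appears. Meaning plays no role."""
    return -math.log2(probability)

# A toy symbol distribution: the symbols carry no semantics at all.
symbol_probabilities = {"a": 0.5, "b": 0.25, "c": 0.25}

for symbol, p in symbol_probabilities.items():
    print(f"symbol {symbol!r}: p = {p:.2f}, information = {self_information(p):.2f} bits")

# Average information (Shannon entropy) of the source, in bits per symbol.
entropy = sum(p * self_information(p) for p in symbol_probabilities.values())
print(f"entropy of the source: {entropy:.2f} bits/symbol")
```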


If you were to tell us the main reasons why AI cannot have consciousness, what points would you highlight?

“I would say that conscious experience is private: I cannot transfer to another person a meaning that I carry within me. Only I can know my experience; what I can say about it is only a small part of it. The love I feel for a child is much deeper than any phrase used to describe it: if I tell him that I love him with all my soul, that phrase does not do justice to what I feel inside me. This reality can only be represented by quantum information. In fact, the state of quantum information, the pure quantum state, has exactly the same three properties: first, it is a well-defined state; second, it is private and non-reproducible; third, the maximum information that can be obtained by measuring a quantum system is one bit for each quantum bit, where a quantum bit spans an infinity of possible states while a bit is just 0 or 1. This also tells us that the information that can be measured and shared is only a small part of what we feel inside ourselves. Now, the classical computer is not made of quantum information but of transferable classical information: we can copy a computer’s program or data as many times as we wish. That is why a computer with these characteristics cannot have conscious experience: because conscious experience is not reproducible.”
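A small numerical sketch can illustrate the contrast Faggin draws between quantum and classical information. The code below is a toy simulation under assumed names, not taken from his work: it prepares a qubit state described by a continuous parameter, yet every measurement returns only a single classical bit.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_qubit(theta: float, shots: int = 1000) -> np.ndarray:
    """Prepare |psi> = cos(theta/2)|0> + sin(theta/2)|1> and measure it
    'shots' times in the computational basis. The state is defined by a
    continuous parameter, but each measurement yields only one bit."""
    p1 = np.sin(theta / 2) ** 2          # probability of outcome 1
    return rng.binomial(1, p1, size=shots)

outcomes = measure_qubit(theta=np.pi / 3)
print("first ten outcomes:", outcomes[:10])   # only 0s and 1s ever come out
print("estimated p(1):", outcomes.mean())     # the state itself is never exposed
```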

In your opinion, could AI create a work of art that has real aesthetic value and is not simply decorative?

“By performing more or less random variations on something it has learned, it can accidentally create works that we, and not the computer, will judge beautiful. For example, we can give the computer a certain number of Van Gogh paintings to memorize and design an appropriate neural network architecture for it to learn this artist’s mannerisms. We can then tell the computer which of the works it created we liked, and it can create new ones similar to those. In this way it increasingly refines its ability to do what we want. AI therefore incorporates a part of our common sense, but only at a superficial, symbolic level, never at a semantic level, since it cannot understand. We also learn this way, but it is the trivial part of learning: that of nerds, who repeat by heart what they have read.”
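The feedback loop Faggin describes, a model producing variations and a human choosing which ones to imitate, can be sketched in a few lines. Everything below is a hypothetical toy: the function names, the “style vector” stand-in for a generative model, and the scoring interface are assumptions for illustration, not an actual image model.

```python
import random

def generate_variation(style: list[float], noise: float = 0.1) -> list[float]:
    """Produce a random variation of a learned 'style vector' (a stand-in
    for a generative model's output)."""
    return [value + random.gauss(0.0, noise) for value in style]

def refine_by_feedback(style: list[float], human_score, rounds: int = 50) -> list[float]:
    """Iteratively keep the variation the human scores highest. The machine
    only manipulates symbols (numbers); the judgment of 'beautiful' stays human."""
    best, best_score = style, human_score(style)
    for _ in range(rounds):
        candidate = generate_variation(best)
        score = human_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Toy 'human': prefers vectors close to a target taste it never writes down.
target = [0.8, 0.2, 0.5]
human_score = lambda s: -sum((a - b) ** 2 for a, b in zip(s, target))

learned_style = [0.0, 0.0, 0.0]   # what the network 'memorized' from the paintings
refined = refine_by_feedback(learned_style, human_score)
print("refined style vector:", [round(x, 2) for x in refined])
```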


In the conclusion of your essay you state that the essence of our deepest nature is expressed in cooperation and love. Is this one of the barriers that make AI incomparable to the human condition?

“There is nothing wrong with thinking that machines can cooperate if we teach them to do so by giving them rules: we tell them what cooperation means and give them examples of it. But our cooperation typically arises from the love we feel within ourselves for others, from the desire to help them, from the empathy we have with them. One of the big applications of AI will be mechanical soldiers that kill men or other robots; in fact, in the future the clash will be robot against robot. Man therefore limits himself to transferring his problems to machines, which, if used as machines, carry out what is asked of them. It would not be possible to build machines driven by inner love, because they cannot feel it. Love cannot be codified, nor are there algorithms to teach it.”
