
Artificial intelligence: why it is wrong to humanize the machine and computerize the mind

Artificial intelligence (AI) can generate misunderstandings in many ways. While the dizzying developments in software and hardware are beyond the reach of most of us, perhaps the deepest source of confusion is the technical vocabulary of AI. Crowded with terms borrowed from the brain and cognitive sciences (BCS, which include cognitive science and neuroscience), AI acquires unjustified biological and cognitive properties that undermine its understanding. In turn, the scientific disciplines that study the brain functions underlying learning and behavior have increasingly borrowed from the computer and computational sciences on which AI is based, transforming the most complex and multifaceted biological entity we know into a simple calculating machine.



Conceptual borrowing

Artificial intelligence scholars speak, for example, of “machine learning”, an expression coined (or popularized; the debate is open) by Arthur Samuel in 1959 to indicate “the development and study of statistical algorithms capable of learning from data and generalizing to new data, and therefore performing tasks without explicit instructions”. But this “learning” does not mean what neuroscientists and cognitive psychologists mean when they refer to the way humans or animals acquire new behaviors or mental contents, or modify existing ones, as a result of experiences in the environment. Likewise, in AI we speak of “hallucinations” to describe errors or deviations in the output of a model with respect to well-founded and accurate representations of the input data. This is very different from our hallucinations, disturbing perceptual experiences that arise in the absence of external stimuli.
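To make the statistical sense of the term concrete, here is a minimal sketch in Python (the data and figures are invented for the example): an algorithm that fits a line to noisy samples and then “generalizes” to inputs it has never seen, with no explicit instructions about the underlying rule.

```python
# A minimal sketch of "machine learning" in Samuel's sense: a statistical
# algorithm that fits parameters to data and then generalizes to inputs
# it has never seen. Illustrative only; the data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of an unknown relationship y ≈ 3x + 1.
x_train = rng.uniform(0, 10, size=50)
y_train = 3.0 * x_train + 1.0 + rng.normal(0, 0.5, size=50)

# "Learning": estimate slope and intercept by least squares.
A = np.column_stack([x_train, np.ones_like(x_train)])
slope, intercept = np.linalg.lstsq(A, y_train, rcond=None)[0]

# "Generalizing": predict on new inputs the model was never shown.
x_new = np.array([12.0, 15.0])
print(slope * x_new + intercept)  # close to [37, 46]
```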

To explain this confusion we must take a step back and start from an idea of Carl Schmitt, who observed that “all significant concepts of the modern theory of the state are secularized theological concepts”. For example, political notions such as “sovereignty”, “state of exception”, “sovereign will”, “omnipotence of law” and “legitimacy” can be traced back to theological concepts: this conceptual borrowing did not eliminate the structure or influence of the theological concepts, but recontextualized them within a secular framework. It is not only a historical observation, but also a severe criticism. Conceptual borrowing limits the critical evaluation of political concepts precisely because of their theological roots, from which they have never fully emancipated themselves, while power dynamics and decision-making processes in politics still reflect the structures established by religious thought.




The missing words

These considerations can be extended to other disciplines. When new sciences emerge, they lack a technical vocabulary to describe and communicate their phenomena, problems, hypotheses, observations, formulations, theories, and so on. There is an urgent need to be precise, clear, coherent and concise; to agree on definitions and promote standardization. The gaps are filled by inventing new terms, drawing on Greek or Latin roots, or adopting and adapting technical expressions from other disciplines. Artificial intelligence developed very rapidly and needed to borrow its vocabulary from related fields: cybernetics, logic, computer science and information theory; and above all from the sciences that study human and animal behavior and its biological bases. The phenomenon began with Alan Turing, whose parallel with human intelligence and behavior had a decisive influence on how machines came to be explained as imitating some aspects of biological cognition. But probably the most problematic borrowing occurred with the label that defines the entire field: “Artificial Intelligence”, coined by the American scientist John McCarthy in the mid-1950s.

In addition to “learning”, as in “machine learning”, there are numerous biological and psychological terms in artificial intelligence; recall, for example, “adaptation”, “computer vision” and “memory”. But there are also many terms whose technical meanings are little or not at all related to the meaning they have in their original scientific context. Take the case of “attention”, an extremely popular term recently introduced into machine learning. In BCS it generally refers to the processes of prioritizing relevant neural or psychological signals to guide adaptive behavior in the current context, and the noun is often accompanied by qualifiers (e.g., selective, spatial, object-based or feature-based attention). The meaning in machine learning is very different, as Wikipedia also testifies: attention is a mechanism, within neural networks, in particular transformer-based models, that “computes ‘soft’ weights for each word, more precisely for its embedding, in the context”. It is a case of polysemy, if not homonymy: the scientific differences between the two concepts are significant and profound, the similarities superficial and negligible; yet the psychological and biological baggage exerts a semantic pull towards greater anthropomorphism. The supposed ability of AI systems to pay attention, learn and hallucinate fuels AI projects, research programs and business strategies; unfortunately, but not surprisingly, the inflated expectations also lead to recurring “AI winters”.
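For readers curious about what the term denotes technically, here is a minimal sketch of the scaled dot-product attention used in transformers (in Python with NumPy; the shapes and values are invented for the example). There is no prioritizing of signals to guide adaptive behavior, only a normalized weighted average over embeddings.

```python
# A minimal sketch of "attention" in the machine-learning sense:
# scaled dot-product attention computing "soft" weights over the
# embeddings of a context. Sizes and values are illustrative only.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    weights = softmax(scores)      # "soft" weights: each row sums to 1
    return weights @ V, weights    # weighted mixture of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional embeddings
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

out, w = attention(Q, K, V)
print(w.round(2))  # each row: how much each token "attends" to the others
```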

On the other hand, cognitive science and neuroscience have borrowed technical and quantifiable constructs from information theory and computer science, framing the brain and mind as computational, information-processing systems. For example, Ulric Neisser, in the book that marks the birth of cognitive psychology, claims that “the task of a psychologist trying to understand human cognition is analogous to that of a man trying to discover how a computer has been programmed. In particular, if the program seems to store and reuse information, he would like to know by what ‘routines’ or ‘procedures’ this is done”. Here too the list of borrowed expressions is long: we speak of “architecture”, “capacity”, “encoding and decoding”, “sampling”, “signal-to-noise ratio”, “transmission” and so on.




The limits of analogy

In many ways, the parallel has been successful, providing a scientific and empirical basis for exploring the properties and biological bases of the human mind. Sometimes, however, it goes too far and leads to a reductionist and impoverished view, in which the subjective qualities of the mind are sidestepped rather than understood. Thus, for example, the patterns of brain activity necessary for or correlated with psychological phenomena are taken as sufficient explanations in themselves, the vivid experiential contents of our mind are flattened into sustained activations or functional states of groups of neurons, and the moment of intentional choice is reduced to activation levels reaching a decision threshold.

This situation generates confusion in non-experts who believe that AI is intelligent, in experts who believe that AI will create superintelligent systems, and in those who do not bother to understand the topic and exploit its dark sides for their own, often financial, interests. Part of the credence enjoyed by the science-fiction image of AI comes from an anthropomorphic interpretation of computational systems, but also from a very superficial and merely computational understanding of the mind.

What can be done to address such a conceptual mess? Probably nothing in terms of language reform: AI and BCS will continue to use their terms, no matter how misleading they may be, how many resources they waste, and how much damage they may cause in the wrong hands or contexts. AI will still describe a computer as an artificial brain with mental attributes, while the cognitive and brain sciences will continue to flatten the brain and mind as if they were a biological computer.




The lesson of horsepower

However, it is the history of language itself that gives us reason to hope. Greater understanding and more facts shape the meaning of words and improve their use. We still use expressions like “the sun rises” and “the sun sets”, even though no one believes that the sun actually moves around our planet: the geocentric model has long been abandoned, and language has kept the expressions but updated their meanings.

We close this article with an analogy that offers reasons for optimism. In the late eighteenth century, during the Industrial Revolution, the Scottish inventor James Watt was instrumental in the development of the steam engine. To attract new customers, he had to demonstrate how his engine surpassed the work of horses, so he measured the work done by draft horses in coal mines. He observed that a mine horse turning a mill wheel could lift about 33,000 pounds to a height of one foot in a minute, and therefore defined the standard unit of one horsepower as 550 foot-pounds per second. The conceptual borrowing worked, and the term “horsepower” (HP) was universally adopted to measure the power of steam engines. Today it remains the standard unit for indicating the mechanical power of an engine, but of course no one looks for hooves and manes among the cylinders. One day, if we are lucky, people will regard AI the way they regard HP, and stop looking for cognitive or psychological properties in computers and computational systems.
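As a quick check of the arithmetic reported above (a trivial sketch, using only the figures in the text):

```python
# Watt's conversion: 33,000 foot-pounds of work per minute,
# expressed as a per-second rate, gives the definition of 1 HP.
work_per_minute_ftlb = 33_000               # figure reported above
one_horsepower = work_per_minute_ftlb / 60  # per-second rate
print(one_horsepower)                       # 550.0 foot-pounds per second
```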

*Center for Digital Ethics, Yale University, USA, and Department of Legal Studies, University of Bologna.
** Wu Tsai Institute and Department of Psychology, Yale University, USA

Translation and summary by Bruno Ruffilli. The original article, longer and in English, appeared in the journal Minds and Machines 34, 5 (2024); the full version is available here.




