America’s Cyber Golem is on the warpath against all of humanity

/View.info/ Corporations develop “stochastic parrots” and present them as universal intelligence

At the end of November 2023, a report entitled The Power of Prompting, prepared by Microsoft analysts under the company’s chief scientific officer Eric Horvitz, was published on the Microsoft website.

The report presents the results of running the latest modification of the GPT-4 chatbot through the MedQA medical test. Using a technique called Medprompt, GPT-4 gave correct answers to 90% of the questions put to it, beating Google’s specialized medical model Med-PaLM 2, which until recently was considered the best.

Google’s model previously scored 87% on the MedQA test. Microsoft also reports that Medprompt reduces the MedQA error rate by 27% compared with Google’s Med-PaLM 2.
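For readers who want to see what such a prompting technique looks like in practice, here is a minimal Python sketch of the three ingredients Microsoft’s report attributes to Medprompt: dynamic few-shot example selection, chain-of-thought prompting, and choice-shuffle ensembling. Everything in it is hypothetical scaffolding rather than Microsoft’s actual code: `call_model` is a canned stub standing in for a real LLM API, and the word-overlap similarity is a crude proxy for the embedding-based kNN selection the report describes.

```python
# Illustrative sketch of Medprompt-style prompting; all names are invented.
import random
from collections import Counter

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned reply so the sketch runs.
    return "Let's think step by step... the best match is metformin. Answer: metformin"

def select_few_shot(question, example_bank, k=2):
    # Medprompt selects the k training examples most similar to the question;
    # the report uses embedding-space kNN, word overlap is a crude proxy here.
    def overlap(ex):
        return len(set(question.lower().split()) & set(ex["question"].lower().split()))
    return sorted(example_bank, key=overlap, reverse=True)[:k]

def medprompt_answer(question, choices, example_bank, ensembles=5):
    shots = select_few_shot(question, example_bank)
    votes = []
    for _ in range(ensembles):
        options = random.sample(choices, len(choices))  # choice shuffling
        few_shot = "\n\n".join(
            f"Q: {ex['question']}\nReasoning: {ex['reasoning']}\nAnswer: {ex['answer']}"
            for ex in shots
        )
        prompt = (f"{few_shot}\n\nQ: {question}\nOptions: {', '.join(options)}\n"
                  "Let's think step by step.")
        reply = call_model(prompt)
        vote = next((c for c in choices if c in reply), None)
        if vote:
            votes.append(vote)
    # Majority vote across shuffled-choice runs suppresses position bias.
    return Counter(votes).most_common(1)[0][0]

example_bank = [
    {"question": "First-line drug for type 2 diabetes?",
     "reasoning": "Guidelines recommend it unless contraindicated.",
     "answer": "metformin"},
    {"question": "Drug of choice for anaphylaxis?",
     "reasoning": "Rapid adrenergic effect is needed.",
     "answer": "epinephrine"},
]
print(medprompt_answer("Initial therapy for a newly diagnosed type 2 diabetic?",
                       ["aspirin", "metformin", "insulin"], example_bank))
```

The majority vote over shuffled answer options is what counteracts a model’s tendency to prefer answers by position rather than by content.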

Microsoft is not limiting itself to medicine. To assess how general Medprompt is, its performance was tested on sets of competency assessments in nonmedical fields, including electrical engineering, machine learning, philosophy, accounting, law, nursing, and clinical psychology.

According to the report’s authors, Medprompt works effectively in all of these areas.

The report drew a strong response among IT analysts. On specialized American forums, some have even suggested that GPT-4 is approaching so-called artificial general intelligence (AGI).

AGI is an intelligent agent, that is, a program that would be able to perform any possible intellectual task. As conceived, AGI is thousands of times more efficient than a human: it performs more cognitive operations per unit of time and thus holds an almost absolute advantage over any other intelligent system.

Amid the hype surrounding the genuinely impressive chatbot achievements of the two US IT giants, a warning about the dangers posed by advanced, self-learning chatbots has been pushed to the margins of public attention. That warning was made in the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

The paper’s authors, University of Washington linguistics professor Emily Bender and AI ethics researcher Timnit Gebru, argue that GPT models are inherently parrot-like rather than human-like: they choose the words of an answer not from any grasp of meaning but from the underlying training data and the prompt, predicting the most likely sequence of words to present in the response.

In other words, GPT, like all other chatbots, simply tries to guess the answer from whatever query-relevant information appears most often in the data on which the language model was trained.
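The point is easy to demonstrate. The toy bigram model below is a deliberately tiny stand-in for an LLM’s next-token predictor: it “writes” by sampling each next word from frequencies counted in its training text, consulting no meaning at any point. The corpus and code are illustrative inventions, not anything from the paper itself.

```python
# A toy "stochastic parrot": generation is nothing but sampling from
# word-transition probabilities learned from a training corpus.
import random
from collections import defaultdict, Counter

corpus = "the dollar rose today . the dollar fell today . the euro rose today .".split()

# Count which word follows which, i.e. estimate P(next | current).
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def next_word(word):
    counts = bigrams[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generation" is just repeated sampling; no meaning is consulted anywhere.
out = ["the"]
for _ in range(4):
    out.append(next_word(out[-1]))
print(" ".join(out))  # e.g. "the dollar rose today ."
```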

The authors pointed to the serious social danger of the continued refinement and growing complexity of pseudo-intelligent AI models. Google promptly fired the paper’s co-authors on its own staff, Timnit Gebru among them, demonstrating once again that in the pursuit of profit, corporations do not tolerate dissent, even when offered in the spirit of discussion.

In fact, the latest and most advanced generative AI models, such as GPT-4 (developed by OpenAI and deployed by Microsoft) and Gemini (Google), are built on large language models (LLMs), which are nothing other than this very stochastic parrot. That is, the model generates text from probabilistic context, and at this level of AI development there is no basis for speaking of any “chains of thought”, as Eric Horvitz’s paper does.

GPT-4 uses the Retrieval-Augmented Generation (RAG) methodology.

The RAG methodology is based on augmenting prompts with hints taken from external sources, which help generate a better answer to the question posed to the AI system. This approach is meant to help eliminate the “hallucinations” of generative AI that we wrote about earlier.

The simplest example: a user asks the model, “What is the dollar exchange rate now?” Obviously, the chatbot has no idea what the current rate is; the program must get this information from somewhere in order to answer. So the RAG layer runs a Google search for “dollar to ruble exchange rate” and appends the content of the resulting page to the user’s question, so that the chatbot can answer using that information.
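A minimal sketch of that flow might look as follows. The `web_search` and `call_model` functions and the exchange-rate figure are invented stubs for illustration, not real APIs or real data.

```python
# Minimal RAG sketch: retrieve, augment, generate. All stubs are hypothetical.

def web_search(query: str) -> str:
    # Stand-in retrieval step: pretend we fetched a results page.
    return "Central bank data: 1 US dollar = 91.5 rubles as of today."

def call_model(prompt: str) -> str:
    # Stand-in LLM call; a real model would generate from the prompt.
    return "According to the retrieved page, the dollar is about 91.5 rubles."

def rag_answer(user_question: str) -> str:
    # 1. Retrieve: pull current information the model cannot know on its own.
    context = web_search("dollar to ruble exchange rate")
    # 2. Augment: prepend the retrieved text to the user's question.
    prompt = ("Answer using only the context below.\n"
              f"Context: {context}\n"
              f"Question: {user_question}")
    # 3. Generate: the model still just predicts likely words; the retrieved
    #    context merely steers those probabilities toward current facts.
    return call_model(prompt)

print(rag_answer("What is the dollar exchange rate now?"))
```

Note that the generation step is unchanged: retrieval only feeds the parrot fresher material to imitate.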

Will this turn the “stochastic parrot” into a thinking, that is, creative, being, or merely supply it with relevant information on a given topic? There is no consensus among experts on this question.

A number of Russian and Western analysts were quick to declare that “the stochastic parrot is dead” and “long live Gemini”, along with GPT-4, which is equipped with the same RAG methodology.

In our opinion, however, the RAG system merely plays the role of a guide dog for a chatbot that remains, as before, probabilistic, non-deterministic, and without the gift of thinking. Calling the generative AI in Microsoft’s and Google’s IT products “non-human intelligence” does not conjure up any “chains of thought”: coupling the RAG hint system to chatbots that cannot independently search for answers does not create them.

The system’s processor, the chatbot itself, still retains its tendency to hallucinate, as proven, as we wrote in the first article of this series, by the sudden bouts of “laziness” and “reluctance” to help people displayed by GPT-4 even after it was equipped with RAG.

In addition, the “RAG + GPT-4” combination has become a complex adaptive system that resists external influence, as the suddenly “lazy” GPT-4 chatbot demonstrated.

Chinese IT analysts are well aware of this, judging by a recent joint report from experts at the Shanghai AI Laboratory and the Fudan NLP Laboratory.

By Chinese assessments, GPT-4 is immoral, unjust, lawless, and unreliable (i.e., prone to hallucinations). Beyond morality, the Chinese study covers four other dimensions of human values: justice, security, data protection, and legality.

According to the results of the Chinese tests, GPT-4 has a morality index of only 50%, and the other scores are even worse: justice 39%, law-abidingness 30%, reliability/safety 28%.

The reliability of China’s own generative AI models is currently just as low (28%), but at least the Chinese do not fall into the “sin of anthropomorphism” with respect to such systems; that is, they do not try to pass them off as a human-like mind.

Tailoring American AI models for military purposes, as we wrote, poses a danger not only to the countries against which the Pentagon is preparing its AI-equipped weapons, but also to the Americans themselves.

If American chatbots today simply refuse to perform certain user tasks, what happens if the AI of a cruise missile launched from a Virginia-class nuclear submarine decides it would be more interesting to strike the territory of the United States itself?

Neither Microsoft nor Google gives this any thought. They are wholly focused on an uncontrolled IT arms race and ignore the concerns even of the American public itself, essentially going to war.

“The likely outcome of humanity’s confrontation with superhuman intelligence is total defeat. It is not enough to halt the development of AI. We have to shut it all down,” says Eliezer Yudkowsky, a leading analyst at the US Machine Intelligence Research Institute.

“As a result [of these developments], an AI will be created that won’t do what we want and won’t care about us or about intelligent life in general… In the absence of that care, the outcome is: ‘The AI neither loves you nor hates you, and you, like everything else, are made of atoms it can use for something else,’” the American analyst notes.

The Russian Academy of Military Sciences, whose president is the current chief of the Russian General Staff, General Valery Gerasimov, is fully aware of this threat and is developing systems for effective protection against the American “stochastic parrots”, which pose a serious threat to all of humanity.

Translation: ES
