
Unveiling the Deceptive Influence of AI: How Machine Learning Leads People to Mistakes

Don’t follow the machine: how does AI teach people to make mistakes and why do we fall for its tricks?

A biased algorithm is not the most reliable assistant.

A study conducted by psychologists at the University of Deusto in Spain showed how artificial intelligence (AI) influences the way we think.

The achievements of artificial intelligence systems (for example, the ability to hold a conversation with a person on equal terms) have made the technology seem highly convincing and trustworthy in our eyes. Many companies are actively introducing neural networks and machine learning into their workflows to make life easier for employees.

Despite all the advantages, the results that neural networks produce can be quite biased. It is important to remember that AI models are trained on material created by humans: if the input data contains errors, the algorithm will reproduce them.
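To make the point concrete, here is a minimal sketch of how a systematic error in human-made labels is reproduced by a model trained on them. The toy diagnosis task, the symptom names, and the choice of classifier are invented for illustration; this is not the setup used in the study.

# Sketch: errors in human-made training labels propagate into the model.
# The "symptoms" and labelling rule below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000

# Two binary "symptoms"; by construction, the disease is present
# only when symptom_a is present.
symptom_a = rng.integers(0, 2, n)
symptom_b = rng.integers(0, 2, n)
X = np.column_stack([symptom_a, symptom_b])
true_label = symptom_a.copy()

# Biased human labelling: whenever symptom_b is present, the annotator
# (wrongly) marks the patient as sick as well.
biased_label = np.where(symptom_b == 1, 1, true_label)

model = DecisionTreeClassifier().fit(X, biased_label)

# A healthy patient who only shows the irrelevant symptom_b:
test_case = np.array([[0, 1]])
print(model.predict(test_case))  # -> [1]: the model repeats the annotator's error

The model has no way of knowing which labels were mistakes, so the annotator's bias simply becomes part of its decision rule.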

As part of the study, volunteers had to diagnose a patient with a fictitious disease. Participants were divided into two groups: some used AI tips, while others made decisions on their own.

The fake algorithm (in reality, the subjects were interacting with a fully scripted program) deliberately made the same systematic errors. Later, when the AI assistant was switched off, participants kept following the same flawed logic and began to make similar mistakes on their own.

This effect was not observed in the control group.
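For illustration only, the "fake algorithm" described above can be thought of as something like the following scripted rule: a fully controlled program that always repeats the same planted error. The disease, symptoms, and function names here are hypothetical and are not taken from the paper.

# Sketch of a scripted "assistant" that is fully controlled by the
# experimenters and always makes the same planted mistake.
from dataclasses import dataclass

@dataclass
class Patient:
    has_symptom_a: bool  # the symptom that actually indicates the disease
    has_symptom_b: bool  # an irrelevant symptom

def true_diagnosis(p: Patient) -> str:
    return "positive" if p.has_symptom_a else "negative"

def scripted_assistant(p: Patient) -> str:
    # Pretends to be an AI, but systematically calls symptom-B patients
    # positive, which is wrong by construction.
    if p.has_symptom_b:
        return "positive"  # deliberate, repeatable error
    return true_diagnosis(p)

p = Patient(has_symptom_a=False, has_symptom_b=True)
print("ground truth:", true_diagnosis(p))        # negative
print("assistant says:", scripted_assistant(p))  # positive (the planted bias)

Because the error is deterministic rather than random, participants who trust the assistant can internalise the flawed rule and keep applying it even after the assistant is gone.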

The implication is clear: if models that are trained on open data and presented as reliable systematically feed us information, that information will be stored in our memory regardless of whether it is accurate.

Not only do we unknowingly spread misinformation online, but we also risk becoming its victims ourselves by adopting the biases of an "authoritative" system. The result is a kind of vicious circle that can only be broken by careful fact-checking and stricter safeguards from developers.

The results of the study were published in Scientific Reports, an open-access journal from the Nature Publishing Group.


