
AI often knows the right answer – even if it answers incorrectly

Large language models (LLMs), on which chatbots like ChatGPT are based, are notorious for their hallucinations. In AI, this refers to convincingly phrased but factually incorrect answers to sometimes simple queries.

Combating AI hallucinations

Researchers and companies have long been trying to combat the hallucinations of AI systems. Microsoft, for example, recently introduced Correction, a tool intended to check AI answers for accuracy.


A study by researchers at the Technion in Haifa, Israel, in which Apple and Google were also involved, has now taken a closer look at the inner workings of LLMs. In doing so, the researchers made interesting discoveries that could make it easier to correct AI errors in the future.

AI systems know more than you think

The main finding is already hidden in the title of the study: “LLMs Know More Than They Show”. According to the researchers, AI systems often “know” the right answer even though they answer the question incorrectly.

This phenomenon is probably due to the fact that large language models are trained to predict the words most likely to follow – not necessarily the ones that are correct for the case at hand.

Search for correct response tokens

To analyze the inner workings of AI systems, the researchers developed a new method, as The Decoder reports. It relies on so-called correct response tokens. Such a token would be the word “Paris” in a longer answer to the question about the capital of France.

According to the researchers, these tokens carry the most information about whether an answer is correct or incorrect. It quickly became clear that the AI systems often encoded the correct answer internally but still output a wrong one. They thus hold more information than they reveal, according to the study.
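The core idea – reading a model’s internal state at the answer token to predict whether the answer is correct – can be sketched with a simple linear probe. The sketch below is an illustration under stated assumptions, not the study’s actual code: it uses synthetic toy vectors in place of real LLM hidden states, and the helper `fake_hidden_state` is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

d = 64   # toy hidden-state dimension
n = 200  # number of (question, answer) examples

# Assumption for this sketch: hidden states of correct and incorrect
# answers separate along some direction in activation space, which is
# the signal a linear probe can pick up.
correct_dir = rng.normal(size=d)

def fake_hidden_state(is_correct: bool) -> np.ndarray:
    """Hypothetical stand-in for the model's hidden state at the
    correct-response token (e.g. 'Paris' in an answer about France)."""
    noise = rng.normal(size=d)
    return noise + (1.5 * correct_dir if is_correct else -1.5 * correct_dir)

labels = rng.integers(0, 2, size=n)
states = np.stack([fake_hidden_state(bool(y)) for y in labels])

# Linear probe: predict answer correctness from the hidden state alone.
probe = LogisticRegression(max_iter=1000).fit(states, labels)
print(f"probe accuracy: {probe.score(states, labels):.2f}")
```

In a real setting, the toy vectors would be replaced by activations extracted from an LLM at the answer token; high probe accuracy there is what supports the claim that the model “knows” more than its output shows.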

New approaches to error correction

It also turned out that the AI was particularly good at detecting errors when the task types were similar. For the researchers, this is a sign that the models develop specialized skills for handling certain types of information. These findings could lead to new approaches for improving the reliability and accuracy of AI systems.

For critical observers, the surprising study results raise fundamental questions, for example about the decision-making processes within LLMs. Are the AI results influenced by factors other than simply predicting the most likely token, asks Silverwave founder Pete Weishaupt.

Doubts about the cause of hallucinations

Until now, says Weishaupt, it was assumed that hallucinations were due to AI systems not being sufficiently trained or not being able to generalize knowledge.


The research now points to a much more nuanced picture, “where LLMs may be making conscious decisions about the information they present” – even if this means inaccuracies or errors.


