ChatGPT recently suffered a serious glitch in which the AI model started responding to multiple users in complete gibberish. Reports quickly piled up, especially on Reddit, of ChatGPT going off on pointless ramblings, getting stuck in loops, or mixing languages into strange hybrids. OpenAI noticed the problem, identified the cause, and had it fixed within a few hours.
ChatGPT malfunctions: stuck in loops, sputtering gibberish
One Reddit report showed ChatGPT suddenly speaking incoherently, blurting out random words, snippets of what looked like training data, and even different languages. In another report, a user shared several screenshots of ChatGPT oscillating between hallucination and lucidity: after talking gibberish, it got stuck in a loop before ending its response. When the user pointed this out, it apologized, then went right back to talking nonsense. The same behavior recurred while the screenshots were being taken.
Another particularly interesting post showed ChatGPT stuck in a different kind of loop, this one self-perpetuating. Asked to name the world’s largest city beginning with the letter ‘A’, ChatGPT would name a city, declare that it wasn’t the right answer, and try again, endlessly. Commenters who tried the prompt in their own chats reported similar results. Another Reddit post concerned GPT-4, the model available through OpenAI’s premium subscription, and showed it hallucinating as well; again, several users shared similar results in the comments.
OpenAI explains why this happened
OpenAI took note of the malfunctions and quickly identified the problem. Within hours, the company’s status page reported that ChatGPT was operating normally, and the next day it published an explanation of what had gone wrong.
Apparently, a recent optimization had broken things. OpenAI explained in simple terms how ChatGPT works and where the failure occurred:
“LLMs (large language models) generate answers by randomly selecting words based in part on probabilities. Their ‘language’ consists of numbers that correspond to tokens. In this case, the bug was in the step where the model chooses these numbers. As if lost in translation, the model picked slightly wrong numbers, which produced word sequences that made no sense.”
In short, ChatGPT mapped to the wrong words and produced nonsense. Think of it like throwing darts at a big dictionary and making a sentence out of the words the darts hit.
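To make the explanation concrete, here is a toy Python sketch of the idea: a model samples token IDs according to probabilities, and each ID maps to a word. The vocabulary, weights, and the off-by-one "corruption" are invented for illustration; this is not OpenAI's actual tokenizer or sampling code, just a minimal demonstration of how choosing slightly wrong numbers yields a nonsense word sequence.

```python
import random

# Illustrative toy vocabulary: token IDs map to words.
vocab = {0: "the", 1: "cat", 2: "sat", 3: "on", 4: "a", 5: "mat"}

def sample_tokens(weights, n, seed=0):
    """Sample n token IDs according to the given probabilities."""
    rng = random.Random(seed)
    ids = list(vocab)
    return [rng.choices(ids, weights=weights, k=1)[0] for _ in range(n)]

weights = [0.3, 0.2, 0.2, 0.15, 0.1, 0.05]
good_ids = sample_tokens(weights, 5)

# Simulate the bug: at the "choose the number" step, each ID
# comes out slightly wrong, so it maps to the wrong token.
bad_ids = [(i + 1) % len(vocab) for i in good_ids]

print("intended:", " ".join(vocab[i] for i in good_ids))
print("buggy:   ", " ".join(vocab[i] for i in bad_ids))
```

Even though the sampling step itself behaved sensibly, every slightly-off ID lands on an unrelated word, which is why affected responses read as fluent-looking gibberish rather than ordinary errors.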
2024-02-25 11:11:16