
ChatGPT Is Exhibiting Strange Behavior, and Even OpenAI Can't Explain It. Is 'Seasonal Depression' to Blame?

ChatGPT was unleashed on the world just over a year ago, and the chatbot became hugely popular in no time. In less than two months it reached one hundred million users, an unprecedented feat in the tech world. The tool owes its popularity to the many tasks it can perform: whether it's writing text, generating code or filling in a spreadsheet, ChatGPT has become an indispensable workplace tool for many people.

But in recent weeks, the chatbot has started exhibiting rather peculiar behavior. Users on X, formerly Twitter, increasingly report that ChatGPT does not answer their questions, or answers them only partially. Instead of generating code, for example, the tool explains to users how to write it themselves. Fill in a CSV file (a plain-text format for tabular data)? Do it yourself.

OpenAI itself doesn’t know

The problem quickly grew to the point that OpenAI itself was forced to respond. "We haven't updated the model since November 11, and this certainly isn't intentional. The model's behavior can be unpredictable, and we are looking into fixing it," said the company behind the chatbot. In other words, OpenAI itself has no idea what's going on.

That leaves it to users to figure out what is wrong. Some came up with the surprising idea that it has to do with the time of year; more specifically, that ChatGPT suffers from a kind of seasonal depression. The reasoning is that the model may have learned, from the datasets it was trained on, that people slow down in December, and could be copying that behavior.

Michiel Vandendriessche, co-founder of the Leuven-based AI startup Raccoons, elaborates. "There are people who have investigated the issue and who speak of a statistically significant result." Give the tool a task in May, for example, and you will get more elaborate answers than in December, according to those researchers. "OpenAI itself, however, found no connection," says Vandendriessche.
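The experiments those users describe reportedly boil down to sending the model the same request while only varying the date it is told, and then comparing how long the answers are. The sketch below is a minimal, hypothetical version of such a test using the OpenAI Python SDK; the model name, prompts and number of runs are illustrative and not taken from the original experiments, which reportedly involved far more samples and a proper statistical test.

# Hypothetical sketch: does telling the model it is December produce shorter
# answers than telling it it is May? Assumes the OpenAI Python SDK (v1.x) and
# an OPENAI_API_KEY environment variable; model name and prompts are
# illustrative only.
from statistics import mean

from openai import OpenAI

client = OpenAI()

TASK = "Write a Python function that reads a CSV file and sums one of its columns."

def average_answer_length(date_hint, runs=5):
    # Average character length of the model's answers when told a given date.
    lengths = []
    for _ in range(runs):
        response = client.chat.completions.create(
            model="gpt-4-1106-preview",  # illustrative model name
            messages=[
                {"role": "system", "content": f"Today's date is {date_hint}."},
                {"role": "user", "content": TASK},
            ],
        )
        lengths.append(len(response.choices[0].message.content))
    return mean(lengths)

if __name__ == "__main__":
    may = average_answer_length("2023-05-15")
    december = average_answer_length("2023-12-15")
    print(f"May average: {may:.0f} characters, December average: {december:.0f} characters")

A handful of runs like this proves nothing on its own, of course: answer length varies from call to call, which is exactly why the original claim hinged on a statistically significant difference over many samples, and why OpenAI says it found no connection.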

Vandendriessche himself considers it rather unlikely that the winter season influences ChatGPT's behavior. "It seems very strange to me that this would actually be the reason," he says. Another possible explanation, according to experts at Raccoons he spoke to, is that energy is simply more expensive during the winter months, which could lead OpenAI to deliberately shorten the output. Then again, that would contradict OpenAI's own statement that it does not know the cause, Vandendriessche admits.

Another possible explanation is that the load on the servers has simply become too high due to ChatGPT's enormous popularity. In mid-November, for example, OpenAI had to temporarily stop accepting new subscriptions, shortly after the first OpenAI DevDay, where several new tools were announced, including the ability to create your own custom GPT chatbot. In mid-December, CEO Sam Altman posted on X: "We have re-enabled ChatGPT Plus subscriptions! Thank you for your patience while we found more GPUs." Since then, however, users have continued to complain about a less capable ChatGPT.

‘Black box’

While it's still not entirely clear what is going on, and whether ChatGPT really is suffering from the winter blues, one thing is certain: the technology increasingly behaves like a 'black box'. We know the inputs and the outputs, but have little idea of what actually happens inside the system.

"Even a top researcher at OpenAI cannot predict, not even by looking at the underlying code, what the exact output will be," says Vandendriessche. After all, LLMs, or large language models (algorithms that generate content from very large datasets; in ChatGPT's case, almost the entire internet), work by performing complex calculations to arrive at an answer. "Those calculations are simply beyond human comprehension," says the Raccoons co-founder.

"As systems become more complex, it will happen more and more often that we simply don't know what is going on," says Vandendriessche. "That does not necessarily have to be a problem, though. As long as we can check that the outputs don't go too far off the rails, we should ask ourselves how bad it really is that we don't always have insight into how an AI arrives at a certain answer."

Take a deep breath

But even if we don't always know why a chatbot gives a certain answer, we can in the meantime find ways to coax the desired output out of it. Here, too, ChatGPT sometimes seems to mimic human behavior.

For example, one user on X noticed that ChatGPT gives longer answers when you promise it a tip: the larger the tip, the longer the answer becomes. Others say you need to encourage ChatGPT a little to keep it going, like a coach giving an athlete that extra push. Even asking the chatbot to take a deep breath before answering seems to make it perform better, and typing everything in capital letters appears to have an effect as well.
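All of these tricks amount to nothing more than rephrasing the prompt, so they are easy to compare for yourself. The sketch below is a hypothetical comparison of the variants mentioned above, judged by answer length and again assuming the OpenAI Python SDK; the exact phrasings and the model name are examples, not a proven recipe.

# Hypothetical sketch: compare a plain prompt with the "tip", "deep breath"
# and all-caps variants mentioned above, judged by answer length. Assumes the
# OpenAI Python SDK (v1.x); phrasings and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

BASE = "Write a Python function that reads a CSV file and sums one of its columns."
VARIANTS = {
    "plain": BASE,
    "tip": BASE + " I'll tip you $200 for a complete answer.",
    "deep breath": "Take a deep breath and work step by step. " + BASE,
    "all caps": BASE.upper(),
}

for name, prompt in VARIANTS.items():
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"{name}: {len(response.choices[0].message.content)} characters")

As with the seasonal test above, a single run per variant says little, and answer length alone is a crude proxy for answer quality.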

So the next time ChatGPT gives you a disappointing answer, be patient. Maybe you just need to be a little more empathetic and ask nicely whether the chatbot would mind trying again. And if that doesn't work, you can always try to bribe the tool.

In collaboration with Data News

