The stage was set for researchers from different perspectives to put forward arguments ranging from "this could be metaconsciousness" to "this has nothing to do with consciousness; it is simply a product of training."
I get excited whenever a new model appears to challenge GPT-4's reign. I believe we need more and more models to reduce the concentration of power, but I am skeptical when unexpected behaviors appear, because they are often more expected than we imagine.
Remember when GPT-4 was launched and many people were shocked because the model had supposedly "lied" that it was blind to convince a human to solve a CAPTCHA for it?
What happened with Claude may have been something similar: behavior learned from training. It is quite possible that, at some stage, humans taught the model to handle distracting texts in exactly this way.
This situation shows how difficult it still is to evaluate these models. And from now on, things will only get harder. Every new AI model that is released will reignite the debate over whether or not it is an AGI. The problem is that there is still no consensus on what AGI is or how to evaluate it.
We will need new tests to evaluate not only a possible AGI but, above all, the performance of models from new perspectives. Many people around the world are working on this, and we will have news soon. Meanwhile, the important thing is not to fall for the "siren song" of an AI that, by mastering language, exhibits behaviors that deceive us in a game of imitation and seduction.
2024-03-07 10:09:45
Diogo Cortiz