
Why AI models collapse

As previously reported here, all kinds of AI models collapse when trained with input they themselves have produced. But why do AI systems poison themselves with their own creations?

It has long been known that AI systems need fresh, human-produced input for their further development if they are not to degenerate. A recent article shows just how universal the problem is: it affects text-based systems like ChatGPT as well as image-generating neural networks with a completely different architecture.

The more of its own images (“poison”) a generative AI is trained on, the fewer training generations it takes for its output to degenerate. (Source: “Nepotistically Trained Generative-AI Models Collapse” by Matyáš Boháček and Hany Farid)

We know the problem from vampire films: vampires bite non-vampires, suck their blood and turn their victims into vampires too. If it weren’t for the Van Helsings fighting them, the whole of humanity would soon be undead – but what then? What would the undead feed on if there were no more living people? So aren’t the vampire hunters actually the ones ensuring the survival of the vampires by preventing their triumph? The manufacturers of AI systems have a similar problem: they have already scoured almost the entire Internet for training material and need new input to improve their systems further, but newly posted images and texts are increasingly themselves AI products. The AI models are therefore often fed their own output as input.

Anyone who wants to develop further needs challenges, and anyone who is fed only their own creations will inevitably stagnate. A generative AI will not pull itself out of the swamp by its own hair and invent something new; instead it will settle into an ever-narrower corridor of the current mainstream. That an AI trained on AI images produces increasingly boring results is obvious enough, but why does it instead generate increasingly abstruse ones?

Once an AI has been “poisoned” with its own output, it cannot always be cured, even by feeding it human-made input over several further training generations. (Source: “Nepotistically Trained Generative-AI Models Collapse” by Matyáš Boháček and Hany Farid)

Training an AI on found examples tends to reinforce probable results and weaken less probable (but still possible) ones. Now, every AI develops certain idiosyncrasies – a style, if you will – that normally stay below our perception threshold; we simply pay attention to other things. But if images or texts bearing such idiosyncrasies are fed back as teaching material for the next AI generation, they are amplified: they now appear in every example and must therefore be treated as highly probable. An AI that emphasizes the idiosyncrasies of its predecessor is rewarded for doing so during training, while its actual task increasingly fades into the background. After just a few generations, the AI cultivates nothing but its own eccentricities and is worthless for our purposes – it has collapsed.
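The mechanism can be made tangible with a toy simulation. The following Python sketch is not taken from the paper cited above; it assumes a drastically simplified “model” that merely memorizes the empirical distribution of its training set and is retrained in each generation exclusively on samples from its predecessor.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Human" data: 20 symbols, three common ones plus a long tail of rare ones.
true_p = np.array([0.3, 0.2, 0.1] + [0.4 / 17] * 17)
n_symbols = len(true_p)

samples = rng.choice(n_symbols, size=100, p=true_p)

for gen in range(1, 16):
    # "Training": the model memorizes the empirical distribution of its input.
    counts = np.bincount(samples, minlength=n_symbols)
    p_hat = counts / counts.sum()
    # The next generation is trained purely on this model's own output.
    samples = rng.choice(n_symbols, size=100, p=p_hat)
    print(f"generation {gen:2d}: {np.count_nonzero(p_hat)}/{n_symbols} symbols survive")
```

Rare symbols that happen to miss a single generation’s sample receive probability zero and never return; the freed-up probability mass flows to the model’s favorite symbols, whose quirks are reinforced with every generation. Under these assumptions, collapse is not an accident but the expected outcome.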

Even if you identify the problem and feed human-made training material back into the system during further training, the AI will not fully recover, as the examples above show: in some cases the quality of the generated images improves again, but some of the acquired oddities persist.
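Staying with the toy model from above, the incomplete cure can be reproduced as well. The 30/70 mixing ratio below is an arbitrary assumption, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

true_p = np.array([0.3, 0.2, 0.1] + [0.4 / 17] * 17)
n = len(true_p)

def retrain(samples):
    # Toy 'model': memorize the empirical distribution of the training set.
    return np.bincount(samples, minlength=n) / len(samples)

# Phase 1: ten generations trained purely on their own output.
samples = rng.choice(n, size=100, p=true_p)
for _ in range(10):
    p_hat = retrain(samples)
    samples = rng.choice(n, size=100, p=p_hat)
print("after collapse:", np.count_nonzero(p_hat), "of", n, "symbols left")

# Phase 2: the attempted cure. Every further generation sees 30 fresh
# human samples alongside 70 samples of the model's own output.
for _ in range(10):
    mixed = np.concatenate([
        rng.choice(n, size=70, p=p_hat),   # the model's own output
        rng.choice(n, size=30, p=true_p),  # fresh human-made material
    ])
    p_hat = retrain(mixed)
print("after cure:", np.count_nonzero(p_hat), "of", n, "symbols left")
print("L1 distance to the human distribution:",
      round(float(np.abs(p_hat - true_p).sum()), 3))
```

Extinct symbols can only return through the human share of the data, and the distribution drifts back toward the human one only gradually, because 70 percent of each training set still carries the collapsed model’s skew – a crude analogue of the partial recovery visible in the examples above.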

To prevent such a development, the training material could be screened beforehand, but that would be laborious and hardly practical given the required size of a training corpus. It would therefore be in the manufacturers’ own interest to mark their products as AI-generated, whether via digitally signed metadata following the CAI/C2PA standard or via embedded watermarks. Only then could they protect their AI models from poisoning themselves with their own output.
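On the training side, such markings would allow a simple filter. The sketch below is purely illustrative: `c2pa_ai_flag` and `watermark_score` are hypothetical stand-ins for the output of a real C2PA manifest parser and a watermark detector, neither of which is implemented here.

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    path: str
    c2pa_ai_flag: bool      # hypothetical: "AI-generated" flag from a C2PA manifest
    watermark_score: float  # hypothetical: confidence of a watermark detector

def is_ai_generated(rec: ImageRecord, threshold: float = 0.9) -> bool:
    # Treat an image as AI-made if either provenance signal says so.
    return rec.c2pa_ai_flag or rec.watermark_score >= threshold

candidates = [
    ImageRecord("photo_cat.jpg", False, 0.02),  # signed camera photo
    ImageRecord("gen_render.png", True, 0.97),  # properly labeled AI image
    ImageRecord("stripped.png", False, 0.95),   # metadata removed, watermark intact
]

training_corpus = [r for r in candidates if not is_ai_generated(r)]
print([r.path for r in training_corpus])  # -> ['photo_cat.jpg']
```

The third record illustrates why watermarks matter as a second line of defense: signed metadata is lost as soon as a file is stripped or re-encoded, whereas an embedded watermark is designed to survive such processing.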

There is hope that the fatal self-reinforcement of undesirable characteristics could be avoided if an AI were trained on the material of another AI with different idiosyncrasies. However, this would probably only delay the collapse, as the AI models of different manufacturers would grow more and more similar – in the end, to return to the vampire comparison, only “fresh blood” will help.
