I think for now it's simply a consequence of what is actually happening. You generate an image, and just like drawing it or using CGI, it is an attempt to capture reality.
We are extremely well trained to recognize certain cues, all of which should be parameters in those processes. Some are obvious; others are subtle and not always quantifiable, even for someone who senses that something is off.
CGI (as in rendering a 3D scene) has an advantage here, because many of those parameters are explicitly present and have interdependencies. Think of ray tracing and materials.
AI approaches it from a completely different angle and essentially guesses pixels, so those interdependencies do not really exist in the process. The chance that something is not quite right grows rapidly as an image becomes more complex. Lighting, for example, affects everything, and an error somewhere is immediately noticeable to most people.
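To make the "shared parameter" point concrete, here is a minimal Lambertian shading sketch (my own illustration, not something from the original comment): in a renderer, one light direction is a single shared parameter, so the brightness of every surface is derived from it consistently. A pixel-guessing model has no such shared variable to keep surfaces in agreement.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir):
    """Diffuse brightness of a surface: clamp(dot(N, L), 0, 1)."""
    n = normalize(normal)
    l = normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

light = (1.0, 1.0, 0.0)          # one shared light direction for the whole scene
floor_normal = (0.0, 1.0, 0.0)   # upward-facing surface
wall_normal = (1.0, 0.0, 0.0)    # sideways-facing surface

# Both surfaces are lit by the same light, so moving `light` changes
# both results together; they can never silently disagree.
print(lambert(floor_normal, light))
print(lambert(wall_normal, light))
```

Move the light and every surface updates in lockstep; that coupling is exactly what a per-pixel guess lacks.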
And then there is of course the big pitfall of AI that we are slowly heading towards: the training data. As you say, AI art has a recognizable style (especially for non-realistic images), so there is a good chance that the next iteration of training data will contain the very images that were generated earlier. After all, a number of sites have now made agreements to be used as a source, and those sites contain these pictures.
Result: a feedback loop that keeps growing stronger.
AI picture -> training -> new AI picture -> back to training.
The more people use it, the more of those typical pictures are produced and fed back in, and the bias towards them increases.
And if we all stop making things ourselves and use AI for everything, we will stand still.
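The loop above can be sketched as a toy simulation (an assumption-laden illustration, not a real training pipeline): suppose some fixed fraction of each new training set is AI output, and that output over-represents the current dominant style. The share of the "typical AI style" then grows with every iteration.

```python
def feedback_loop(ai_share_of_new_data=0.5, bias=0.8, generations=5):
    """Track the fraction of the 'typical AI style' in the training set.

    Hypothetical model: AI output over-represents the dominant style by a
    bias factor, and a fixed share of each generation's output is folded
    back into the next training set.
    """
    style_fraction = 0.1  # initial share of the typical style in the data
    history = [style_fraction]
    for _ in range(generations):
        # AI images skew towards the style that already dominates
        ai_fraction = min(1.0, style_fraction
                          + bias * style_fraction * (1 - style_fraction))
        # Next training set: remaining human data mixed with AI output
        style_fraction = ((1 - ai_share_of_new_data) * style_fraction
                          + ai_share_of_new_data * ai_fraction)
        history.append(style_fraction)
    return history

print(feedback_loop())  # the style's share rises every generation
```

The exact numbers are made up; the point is only that once generated images re-enter the training mix, the bias compounds instead of averaging out.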
2024-03-08 10:59:35
Beta version of Adobe Express app for Android and iOS includes Firefly tool