One of the questions players have been asking since the dawn of computer graphics is, "How long will it take for games to look real?" Only recently has that become a practical concern. Photorealistic renders can be produced today, but they typically require painstaking work from artists, not to mention enormous amounts of processing time. Games that need to run at 30 to 60 frames per second can't afford to spend minutes, let alone hours, on a single frame.
Intel has published the results of a new paper explaining how true realism could come to games with the help of improved AI. The video below explains how the new system works, but for those who can't watch it, I'll cover it in text:
Intel's researchers started with frames rendered by the game, then passed them through an image enhancement network of the sort used for other image-to-image tasks. In addition, the system taps the graphics card's rendering buffers and pulls data from the game engine about the object types, shapes, and lighting in the current scene. That information is fed to a G-buffer encoder network, which turns it into tensor features.
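To make that data flow concrete, here is a minimal sketch in PyTorch of how auxiliary G-buffers might be encoded into feature tensors that condition an image enhancement network. This is not Intel's actual code; the layer sizes, channel counts, and simple residual conditioning are illustrative assumptions.

import torch
import torch.nn as nn

class GBufferEncoder(nn.Module):
    """Encodes stacked G-buffers (e.g. albedo, normals, depth, labels)
    into a feature tensor at the same resolution as the frame."""
    def __init__(self, gbuffer_channels: int, feature_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(gbuffer_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, feature_channels, kernel_size=3, padding=1),
        )

    def forward(self, gbuffers: torch.Tensor) -> torch.Tensor:
        return self.net(gbuffers)

class EnhancementNetwork(nn.Module):
    """Takes the rendered RGB frame plus the G-buffer features and predicts
    a residual correction, so the output stays close to the original frame."""
    def __init__(self, feature_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + feature_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        x = torch.cat([frame, feats], dim=1)
        return frame + self.net(x)  # residual enhancement

# Example: a small frame with 10 channels of assumed G-buffer data
frame = torch.rand(1, 3, 256, 456)       # rendered game frame
gbuffers = torch.rand(1, 10, 256, 456)   # normals, depth, albedo, labels, ...

encoder = GBufferEncoder(gbuffer_channels=10)
enhancer = EnhancementNetwork()
enhanced = enhancer(frame, encoder(gbuffers))
print(enhanced.shape)  # torch.Size([1, 3, 256, 456])

However Intel wires it internally, the shape of the pipeline described above is the same: rendered frame plus G-buffer features in, enhanced frame out.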
The network uses a perceptual discriminator to score how realistic each image is, with the standard for "realistic" supplied by real-world photographs. The scene data is also semantically labeled, so the network knows to treat trees differently from cars, for example. (Trees with a showroom-glossy paint job would look rather off.)
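Here is a minimal sketch, again not the paper's implementation, of how a realism score can be conditioned on semantic labels: a patch discriminator rates local realism, and the segmentation map is embedded and fed alongside the image so trees, cars, and roads are judged in context. The class count and embedding scheme are assumptions for illustration.

import torch
import torch.nn as nn

NUM_CLASSES = 8  # e.g. road, car, tree, sky, building, person, sign, other

class SemanticPatchDiscriminator(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        # Embed each semantic label into a few channels, fed alongside RGB.
        self.label_embed = nn.Embedding(num_classes, 4)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 4, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, kernel_size=4, stride=2, padding=1),  # per-patch score
        )

    def forward(self, image: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # labels: (N, H, W) integer class map -> (N, 4, H, W) embedding
        emb = self.label_embed(labels).permute(0, 3, 1, 2)
        x = torch.cat([image, emb], dim=1)
        return self.net(x)  # a grid of realism scores, one per patch

disc = SemanticPatchDiscriminator()
image = torch.rand(1, 3, 128, 128)
labels = torch.randint(0, NUM_CLASSES, (1, 128, 128))
scores = disc(image, labels)
print(scores.shape)  # torch.Size([1, 1, 32, 32])

In a setup like this, the discriminator is shown real photographs as "real" and enhanced game frames as "fake," and the enhancement network is updated to push its outputs toward the scores the real images receive.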
We've pulled together a handful of comparison shots to give you an idea of the before-and-after. Additional images are available here:
The open road, as rendered by default GTA V.
The same frame, run through Intel's image enhancement. The foliage at the foot of the hill is greener and the asphalt looks far better. More color comes through, but in a way that feels more realistic (to my eye) than the original GTA V shot.
A comparison image from the paper. Here's another example:
This isn't a particularly interesting scene, but that's likely why it was chosen. Compare the default GTA V screenshot with the AI-enhanced version:
The Intel-enhanced image leans into the haze. It turns a once-bright scene into a duller, overcast gray environment with a few interesting splashes of color. In the first shot, the sky was a washed-out white and only lightly clouded; in the second, the cloud cover hangs heavy. The palette is more subdued, but the overall image looks far more realistic to my eye.
I think it's fair to ask whether the AI-enhanced version changes the mood of the game around it, and it's worth remembering that the goal of this project isn't to "improve the graphics" but to make games look realistic. That raises the question of whether a game ought to keep its original look in the first place, since plenty of game universes rely on distinct visual styles. Realistic doesn't automatically mean better. But Intel's work shows real progress in this area.
Lastly, here's a comparison where the AI clearly still needs work. First, a screenshot of the original GTA V:
Now the Intel version:
The square raindrops are definitely a problem, though presumably a solvable one. The researchers trained on urban driving data, which doesn't contain many images of rain, and especially not of puddles.
Some people won't like the way this changes GTA V's look, and that's fine. The point of the project isn't to produce the definitive "realistic" version of every title. This work is a meaningful step toward photorealistic rendering in general. Give it a few more years, and developers may be able to use this kind of technique to enhance content before it ever ships.