New Technology Unveiled by MIT Researchers to Combat Manipulation of AI-Generated Images

Generative artificial intelligence has made it unnecessary to be an expert to create and manipulate images with results that are indistinguishable from reality. But this creative explosion has a dark side: it allows anyone to take the photos you have uploaded to your favorite social network and manipulate them to harass or blackmail you. Despite the rise in this type of crime, until now there has been no effective system to prevent these malicious uses of AI. MIT researchers have just unveiled a new technology that they say can distort such manipulations and prevent them from being passed off as real.

The latest versions of generative artificial intelligences such as Midjourney or Stable Diffusion can already turn text into images of extraordinary photorealism. They can also modify existing images, producing manipulated photos of such quality that even experts would struggle to distinguish them from the real thing.

As we explained in the Control Z episode 'The End of Reality' (embedded below), both photos and videos will soon cease to be reliable documentary evidence of our activities. This technology can be used for good (some say it is the greatest creative leap humanity has taken since the 19th century) or for evil: creating images that frame an innocent person for a crime, or fabricating situations with which to harass teenagers in high school.

"The rapidity of these actions compounds the problem. Even when the deception is finally discovered, the damage, whether reputational, emotional, or financial, has often already been done. This is a reality for victims at all levels, from individuals harassed at school to society-wide manipulation," says Hadi Salman, an MIT CSAIL PhD student and lead author of a new study that proposes a tool to address this problem: a technology they call PhotoGuard.

How does it work?

AI models don't see images as a whole, the way we humans do, but as a complex set of mathematical data describing the color and position of every pixel that makes up the image. PhotoGuard employs two methods to 'immunize' photos. The first introduces small alterations to this mathematical representation of the image that are invisible to the human eye. If someone then tries to use an AI like Stable Diffusion to manipulate an image treated with this system, the result is a distorted version of the original photo that looks obviously unreal to us.
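The idea behind this first method can be sketched in miniature: find a tiny, bounded perturbation that pushes the image's mathematical representation (its latent) as far as possible from the original, so any AI edit built on that latent comes out garbled. Everything below is an illustrative assumption, not the researchers' code; a fixed linear map stands in for Stable Diffusion's image encoder.

```python
# Toy sketch of an "encoder attack": signed-gradient ascent on the
# distance between the perturbed latent and the original latent,
# with every per-pixel change clipped to [-eps, eps] so the
# perturbation would stay invisible in a real image.

def encode(pixels, weights):
    """Stand-in 'encoder': projects the pixels to a 1-D latent."""
    return sum(w * p for w, p in zip(weights, pixels))

def encoder_attack(pixels, weights, eps=0.03, step=0.01, iters=50):
    """Finds a bounded perturbation delta maximizing the latent shift."""
    delta = [0.0] * len(pixels)
    z0 = encode(pixels, weights)
    for _ in range(iters):
        z = encode([p + d for p, d in zip(pixels, delta)], weights)
        # d/d(delta_i) of (z - z0)^2 is 2*(z - z0)*w_i
        grad = [2.0 * (z - z0) * w for w in weights]
        delta = [max(-eps, min(eps, d + step * (1 if g >= 0 else -1)))
                 for d, g in zip(delta, grad)]
    return delta

pixels = [0.2, 0.5, 0.9, 0.1]        # a 4-"pixel" toy image
weights = [0.7, -0.3, 0.5, 0.1]      # stand-in encoder weights

delta = encoder_attack(pixels, weights)
shift = abs(encode([p + d for p, d in zip(pixels, delta)], weights)
            - encode(pixels, weights))
```

The attack is untargeted: it does not aim the latent anywhere in particular, it only drives it away from the truth, which is why edits of an immunized photo come out distorted rather than replaced by anything specific.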

In the example the researchers present, they take an image of the South African comedian Trevor Noah. With PhotoGuard applied, any attempt to use the AI to modify Noah's photo results in a gray block.

The second technique is called the diffusion attack, and the researchers say it is much more complex. The diffusion attack alters the way AI models generate images, introducing hidden signals that completely change how they are processed. Returning to the image of Trevor Noah, the team managed to make the AI ignore its text prompts and generate a gray image that is clearly artificial.
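Unlike the first method, this attack is targeted: the perturbation is optimized end to end so that the model's output collapses toward a fixed target, such as the gray image described above. The sketch below is a pure-Python toy under heavy assumptions, with a per-pixel linear transform standing in for the full diffusion model.

```python
# Toy sketch of a targeted "diffusion attack": signed-gradient
# descent on the distance between the generator's output and a
# fixed gray target, with every per-pixel change clipped to
# [-eps, eps]. Not the researchers' code; purely illustrative.

def generate(pixels, weights):
    """Stand-in 'generator': per-pixel transform of the input."""
    return [w * p for w, p in zip(weights, pixels)]

def diffusion_attack(pixels, weights, target, eps=0.1, step=0.02, iters=100):
    """Optimizes a bounded perturbation so the output nears the target."""
    delta = [0.0] * len(pixels)
    for _ in range(iters):
        out = generate([p + d for p, d in zip(pixels, delta)], weights)
        # d/d(delta_i) of (out_i - target_i)^2 is 2*(out_i - target_i)*w_i
        grad = [2.0 * (o - t) * w for o, t, w in zip(out, target, weights)]
        delta = [max(-eps, min(eps, d - step * (1 if g >= 0 else -1)))
                 for d, g in zip(delta, grad)]
    return delta

pixels = [0.2, 0.5, 0.9, 0.1]        # a 4-"pixel" toy image
weights = [0.7, -0.3, 0.5, 0.1]      # stand-in generator weights
target = [0.5] * 4                   # the flat "gray block"

delta = diffusion_attack(pixels, weights, target)
before = sum(abs(o - t) for o, t in zip(generate(pixels, weights), target))
after = sum(abs(o - t) for o, t in
            zip(generate([p + d for p, d in zip(pixels, delta)], weights),
                target))
```

In the real system this end-to-end optimization must backpropagate through the entire diffusion process, which is why the researchers describe it as far more complex and computationally expensive than the encoder attack.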

A test carried out with the image of Trevor Noah. (MIT)

"The AI progress we're seeing is really impressive, but it allows for beneficial and malicious uses alike," says Alexander Madry, an MIT professor and CSAIL principal investigator who is also an author of the paper. "It is therefore urgent that we work to identify and mitigate the latter. I see PhotoGuard as our small contribution to that important effort."

A good solution, for now

The researchers acknowledge that, despite its effectiveness, the system has limitations. For now it only works reliably against Stable Diffusion, and it cannot protect older photos that have already been uploaded to the internet. Still, it can complement other techniques, such as embedding invisible watermarks in images, although according to the experts we interviewed for Control Z, those watermarks can also be easily forged by criminals.

The researchers believe the most effective approach would be for tech companies to apply this protection automatically to the images people upload to their platforms. They encourage the creators of these generative image models, and all interested parties, to collaborate actively so that technologies like theirs remain effective, at least until criminals find a way around them.

“A collaborative approach involving model developers, social media platforms and policy makers presents a strong defense against unauthorized image manipulation. Working on this pressing issue is critically important today,” says Salman. “And while I’m pleased to contribute to this solution, there is still a lot of work to be done to make this protection practical. Companies developing these models need to invest in engineering strong immunizations against the potential threats posed by these AI tools. As we enter this new era of generative models, let’s strive for equal parts potential and protection.”
