SOCIAL NETWORKS – For a week, Chinese citizens have been taking to the streets in an unprecedented, massive show of disagreement with the government's "zero-Covid" policy, and the protesters are playing cat and mouse with social networks to spread their images.
While Chinese platforms were quickly censored by the authorities, it didn't take long for protesters to turn to Western ones. Filters, parasitic noise: anything goes to fool the algorithms of Chinese networks and, above all, to avoid detection. And the last resort against censorship is Twitter, where the already-altered content ends up.
If these images manage to travel around the world, it is above all because Twitter has lost far too many staff to filter all content at scale, but not only that: users also employ well-thought-out strategies to keep their posts from being deleted.
“A computer is not very intelligent”
All social networks, whether Twitter, Weibo or Facebook, use algorithms and artificial intelligence to recognize the images shared on their platforms and, where necessary, filter them according to their own rules. To better understand the role these systems play when content touches on political issues of this magnitude, BZhunt hacker Victor Louis Pouchet answered questions from the HuffPost.
It is day 3 and the videos of the protests are still spreading on Chinese social networks. Let’s see some familiar tricks, people use fil… https://t.co/bTUiXx8LGn
— Paul Mozur Meng Jianguo (@paulmozur)
If, in the streets of China, protesters evade the censorship of Xi Jinping's government with blank white sheets of paper, on social networks they do it with other tricks. In the tweet above, you can see images of protests: to spread them, the user has added a filter, an element that disturbs the original image, as well as music unrelated to the content.
"When the algorithm analyzes video content, it inspects each pixel and tries to associate it with something it already knows. In the case of the demonstrations in China, the AI will then try to pick out a keyword on a sign, for example, or the language spoken and written on it," explains the hacker.
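To picture what that keyword-spotting step might look like, here is a minimal sketch in Python of keyword-based screening on a single video frame. It assumes the pytesseract OCR library is installed, and the blocklist and file name are hypothetical; this only illustrates the principle the hacker describes, not the actual pipeline of any platform.

# Minimal sketch of keyword screening on a video frame (illustrative only).
# Assumptions: pytesseract and the Tesseract OCR engine are installed;
# the blocklist and the file name are hypothetical.
from PIL import Image
import pytesseract

BLOCKLIST = {"protest", "strike"}  # hypothetical keywords a filter might look for

def flag_frame(path: str) -> bool:
    """Return True if OCR finds a blocklisted keyword in the frame."""
    text = pytesseract.image_to_string(Image.open(path)).lower()
    return any(word in text for word in BLOCKLIST)

# A sharp photo of a sign is likely to be flagged; a blurred or noisy copy often is not.
print(flag_frame("sign_frame.jpg"))  # hypothetical file name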
Preventing the AI from contextualizing images
But recognizing the elements in a video is not enough; associating them with the content's real context is what makes the difference. "If we take the example of a crowd, the AI will wonder whether it is a crowd at a concert, a rally or a shopping mall, and then try to put it into context," he says.
This is why adding "parasitic" and "superfluous" elements complicates the AI's task: "Adding filters or clutter, like text or music, outside the basic context makes the AI much less sure of what it is seeing. After that, the algorithm has to go even further to make sense of what it sees, in this case opposition to the government, and put it into context."
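To make that loss of confidence concrete, here is a rough sketch using an off-the-shelf ImageNet classifier from torchvision as a stand-in for a platform's recognition model (the real systems are far more sophisticated): overlaying pixel noise tends to lower the model's confidence in what it sees. The file name "crowd.jpg" is a placeholder.

# Illustrative sketch: a pretrained classifier as a stand-in for a moderation model.
import numpy as np
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT          # pretrained ImageNet weights
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                  # matching preprocessing pipeline

def top_prediction(img):
    """Return the most likely label and its softmax confidence."""
    with torch.no_grad():
        probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    return weights.meta["categories"][idx.item()], conf.item()

def add_noise(img, sigma=40.0):
    """Overlay Gaussian pixel noise, a digital 'parasite'."""
    arr = np.asarray(img, dtype=np.float32)
    arr += np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

original = Image.open("crowd.jpg").convert("RGB")   # placeholder file name
print(top_prediction(original))                     # usually a confident label
print(top_prediction(add_noise(original)))          # typically lower confidence, sometimes a different label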
It’s hard to censor videos. AI-trained software can find specific videos or items in videos (such as candles and… https://t.co/wDKa7kD5hi
— Paul Mozur Meng Jianguo (@paulmozur)
In this other example, a user films the content they want to share, but through the screen of another phone used as a frame. "In this case the algorithm is not smart enough: it will say that it is a video of a phone, without paying attention to what is playing on that phone," observes Victor Louis Pouchet. The more unexpected layers a user adds to their content, the longer it takes the AI to grasp their real intention.
"The concept of filters is very interesting and lets you bypass an algorithm. Take YouTube: an artificial intelligence automatically detects copyrighted content, but to get around it some people flip the image, zoom in or add filters so that it slips through. The more parasites we add, the more the statistics get scrambled," adds the hacker. "These algorithms remain very narrow learning patterns. A computer is not very smart: it does what we tell it to, but not always what we want it to do," he notes.
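As an illustration of that last point, here is a tiny sketch of a naive "average hash" fingerprint, a much-simplified stand-in for the fingerprinting that systems like YouTube's Content ID perform. Flipping or cropping an image pushes its hash far from the original, which is exactly why real systems have to be far more robust than this. File names are placeholders.

# Illustrative sketch: a naive image fingerprint and how simple edits break it.
import numpy as np
from PIL import Image, ImageOps

def average_hash(img, size=8):
    """Shrink to size x size grayscale and threshold each pixel at the mean."""
    small = np.asarray(img.convert("L").resize((size, size)), dtype=np.float32)
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Number of bits that differ between two fingerprints."""
    return int(np.count_nonzero(a != b))

frame = Image.open("clip_frame.jpg")                                 # placeholder file name
flipped = ImageOps.mirror(frame)                                     # horizontal flip
zoomed = frame.crop((20, 20, frame.width - 20, frame.height - 20))   # slight zoom / crop

print(hamming(average_hash(frame), average_hash(frame)))    # 0: an exact copy matches
print(hamming(average_hash(frame), average_hash(flipped)))  # usually well above a match threshold
print(hamming(average_hash(frame), average_hash(zoomed)))   # also drifts away from the original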
A reaction from the Chinese government?
And on Twitter, unable to censor, the government appears to have chosen to muddy the waters instead, drowning out the protest content in a mass of pornographic posts.
I searched for 北京 (Beijing) today (11/28 ~ 5:00 Beijing Time) and identified accounts with tweets appearing in the… https://t.co/JBPBKcGEt5
— Air Moving Device (@AirMovingDevice)
According to this Chinese analyst, many fake accounts are being used to spread this illegal content in order to distort Twitter search results for Chinese locations. In many cases, the names of the places where the protest gatherings were largest were attached to the pornographic content, so that searches for those locations surfaced it instead of the protests.
But the hacker points out: "Moderation remains an inherent problem for any social network, especially when there are few humans behind it, and there will inevitably always be gaps in the net. And the techniques used at one moment will be obsolete tomorrow, so it is an endless cycle of starting over."