Since its acquisition by US billionaire Elon Musk, Twitter has been making headlines almost every day, and rarely good ones. Advertisers and users alike are concerned about the direction the platform is taking. Musk had announced that the content rules would be completely reviewed. The fear: hate speech, abuse and fake news could increase. However, even before the takeover, Twitter had problems dealing with critical content. That is shown by a new lawsuit brought by the Würzburg lawyer Chan-jo Jun.
By 24 November, the short message service is expected to respond to the Frankfurt Regional Court, Jun announced on Twitter. The media lawyer has filed an urgent application for an injunction. This means it is primarily about a preliminary decision to secure "urgent claims"; in this case, that illegal content should be removed. It would be the first case of this kind, at least according to his previous research, Jun admits in his tweet.
Würzburg lawyer sues Twitter: illegal tweets are not being deleted
So what exactly is the case about? Twitter has repeatedly refused to delete illegal content, according to Jun. He represents Michael Blume, the anti-Semitism commissioner of the state government of Baden-Württemberg. Blume has been "exposed to attacks repeatedly and for a long time," Jun explained to BR. On Twitter, for example, "blatantly false factual allegations" were spread about an alleged relationship between Blume and a minor. However, Twitter did not want to delete the corresponding tweets. "We see a pattern that Twitter is not prepared to apply German law and to protect its users, especially in cases of defamation," says Jun.
Dozens of illegal tweets remained online, or were even republished after they had been reported and reviewed, the lawyer said on Twitter. In essence, the case is also about content moderation on the platform, a problem that could be exacerbated by Musk's takeover. The mass layoffs also affected Twitter's content moderation team, as confirmed by Yoel Roth, Head of Safety & Integrity. Officially, it is said that the layoffs will not change anything about existing processes. So there are still employees who review illegal content and decide whether or not to delete it. However, Jun warns that moderation could in future be handled more by algorithms: "Musk wants to solve everything with algorithms. But protecting freedom of expression and human dignity still requires people."
The statements of Twitter's lawyers point in the same direction. They consider a manual "verification obligation" unreasonable: it would endanger the functioning of the platform, or at least make it disproportionately more difficult, according to a letter that Jun published in part. "We've seen in the past that this doesn't work. Algorithms can be used to find illegal content, but the decision whether something really has to be removed must ultimately be made by a human. That costs money," Jun explains to BR.
"The platform would collapse": Twitter apparently has problems moderating content
In a Twitter thread, Jun refers to the case of the Austrian politician Eva Glawischnig, which ended up before the European Court of Justice. The court ordered Facebook to delete not only the reported posts but also identical or equivalent content. The case "made it clear that automated procedures may be used, but do not mark the limit of what can reasonably be expected," according to Jun. The proceedings should now clarify whether Musk can hide behind artificial intelligence. "Or can he be expected, and I am convinced he can, to provide adequately equipped moderation teams for manual, qualified and legally verifiable decisions?"
Twitter has not yet publicly commented on the court case. But according to BR, Jun expects a quick decision. Similar to the Glawischnig ruling, the lawyer wants to establish that "similar defamation" must also be removed from Twitter, not just the specific tweets that were reported. This should happen across the entire platform, because with "geo-blocking" it is also possible to block content only at a regional level, which is easy to circumvent. "Here, however, Twitter argues in court that the platform would collapse," writes Jun. Anyone who operates such a far-reaching opinion platform should at least meet the basic requirement of compliance with the law. "If Twitter says its business model cannot do without the violation, it does not deserve protection."
Against Facebook, Jun succeeded with a similar lawsuit. He represented the Green politician Renate Künast, who wanted to take action against the spread of a false quote on the platform. The false claim had spread uncontrollably because Facebook deleted the reported posts but did not follow up when they were uploaded again. Ultimately, the Meta group was forced to take more consistent action against hate comments: if content is reported, it must not only be deleted; the platform must also actively search for identical or "essentially identical" copies of the original post, and these copies must be removed as well. The ruling is considered groundbreaking, and it could become important in the Twitter proceedings too.