X is more likely to remove deepfakes if they are reported as copyright infringement. That is the finding of researchers at the University of Michigan: content reported as non-consensual intimate imagery was removed from X, formerly Twitter, only after several weeks or not at all.
The images that the researchers reported as non-consensual depictions of nudity were still online three weeks after being reported. The accounts that had posted them were neither suspended nor warned in any way.
Lack of legal framework in the USA
However, according to the paper's authors, not every victim can use a DMCA notice to have non-consensually published photos taken down: the copyright to a photo belongs to the person who took it, so if someone else took the picture, the DMCA does not apply. In addition, according to 404 Media, filing such a report apparently requires providing relatively extensive information. There are services that can be hired to file these reports, but not every victim can afford the associated costs.
The authors of the study conclude that a law against the non-consensual distribution of intimate personal content is needed to compel internet platforms to react just as quickly to such reports.
EU legislation as a positive example
They cite the GDPR as a positive example: the General Data Protection Regulation has shaken up how platforms handle users' data and content, and the data protection requirements it formulates are important steps in the right direction. According to the authors, the protection of intimate personal imagery requires a similarly binding legal framework. A peer review of the study is still pending.
In the EU, non-consensual publications of nude images and deepfakes fall under the Digital Services Act, or DSA for short. The DSA has been fully in force since February 2024 and was implemented in Germany through the Digitale-Dienste-Gesetz. The law requires platforms to moderate such content promptly.
The short-message service run by Elon Musk has already had first-hand experience with the law: at the end of 2023, the EU Commission opened formal proceedings against the service to examine whether X may have violated the DSA in areas including risk management, content moderation and dark patterns.
(kst)