
What has changed in the law since 2020

The trial over the assassination of Samuel Paty continues before the special assize court in Paris. Several of the defendants are charged with waging a hate campaign on social media in the run-up to the murder. Since 2020, new laws, both European and French, have come into force to combat online violence. What do these laws say, and are they effective?

The trial for the assassination of Samuel Paty began Monday, November 4. In the dock are those who stoked controversy by deliberately distorting what happened in one of the teacher’s lessons on freedom of expression. In a video, the father of a student, who is among those on trial, called on viewers to write to the school’s management to have “this sick man” fired, and publicly shared the teacher’s name, mobile phone number and the school’s address.

Insults, threats and hateful comments targeting the teacher and the school’s principal flooded Facebook, WhatsApp, Instagram, Twitter, Snapchat, YouTube, TikTok, Google and more, turning these platforms into virtual courtrooms.

More broadly, in recent years, multiple legislative initiatives at the French and European levels have sought to make content platforms, social networks and users themselves responsible for limiting the impact of online violence.

Measures to force platforms to be more transparent

In force for the largest platforms since 2023, the European Digital Services Act (DSA) aims in particular to curb the virality of violent content so that users are less exposed to it. It also requires social networks and sharing platforms such as Facebook, Google Search, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, X (formerly Twitter) and YouTube to publish transparency reports.

Under this text, social networks must also provide information on their content moderation teams, set up mechanisms for reporting illegal content and provide information on the operation of their recommendation algorithms.

This disclosure of how the algorithms work should allow users to better understand and control what they see online. It is especially useful to the European Centre for Algorithmic Transparency, which supports enforcement of the regulation.

A portrait of murdered French teacher Samuel Paty during a tribute ceremony at the Bois d’Aulne school in Conflans-Sainte-Honorine (78), October 16, 2023. Bertrand Guay/AFP

Platforms must also assess and mitigate the systemic risks their algorithms pose to public safety and fundamental rights, such as the spread of online hatred. These elements must appear in reports made available to the European Commission. Failing that, or if a platform’s actions fall short of the DSA’s expectations, the European Commission contacts the company and, if it remains inactive, publicly reminds it of its legal obligations. This is precisely what Commissioner Thierry Breton did last August when he addressed Elon Musk.

A whistleblower tool allowing employees and other insiders to report harmful practices by very large online platforms and search engines has also been put in place.

Before these new measures, these companies were subject only to a non-legally-binding code of conduct, whose results had reached their limits.

How can the effectiveness of these measures be evaluated? We will soon find out: on December 18, 2023, following a preliminary investigation, the European Commission opened proceedings against X (formerly Twitter) for non-compliance with its transparency obligations and for failures in content moderation. X is also suspected of preventing eligible researchers from independently accessing its data, as the regulation requires. The company faces a fine of up to 6% of its global turnover and, in the event of repeated breaches, access to its service may even be restricted in the European Union.

Today, the sword of Damocles of financial sanctions, and above all of having their service blocked on European territory, poses an economic and reputational risk that platforms want to avoid. The European Commission has opened several formal proceedings: against the social network X in 2023, and against TikTok, AliExpress and Meta this year.

Measures to combat cyberharassment

In France, the law on “Securing and Regulating the Digital Space” (SREN), promulgated in May 2024, sanctions platforms that fail to remove illegal content promptly (with a fine of €75,000) and also puts in place mechanisms to raise awareness and protect users from online dangers.

In practice, this means informing middle-school pupils and their parents at the start of the school year. A digital citizen reserve (attached to the civic reserve) has also been established to combat hatred in the digital space and to carry out missions of education, inclusion and improvement of online information. This system, which provides an official channel for alerting the public prosecutor, would have been welcome four years ago, at the time of the Paty affair. Back then, only school mediation and the rectorate’s secularism officer were activated, to no effect.

Online platforms now have growing legal obligations, arising from the European regulation and the SREN law, to prevent and respond to cyberharassment and illegal content, with accountability at multiple levels.

The proactive moderation requirement means that platforms must put systems in place to quickly detect, report and remove content that is hateful, violent or incites cyberharassment. It is Arcom (the French Regulatory Authority for Audiovisual and Digital Communication) which ensures that publishers and hosting providers actually remove this content, and which conducts audits to verify compliance with the rules. This obligation, arising from the European Digital Services Act and the SREN law, is to be implemented through the publication of an annual report on the number of reports made. To date, the first report has not been published.


A square named after teacher Samuel Paty, in Bordeaux. Christophe Archambault/AFP

Hosting providers who are aware of the illegal nature of content and who neither inform the competent authorities (for example, the public prosecutor) nor block access to the publication face sanctions of up to a €250,000 fine and one year’s imprisonment for the manager.

Platforms must also educate their users about the risks of cyberharassment and provide tools to easily report harmful content and behavior. This is the case on Facebook and Instagram, for example.

In France, cyberharassment is an offense that is severely punished under laws against moral harassment and repeated acts of online violence, particularly on social networks. A person found guilty of moral harassment faces up to two years’ imprisonment and a €30,000 fine. If the victim is a minor, the penalties rise to three years’ imprisonment and a €45,000 fine. Heavier penalties apply if the harassment leaves the victim unable to work, or leads to suicide or attempted suicide, with a maximum of ten years’ imprisonment and a €150,000 fine.

To protect victims of group harassment, the 2018 Schiappa law introduced the offense of “pack harassment”, or digital raiding, which penalizes concerted attacks by multiple Internet users against a single victim, even if no individual participant acted repeatedly.

The SREN law also provides that people found guilty of cyberharassment can be banned from social networks for six months, or a year in the event of a repeat offense. The ban covers the creation of new accounts during the ban period. Platforms risk fines of up to €75,000 if they fail to ban users convicted of cyberharassment or to block repeat offenders from creating new accounts.
