Regulation of hateful content on social media: where are we now?

Regulating hateful content on the internet remains a thorny issue. What obligations currently apply to social media platforms, and what are the legislative prospects?
  1. A mismatch between the law and reality 
The liability of social networks rests on an outdated distinction between host and publisher. Social networks benefit from host status, which limits their liability: the LCEN law of 21 June 2004 defines hosts as those who ensure, "even free of charge, the provision to the public by online public communication services, the storage of signals, writings, images, sounds or messages of any kind provided by the recipients of these services". A hosting provider must therefore be notified of contentious content before its liability can potentially be engaged. The platform is thus treated as a technical intermediary that plays no part in the creation or selection of the content it disseminates.
  2. A thorny issue 
Regulating hateful content on the internet is inherently problematic, since the illegality of a given piece of content depends on multiple contextual factors. It may also seem odd to entrust this moderation – which borders on adjudicating public decency – to a private entity. These were the limits of the Avia bill, which required social networks, collaborative platforms and search engines to remove terrorist and child-pornography content within one to 24 hours. The Constitutional Council ruled that giving social networks the unilateral power to remove content on the basis of the private operator's own assessment was unacceptable. Following that ruling, the bill essentially retains only two measures:
– simplifying the reporting of hateful content to platforms;
– creating an online hate observatory (in place since July 2020; the e-Enfance Association is one of its expert members).
  3. Imposing stricter obligations on social media platforms 
The European Commission has signalled its intention to act proactively through the Digital Services Act, which comprises two regulations on digital services. The first aims to clarify responsibilities in relation to digital services, particularly social media, in order to:
– protect users from illegal products, content or services;
– protect their fundamental rights online;
– ensure transparency and regulatory oversight of online platforms.
The second aims to give Member States an instrument for regulating social media platforms upstream. At the national level, the public authorities wish to step up the monitoring of online platforms and strengthen the mechanisms for responding to calls for violence. Several options are under consideration:
– subjecting platforms to a best-efforts obligation ("obligation de moyens") regarding the notification and processing of reported content;
– lifting anonymity so that the perpetrators of criminal acts on social networks can be prosecuted;
– creating an offence of "endangerment through the publication of personal data" on the internet.
These measures would be implemented in line with the work currently being finalised under the Digital Services Act and could be presented as part of the draft law on combating separatism, expected at the end of the year.

Let us work together to combat online harassment and violence!