AI: increased risks to the protection of minors online observed by the e-Enfance Association / 3018

Artificial intelligence: a technological revolution with increased risks for minors online.

Twenty years ago, the e-Enfance/3018 association was on the front line protecting children through the digital revolution of the Internet. Today, a new technological revolution is upon us: artificial intelligence (AI). Its possibilities are vast, but its dangers for the protection of children online are alarming.

Specific instances of cyber violence involving AI are regularly reported to 3018, revealing a worrying increase in risks: image misappropriation, blackmail, identity theft... In the face of these new tools, vigilance and information are more necessary than ever.

An alarming proliferation of deepfakes and deepnudes

AI has revolutionised content production: with just a few clicks, anyone can generate ultra-realistic images or manipulate photos and videos. This ease of access has led to an explosion in deepfakes and deepnudes used for humiliation or blackmail.

The phenomenon is all the more serious given that images of children, sometimes originating from photographs shared by their own parents on social media, are modified and disseminated on paedophile criminal forums. In 2023, more than 4,700 items of AI-generated child sexual abuse material were reported to the National Center for Missing & Exploited Children, 50% of which came from photographs initially shared by parents (source: 2024 report "Generative AI: the new weapon for child sexual abuse" by the Fondation pour l'Enfance (Children's Foundation)).

Exploitation facilitated by virality and online anonymity

The danger does not end with the creation of this content. Once online, these images become weapons of manipulation and blackmail. Teenagers fall victim to sextortion after compromising deepfakes are circulated, forcing them to pay a ransom or send further images under threat of public exposure.

AI also facilitates identity theft and grooming, where malicious adults create ultra-realistic fake profiles to trap children. Artificial voices imitating those of loved ones can be used to manipulate minors, encouraging them to disclose sensitive information or send intimate content.

AI, a vehicle for misinformation and mental health risks

Deepfakes also contribute to the spread of fake news, distorting young people's perception of sensitive topics such as politics, health or even their own identity.

Conversational AI generates artificial interactions that can affect the mental health of minors. Some young people develop emotional ties with chatbots or virtual avatars, leading to social isolation and a loss of bearings. This gradual disconnection from reality can worsen anxiety and depression.

Regulate and raise awareness: an urgent matter

In response to these threats, legislation is evolving, but often too slowly relative to the pace of technological advancement. It is crucial to strengthen regulation and the detection of abuse. AI itself can be an ally in improving moderation and reporting tools.

The three areas of focus for e-Enfance / 3018:

  • Active advocacy to integrate the protection of children into AI regulations.
  • Massive mobilisation in the field (100 interventions and training sessions per week).
  • The 3018 service, a helpline dedicated to young victims of cyber violence and harassment, with the ability to have accounts and content harmful to minors removed (160,000 contacts in 2024; free, anonymous, confidential, available 24/7). This direct link with young people is unique.

> Find out about all our initiatives to protect minors in the age of AI.

Let us work together to combat online harassment and violence!