LE MAG J&A

NEWSLETTER IT-DATA MARCH 2026

Newsletter / 31 March 2026

1. The CNIL fines France Travail 5 million euros for failure to secure personal data


On January 22, 2026, the CNIL imposed a fine of 5 million euros on France Travail (formerly Pôle Emploi) for failing to ensure the security of personal data, as required by Article 32 of the GDPR.


In 2024, cyber attackers targeted France Travail’s information system and compromised the accounts of advisors from the “CAP EMPLOI” service.


This intrusion exposed the personal data of all individuals registered with France Travail, or registered at any point over the past twenty years, as well as of users with a candidate space on francetravail.fr, including social security numbers, email and postal addresses, and phone numbers. Sensitive health information, however, was not compromised.


While the CNIL recalled that the obligation to ensure the security of personal data is an obligation of means, and that no data controller can protect against every so-called “social engineering” attack, it found that France Travail breached this obligation by failing to implement security measures adapted to the risks identified in its own impact assessments.


The CNIL noted the insufficient robustness of password-based authentication (length and complexity criteria, the threshold for blocking unsuccessful attempts, and complementary restriction mechanisms), a lack of logging measures to detect abnormal behavior, and overly broad access rights that allowed advisors to access the data of individuals they were not assisting.
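
For technically minded readers, the sketch below shows, in Python, what two of the missing controls can look like in practice: password length and complexity criteria, and a lockout threshold on failed login attempts combined with logging of abnormal behavior. The policy values are assumptions chosen for illustration; nothing here reflects France Travail’s actual systems.

    import logging
    import re
    from collections import defaultdict

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("auth")

    # Assumed policy values, for illustration only.
    MIN_LENGTH = 12
    MAX_FAILED_ATTEMPTS = 5

    # In-memory failure counter per account (a real system would persist this).
    failed_attempts: dict[str, int] = defaultdict(int)

    def password_is_robust(password: str) -> bool:
        """Apply length and complexity criteria (upper, lower, digit, symbol)."""
        return all([
            len(password) >= MIN_LENGTH,
            re.search(r"[A-Z]", password) is not None,
            re.search(r"[a-z]", password) is not None,
            re.search(r"\d", password) is not None,
            re.search(r"[^A-Za-z0-9]", password) is not None,
        ])

    def record_failed_login(account: str) -> bool:
        """Count a failed attempt; lock the account past the threshold and log it."""
        failed_attempts[account] += 1
        if failed_attempts[account] >= MAX_FAILED_ATTEMPTS:
            log.warning("account %s locked after %d failed attempts",
                        account, failed_attempts[account])
            return True  # the caller should then block further attempts
        return False

The third finding, overly broad access rights, calls for the mirror-image check: an advisor’s query should be validated against the list of individuals actually assigned to them before any record is returned.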


This decision highlights how important it is for public and private organizations alike to secure their information systems, to limit access to authorized personnel only, and to translate impact assessments into concrete actions that reduce the risk of data breaches.


2. Apple loses to UFC-Que Choisir over abusive clauses and data protection failures


In a ruling dated February 27, 2026, the Paris Court of Appeal largely upheld the liability of Apple Distribution International Limited (Apple) in the action brought by UFC-Que Choisir concerning the terms of use of the iTunes service (now Apple Music) and its data protection documents.


The action, initiated in 2016, sought to challenge several contractual clauses and provisions relating to the processing of personal data as abusive and unlawful. At first instance, the Paris judicial court had found breaches of consumer law and the GDPR. Apple appealed, contesting in particular the admissibility of the action and the merits of the rulings against it.


Regarding admissibility, the Court confirmed that an association may act without a mandate under Article 80 of the GDPR, while specifying that the examination must be limited to clauses still in effect.


On the merits, the Court found several clauses relating to the processing of personal data unlawful, particularly those concerning data retention periods, the purposes and recipients of the data, international transfers, profiling, and the exercise of user rights. It noted a lack of clarity and precision: the information provided did not meet the GDPR’s transparency requirements.


Under consumer law, certain clauses were also deemed abusive. The Court highlighted their standardized nature, their lack of readability, and a significant imbalance to the detriment of users, resulting from general wording that leaves the operator room for interpretation.


The “Apple Music and Privacy” and “Privacy Commitment” documents were sanctioned for similar reasons.


Regarding sanctions, the Court took into account the duration of the breaches, the number of users concerned, and the nature of the disputed clauses, and increased to 50,000 euros the damages awarded for the harm caused to the collective interest of consumers.


This decision underscores the importance of clear, precise, and transparent drafting of contractual terms, both under the GDPR and consumer law, to protect consumers.


3. The French Council of State upholds the 40 million euro fine imposed on Criteo for GDPR violations


Criteo provides services that enable targeted advertisements to be displayed on websites managed and hosted by third parties. To this end, it collects and processes, using cookies, the browsing data of individuals visiting websites whose managers or hosts have entered into paid partnership agreements with it.


On March 4, 2026, the French Council of State confirmed the CNIL’s reasoning in all respects, holding the following:


  • Criteo acts as a joint controller when it places cookies on the terminals of visitors to websites managed by its commercial partners, and as a controller when it subsequently processes the data collected (particularly to configure and improve its algorithmic targeting processes).
  • The data processed by Criteo are personal data: although pseudonymized, they remain such that identifying certain individuals would not be technically impossible (see the illustrative sketch below this list).
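
A minimal Python sketch, purely illustrative and in no way a description of Criteo’s actual techniques, shows why pseudonymized identifiers remain personal data: the pseudonym is stable, so the link back to the individual can be restored by anyone holding the original identifier.

    import hashlib

    def pseudonymize(user_id: str) -> str:
        """Derive a stable pseudonym from a raw identifier (illustrative only)."""
        return hashlib.sha256(user_id.encode("utf-8")).hexdigest()

    # The same input always yields the same pseudonym...
    assert pseudonymize("user@example.com") == pseudonymize("user@example.com")
    # ...so records keyed by the pseudonym can be re-linked to the person
    # whenever the original identifier resurfaces: identification is not
    # technically impossible.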


Consequently, Criteo failed to meet the following obligations:


  • Criteo did not enter into agreements compliant with Article 26 of the GDPR with its partners as a joint controller;
  • Criteo was unable to provide proof that the consent of the data subjects was properly obtained for processing based on consent;
  • Criteo failed to inform data subjects in accordance with Articles 12 and 13 of the GDPR;
  • Criteo failed in its obligations regarding the right to withdraw consent and data erasure: while individuals were no longer exposed to targeted advertisements, their data were retained and processed for algorithm improvement.


Lastly, the French Council of State upheld the amount of the fine, noting the particular seriousness of the breaches in view of the nature of the requirements violated, the number of users concerned (370 million user identifiers in the European Union, including 50 million in France), Criteo’s position as a major player in the online advertising sector, and the direct gain it derived from the breaches found.


  • To read the decision, click here


4. European Commission and Irish authority investigate X and its AI Grok


X (formerly Twitter) is the subject of two separate but complementary investigations concerning its artificial intelligence tool Grok, integrated into the platform since 2024: the first is being conducted by the European Commission, while the second was initiated by the Irish data protection authority.


At the end of January, the European Commission opened a formal review procedure under the Digital Services Act (DSA). This new investigation builds on the one launched in December 2023, which initially concerned the platform’s overall compliance; it now extends to a specific evaluation of the platform’s recommendation systems and of Grok’s features, which allow users to produce text and visual content and to integrate contextual elements into their posts.


The Commission is examining potential violations of Articles 34(1) and (2) of the DSA, relating to the assessment and mitigation of systemic risks, Article 35(1) on specific mitigation measures, and Article 42(2), which requires the completion and submission of a risk assessment report before deploying new features.


As for the Irish data protection authority (DPC), it announced on February 17, 2026, the opening of an investigation into Grok’s processing of personal data. This procedure follows the dissemination of deepfake sexual content generated from photos or videos of real people, including minors. The DPC is verifying, in particular, whether X has complied with the GDPR’s principles and obligations: lawfulness, fairness, and transparency of processing; data minimization; accuracy; the conduct of a data protection impact assessment; and privacy by design and by default.


These investigations illustrate the complementarity of the DSA and the GDPR: while the DSA governs the security and responsibility of online services, the GDPR governs the processing of personal data, including that used by AI tools.


5. The bill to ban social media access for minors under 15 years old has been adopted by the National Assembly


Amid growing concerns about cyberbullying, exposure to inappropriate content, and effects on sleep and mental health, the National Assembly adopted on January 26, 2026, a bill to ban social media access for minors under 15 years old.


This proposal distinguishes access conditions as follows:


  • For services listed on an official list of platforms deemed likely to harm the development of minors under 15 years old, access remains prohibited, even with the consent of legal guardians;
  • For other platforms, access may be authorized only with the explicit consent of at least one parent or guardian, specifying the accessible content, the maximum daily duration, and usage times.


Contracts concluded in violation of these provisions would be null and void, and the measure would take effect on September 1, 2026.


The effectiveness of this law would thus rest primarily on the platforms, which must verify users’ ages. In this regard, in its guidelines published on July 14, 2025 (link), the European Commission recommended adopting age verification methods that are “precise, reliable, robust, non-intrusive, and non-discriminatory”. An age verification solution has also been developed and is in the testing phase (link).


The bill has now been transmitted to the Senate, where it will be debated in public session on March 31, 2026.


The European framework already imposes requirements on platforms regarding the protection of minors. Article 28 of the DSA establishes obligations adapted to the nature of the service, its functionalities, and the identified risks, without providing for a general age-based prohibition.


It is worth noting that WhatsApp has just announced the launch of a “pre-teen” version of its messaging service, dedicated to users under 13 (enhanced parental controls, no advertising, and no artificial intelligence features).

In our view, this new version is linked to WhatsApp’s designation by the European Commission in January as a very large online platform in respect of its channels service, which requires it to comply with the DSA’s stricter obligations, particularly concerning minors, by the end of May. No age verification or estimation measures are in place for this “pre-teen” version: Meta relies on parental action and self-declared age to limit minors’ access to WhatsApp channel content. It is not specified whether Meta has also assessed the systemic risks associated with these channels, or whether other measures have been implemented to mitigate their consequences. Meta still has two months to do so.


  • For more information on the subject, the bill adopted by the National Assembly is available here, and the interview with Emilie de Vaucresson in Les Echos on the pre-teen version of WhatsApp is here.

