DPO NEWSLETTER: AN UPDATE FROM THE IT-DATA TEAM


 

1) CNIL SANCTION: COMPANY SAF LOGISTICS FINED 200,000 EUROS

On 18 September 2023, the Commission Nationale de l’Informatique et des Libertés (CNIL) fined the Chinese air freight company SAF LOGISTICS €200,000 and published the penalty on its website.

The severity of this penalty is justified by the seriousness of the breaches committed by the company:

 

  • Failure to comply with the principle of minimisation (Article 5(1)(c) of the GDPR): the data controller may only collect data that is necessary for the purpose of the processing. In this case, the company was collecting personal data on members of its employees’ families (identity, contact details, job title, employer and marital status), which had no apparent use.

 

  • Unlawful collection of sensitive data (Article 9 of the GDPR) and of data relating to offences, convictions and security measures (Article 10): in this case, employees were asked to provide so-called sensitive data, i.e. blood group, ethnicity and political affiliation. As a matter of principle, the collection of sensitive data is prohibited. By way of exception, it is permitted if it appears legitimate with regard to the purpose of the processing and if the data controller has an appropriate legal basis, which was not the case here. Furthermore, SAF LOGISTICS collected and kept extracts from the criminal records of employees working in air freight, who had already been cleared by the competent authorities following an administrative enquiry. Such collection therefore did not appear necessary.

 

  • Failure to cooperate with the supervisory authority (Article 31 of the GDPR): the CNIL also considered that the company had deliberately attempted to obstruct the inspection procedure. SAF LOGISTICS had only partially translated the form at issue, which was written in Chinese: the fields relating to ethnicity and political affiliation were missing from the translation. It should be noted that a lack of cooperation is an aggravating factor in determining the amount of the penalty imposed by the supervisory authority.

 

2) THE CONTROLLER AND THE PROCESSOR ARE LIABLE IN THE EVENT OF FAILURE TO CONCLUDE A DATA PROCESSING AGREEMENT

 

On 29 September 2023, the Belgian Data Protection Authority (DPA) issued a decision shedding interesting light on the data controller’s and processor’s obligations under Article 28 of the GDPR and on the late remediation of breaches. In this regard, the DPA stated that:

 

  • Both the controller and the processor breached Article 28 of the GDPR by failing to enter into a data processing agreement at the outset of the data processing. The obligation to enter into a contract, or to be bound by another binding legal act, falls on both the controller and the processor, not on the controller alone.
  • The retroactive clause included in the agreement does not compensate for the absence of a contract at the time of the events: only the date of signature of the data processing agreement should be taken into account to assess the compliance of the processing concerned. The DPA pointed out that allowing such retroactivity would enable companies to evade the temporal application of the obligation laid down in Article 28(3) of the GDPR, whereas the GDPR itself provided for a two-year period between its entry into force and its application precisely so that the entities concerned could achieve compliance gradually, with a view to guaranteeing the protection of data subjects’ rights.

 

3) A NEW COMPLAINT HAS BEEN LODGED AGAINST THE OPENAI START-UP BEHIND THE CHATGPT GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEM

The Polish Data Protection Office has opened an investigation following a complaint filed by the Polish researcher Lukasz Olejnik against the start-up OpenAI in September 2023. The complaint alleges numerous failures by the chatbot to comply with the General Data Protection Regulation (GDPR).

 

Breaches of the GDPR raised by the complaint

 

The complaint identifies numerous breaches of the GDPR, including a violation of the following articles:

 

  • Article 5 on fair processing, purpose limitation and data accuracy;
  • Article 6 on the legal basis for processing;
  • Articles 12 and 14 on information for data subjects;
  • Article 15 on the data subject’s right of access to information on the processing of his or her data;
  • Article 16 on the right of data subjects to rectify inaccurate personal data.

 

According to the complaint, the legitimate interests pursued by OpenAI can hardly outweigh the interference with users’ privacy.

 

Repeated complaints against OpenAI

This is not the first time that ChatGPT has been the target of such accusations since it went online. Eight complaints have been lodged worldwide this year for breaches of personal data protection. These include:

 

  • The absence of consent from individuals to the processing of their data
  • Inaccurate data processing
  • No filter to check the age of individuals
  • Failure to respect the right to object.

 

The “scraping” technique used by this artificial intelligence (a technique that automatically extracts large amounts of information from one or more websites; an illustrative sketch follows the list below) was highlighted by the CNIL back in 2020 in a series of recommendations aimed at regulating this practice in the context of commercial canvassing. The CNIL’s inspections identified a number of breaches of data protection legislation, including:

 

  • Failure to inform those targeted by canvassing;
  • The absence of consent from individuals prior to canvassing;
  • Failure to respect their right to object.
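
For readers unfamiliar with the technique, the minimal sketch below illustrates what “scraping” means in practice. The URL and the data extracted are hypothetical; bulk collection of personal data in this way for canvassing purposes is precisely the practice that the CNIL’s recommendations address.

```python
# Minimal, hypothetical illustration of "scraping": fetch a public web
# page and bulk-extract information from it. The URL is fictitious.
import re
import urllib.request

url = "https://example.com/public-directory"  # hypothetical page
html = urllib.request.urlopen(url).read().decode("utf-8")

# Automatically extract every e-mail address appearing in the page:
# the kind of collection, without the data subjects' knowledge, that
# the CNIL's 2020 recommendations sought to regulate.
emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
print(f"{len(emails)} addresses collected")
```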

 

Towards better regulation of artificial intelligence?

In April 2021, the European Commission put forward a proposal for a regulation specifying new measures to ensure that artificial intelligence systems used in the European Union are safe, transparent, ethical and under human control. The proposal classifies systems as posing an unacceptable, high, limited or minimal risk, depending on their characteristics and purposes.

Pending the entry into force of this regulation, the CNIL is working to provide concrete responses to the issues raised by artificial intelligence. To this end, in May 2023 it published an action plan intended to pave the way for a regulatory framework enabling the operational deployment of artificial intelligence systems that respect personal data.

 


4) TRANSFER OF DATA TO THE UNITED STATES

On 10 July 2023, the European Commission adopted a new adequacy decision allowing transatlantic data transfers, known as the Data Privacy Framework (DPF).

Since 10 July, it has therefore been possible for companies subject to the GDPR to transfer personal data to US companies certified as “DPF” without recourse to the European Commission’s standard contractual clauses and additional measures.

It should be noted that the United Kingdom has also signed an agreement with the United States on the transfer of data, which will come into force on 12 October 2023.

As a reminder, on 16 July 2020, the Court of Justice of the European Union (CJEU) invalidated the Privacy Shield, the previous adequacy decision allowing the transfer of personal data to the United States.

 

1) The content of the Data Privacy Framework

The decision of 10 July 2023 formalises a number of binding guarantees in an attempt to remedy the weaknesses of the Privacy Shield, which had been invalidated three years earlier.

 

a) The new obligations

In order to benefit from this new framework and receive personal data from European residents, American companies will have to:

 

  • Declare their adherence to the DPF’s personal data protection principles (data minimisation, retention periods, security, etc.);
  • Provide certain mandatory information: the name of the organisation concerned, a description of the purposes for which the transfer of personal data is necessary, the personal data covered by the certification and the verification method chosen;
  • Formalise a privacy policy in line with the DPF principles, specifying the independent recourse mechanism available to data subjects and the statutory body responsible for ensuring compliance with these principles.

 

On Monday 17 July, the US Department of Commerce launched the Data Privacy Framework website, offering companies a one-stop shop for signing up to the DPF and listing the companies that have signed up.

Participating US companies must conduct annual self-assessments to demonstrate their compliance with the DPF requirements. In the event of a breach of these principles, the US Department of Commerce may impose sanctions.

It should be noted that companies already affiliated to the Privacy Shield are automatically affiliated to the DPF provided that they update their privacy policy before 10 October 2023.

 

b) The creation of a Data Protection Review Court

The DPF is innovative in that it establishes a Data Protection Review Court (DPRC) to provide EU residents with easier, impartial and independent access to remedies, and to ensure that breaches of the rules under the EU-US framework are dealt with effectively. The Court has investigative powers and can order binding corrective measures, such as the deletion of illegally imported data.

 

c) A new appeal mechanism for EU nationals

The planned appeal mechanism will operate at two levels:

 

  • Initially, the complaint will be lodged with the competent national authority (for example, the CNIL in France). This authority will be the complainant’s point of contact and will provide all information relating to the procedure. The complaint is then forwarded to the United States via the European Data Protection Board (EDPB), where it is examined by the Civil Liberties Protection Officer, who decides whether or not there has been a breach.
  • The complainant may appeal against the decision of the Civil Liberties Protection Officer to the DPRC. In each case, the DPRC will select a special advocate with the necessary experience to assist the complainant.

 

Other remedies such as arbitration are also available.

 

2) Future developments: new legal battles?

This new legal framework will be subject to periodic reviews, the first of which is scheduled for the year following the entry into force of the adequacy decision. These reviews will be carried out by the European Commission, the relevant American authorities (U.S. Department of Commerce, Federal Trade Commission and U.S. Department of Transportation) and by various representatives of the European data protection authorities.

Despite the introduction of these new safeguards, legal challenges are already under way.

On 6 September 2023, French MP Philippe Latombe (MoDem) brought two actions before the CJEU seeking the annulment of the DPF.

Max Schrems, president of the Austrian privacy protection association Noyb, which brought the actions against the previous agreements (Safe Harbor and Privacy Shield), is likely to follow suit.

 

5) ISSUES SURROUNDING THE MATERIAL SCOPE OF THE GDPR

A divisive position taken by an Advocate General concerning the material scope of the GDPR could, if followed by the CJEU, significantly limit the application of the GDPR in many sectors of activity (Case C-115/22).

In this case, the full name of an Austrian sportswoman, who had tested positive for doping, was published on the publicly accessible website of the independent Austrian Anti-Doping Agency (NADA).

The sportswoman asked the Austrian Independent Arbitration Commission (USK) to review this decision. The USK questioned, in particular, whether publishing on the Internet the personal data of a professional athlete sanctioned for doping was compatible with the GDPR. A reference for a preliminary ruling was therefore made to the CJEU.

The Advocate General considers that the GDPR is not applicable in this case insofar as the anti-doping rules essentially regulate the social and educational functions of sport rather than its economic aspects. Moreover, EU law currently contains no rules relating to Member States’ anti-doping policies. In the absence of a link between anti-doping policy and EU law, the GDPR cannot regulate such processing activities.

 

This analysis is based on Article 2(2)(a) of the GDPR, which states:

 

“This Regulation does not apply to the processing of personal data:

(a) in the course of an activity which falls outside the scope of Union law;”.

The scope of the Union’s intervention is variable and imprecise, leading to uncertainty as to its application to certain sectors.

In the alternative, and assuming that the GDPR applies, the Advocate General believes that the Austrian legislature’s decision to require the public disclosure of personal data of professional athletes who violate anti-doping rules is not subject to a proportionality test under the terms of the regulation.

However, the Advocate General’s conclusions are not binding on the CJEU. The European Court’s decision is therefore eagerly awaited, as it will clarify the application of the GDPR.

 


[1] Last March, the Italian data protection authority went so far as to temporarily suspend ChatGPT on its territory because of a suspected breach of European Union data protection rules.

OpenAI failed to implement an age verification system for users. Following on from this event, on 28 July a US class action denounced the accessibility of services to minors under the age of 13, as well as the use of “scraping” methods on platforms such as Instagram, Snapchat and even Microsoft Teams.

[2] Proposal for a Regulation laying down harmonised rules on artificial intelligence.

The “cyber-score” law comes into force: what are the new obligations for platform operators?

In the Senate report of 16 February 2022 on the introduction of cybersecurity certification for digital platforms aimed at the general public, Senator Anne-Catherine Loisier pointed out that, despite a steady increase in cyber attacks[1], companies were not changing their behaviour in the face of the threat[2].

 

In recognition of the fact that cybersecurity is an essential counterpart to the digital economy and, more broadly, to the digitalization of all areas of society, the legislator has imposed new obligations on platform operators.

Act no. 2022-309 of 3 March 2022 for the introduction of cybersecurity certification for digital platforms aimed at the general public (known as the “Cyber-score Act”) introduced into the Consumer Code[3] an obligation to inform consumers about the level of security of digital platforms and of the data they host.

This law introduces an obligation for digital operators to inform users of their services about the level of security of their data, something not provided for in the General Data Protection Regulation (GDPR). The GDPR merely requires that personal data security measures be put in place; it does not require that data subjects be informed of their robustness[4].

The new article L.111-7-3 of the French Consumer Code states that:

 

“Operators of online platforms (…) whose activity exceeds one or more thresholds defined by decree shall carry out a cybersecurity audit, the results of which shall be presented to the consumer (…), covering the security and location of the data they host, directly or via a third party, and their own security (…).

The audit referred to in the first paragraph is carried out by audit service providers qualified by the Agence nationale de la sécurité des systèmes d’information.

(…)

The result of the audit is presented to the consumer in a legible, clear and comprehensible manner and is accompanied by a complementary presentation or expression, by means of a colour information system.”

 

The cyber-score law came into force on 1 October 2023.

The implementing decree and the order specifying its application are awaiting publication.

 

1. Who is affected by this communication obligation?

 

The scope of application is particularly broad, as it concerns (i) online platform operators as defined in Article L.111-7 of the French Consumer Code and (ii) persons providing number-independent interpersonal communications services whose activity exceeds a certain threshold set by decree. The draft decree provides for a threshold of 25 million unique visitors per month from French territory as of 2024[5]. The legislator’s aim is not to penalise very small businesses (VSEs), SMEs or innovative start-ups providing online services.

 

In concrete terms, digital platforms (marketplaces, comparison sites, search engines, social networks, etc.), messaging services and videoconferencing software intended for the general public are covered by the obligation to carry out cybersecurity audits and to communicate the results to the public, provided they exceed the threshold of 25 million unique visitors per month from French territory as of 2024.

 

 

2. How does a cybersecurity audit work?

 

The operators concerned will have to use an information systems security audit service provider (PASSI) qualified by the French National Agency for Information Systems Security (ANSSI).

The audit will be carried out by the service provider on the basis of open, freely accessible information and using non-intrusive methods, and will cover the security and location of the data. In this respect, a location within the European Union is a guarantee of data security, both in terms of the application of the GDPR and in terms of digital sovereignty.

However, data location is not the only criterion to be considered. The draft decree provides for the following control points[6]:

 

  • Organisation and governance (cyber insurance, security certification, etc.);
  • Data protection (security measures relating to data hosting, exposure of data to extraterritorial legislation, sharing of data with third parties);
  • Knowledge and control of the digital service (mapping of the information processed by the digital service and its sensitivity, mapping of service providers, existence of network partitioning mechanisms to protect the digital service from a rebound attack on shared environments);
  • Level of outsourcing (location of digital service hosting infrastructures in the EU, etc.);
  • Level of exposure on the Internet (regular security scans, implementation of a solution to protect against denial-of-service (DDoS) attacks, user identification/authentication management, etc.);
  • Security incident handling system;
  • Digital service audits (regular security audits before the digital service is implemented: audit, bug bounty, etc.);
  • Raising awareness of cyber-risks and the fight against fraud (raising awareness of cybersecurity risks, warning users of the risks of scams and fraud, recommending precautions, etc.);
  • Secure development (OWASP rules, etc.).

 

It should be noted that the control points mentioned above must already be considered by businesses as part of their GDPR compliance.

 

We will have to wait for the publication of the decree before we have an exhaustive list of the cyber security audit checkpoints.

 

3. How must the cyber-score be displayed?

 

Following the example of the “nutriscore”, the legislator stipulates that economic operators must publish a “cyberscore” on their website. The draft decree states that the marking must be displayed prominently on the home screen and that the cyberscore audit score and the date on which it was carried out must appear prominently in the online service’s legal notices.

 

Screenshot taken from the draft order setting the criteria for the application of Law 2022-309 of 3 March 2022 for the introduction of cybersecurity certification of digital platforms intended for the public.

 

The result of any cyber-audit must be clearly displayed and accessible on the operator’s website.

The aim is to enable consumers to be better informed about the protection of their online data.

 

4. What are the risks of failing to display a cyber-score?

 

In the event of failure to comply with this obligation, and in accordance with Article L131-4 of the French Consumer Code, the operator is liable to an administrative fine imposed by the DGCCRF of up to €75,000 for an individual and €375,000 for a legal entity.

In addition, a low cyber-score will inevitably damage the image of the operator concerned and reduce the confidence of users of its site.

 

***

 

In this context, it is essential for the companies concerned to put in place the appropriate technical and organisational security measures now.

The IT/Data department at Joffe & Associés can help you ensure that your platforms are compliant (GDPR compliance, securing relations with third parties, cyber-security awareness, etc.).


 

[1] According to the report, 54% of businesses said they had suffered at least one cyber attack in 2021, and 30% of cyber attacks led to the theft of personal, strategic or technical data.

[2] Senate report no. 503, p. 6: https://www.senat.fr/rap/l21-503/l21-5031.pdf

[3] Article L.111-7-3 of the Consumer Code.

[4] Article 32 of the GDPR.

[5] https://www.entreprises.gouv.fr/files/files/secteurs-d-activite/numerique/ressources/consultations/projet-decret-cyberscore.pdf

[6] https://www.entreprises.gouv.fr/files/files/secteurs-d-activite/numerique/ressources/consultations/projet-arrete-cyberscore.pdf

DIGITAL FINANCE: FINANCIAL SECTOR PLAYERS MUST ANTICIPATE THE NEW DORA REGULATION NOW

European regulation no. 2022/2554 on Digital Operational Resilience for the financial sector (“DORA“) was adopted on December 14, 2022 and will apply from January 17, 2025.

 

The aim of this regulation is to reinforce the technological security and smooth operation of the financial sector. It lays down security requirements so that financial services can withstand and recover from disruptions and threats linked to information and communication technologies (“ICT“) throughout the European Union.

It applies to a wide range of players in the financial sector and their technology partners, including credit institutions, investment firms, payment institutions, asset management companies, insurance companies and third-party ICT service providers operating in the financial services sector.

 

The DORA regulation is structured around five chapters, which lay down a set of rules with a major impact on internal security procedures and the contractual relations of players in the financial sector.

 

The main measures are as follows:

 

1° ICT risk management

 

The DORA regulation requires the adoption of internal governance and control frameworks to ensure effective and prudent management of all ICT risks.

Financial entities will also need to put in place an ICT risk management framework tailored to their activities, enabling them to deal with ICT risks quickly and efficiently.

 

As a preventive measure, they must:

 

  • Use and maintain appropriate, reliable and technologically resilient ICT systems, protocols and tools;
  • Identify all forms of ICT risk;
  • Ensure permanent monitoring and control of the operation of ICT systems and tools;
  • Implement mechanisms to detect abnormal activity;
  • Define continuous improvement processes and measures, a business continuity policy, a backup policy, and restoration and recovery procedures and methods.

 

The companies concerned will need to have the capacity and manpower to gather information on vulnerabilities, cyber threats and ICT-related incidents. As part of this, they will have to carry out post-incident reviews following major incidents that have disrupted their core activities.

 

The new regulations also require the formalization of crisis communication plans to promote responsible disclosure of major ICT-related incidents.

 

It should be noted that the regulation provides a simplified ICT risk management framework for certain small players, such as small and non-interconnected investment firms.

 

2° ICT-related incident reporting

 

Financial entities are required to formalize and implement an ICT-related incident management process for the management, classification and reporting of incidents. The DORA regulation introduces a standard methodology for classifying security incidents according to specific criteria (duration of the incident, criticality of services affected, number of clients or financial counterparts affected, etc.).
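
By way of illustration only, the classification logic could resemble the sketch below. The criteria names and thresholds are invented for the example; the actual methodology will be set by the European supervisory authorities’ technical standards.

```python
# Illustrative sketch of a DORA-style incident classification.
# The thresholds below are hypothetical, not those of the regulation.
from dataclasses import dataclass

@dataclass
class IctIncident:
    duration_hours: float            # how long the incident lasted
    critical_service_affected: bool  # was a critical service disrupted?
    clients_affected: int            # number of clients impacted

def classify(incident: IctIncident) -> str:
    """Classify an incident as 'major' (reportable) or 'minor'."""
    if (incident.critical_service_affected
            or incident.duration_hours > 24
            or incident.clients_affected > 10_000):
        return "major"  # would have to be reported to the authority
    return "minor"

print(classify(IctIncident(2.0, True, 150)))  # -> "major"
```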

 

Financial entities will be obliged to report ICT-related incidents classified as major to competent national authorities designated according to the type of financial entity (notably the ACPR and AMF in France). These notifications will have to be made within deadlines subsequently set by the European supervisory authorities.

 

In the event of a “major” incident affecting the financial interests of clients, financial entities will also have to inform the clients concerned, as soon as they become aware of the incident, of the measures taken to mitigate its effects.

 

3° Digital operational resilience testing

 

In order to assess their preparedness in the event of ICT-related incidents, and to implement corrective measures where necessary, financial sector players will need to formalize a robust digital operational resilience testing program, comprising a series of assessments, tests, methodologies, practices and tools to be applied.

 

Every three years, they will also have to carry out threat-based penetration tests, performed by independent, certified testers.

 

4° Management of ICT third-party risks

 

The DORA regulation introduces general principles to be respected by financial entities in their relations with ICT third-party service providers.

 

They will need to adopt a third-party risk strategy, and keep a record of information relating to all contractual agreements concerning the use of ICT services provided by ICT third-party service providers.

 

At least once a year, financial entities must provide the competent authorities with information on new agreements relating to the use of ICT services, and must inform them of any draft contractual agreements concerning the use of such services supporting critical functions.

It also requires companies to enter into contracts with such ICT third-party service providers only if they meet appropriate information security standards.

 

The rights and obligations between financial entities and ICT third-party service providers must be defined in a written contract, which must include the following conditions:

 

  • A clear and exhaustive description of the services provided;
  • Where the ICT services will be provided and what data will be processed;
  • Provisions on the accessibility, availability, integrity, security and protection of personal data;
  • Service level descriptions;
  • The obligation for the ICT third-party service provider to provide the financial entity with assistance in the event of an ICT incident, at no extra cost or at a cost determined ex ante;
  • The ICT third-party service provider’s obligation to cooperate fully with the competent authorities;
  • Right of termination and minimum notice period.

 

Where ICT third-party service providers supply ICT services supporting critical or important functions, contracts will need to define additional conditions including:

 

  • The provider’s obligation to cooperate in threat-based penetration testing;
  • The obligation for the service provider to implement contingency plans and put in place security measures providing an appropriate level of security;
  • Unlimited rights of access, inspection and audit by the financial entity;
  • Exit strategies, such as setting an appropriate mandatory transition period.

 

In addition, the regulation introduces an oversight framework for ICT third-party service providers deemed critical on the basis of a series of criteria (systemic effect on service provision in the event of failure, systemic importance of the financial entities dependent on the provider, degree of substitutability of the provider, etc.). These critical providers will be monitored against a set of requirements: security requirements, risk management processes, availability, continuity, governance arrangements, etc.

 

These service providers will be assessed by the supervisory bodies, which will have the power to request information, carry out general inspections and on-site checks, and make recommendations.

 

5° Information-sharing

 

The DORA regulation introduces guidelines for the exchange of information between financial entities on cyber threats. These exchanges should aim to improve the digital operational resilience of financial entities in particular, and should be carried out in full respect of business confidentiality.  In addition, financial entities will be required to notify the competent authorities when participating in information exchange schemes.

 

Lastly, the regulation provides for the various competent authorities to have powers of supervision, investigation and sanction in the event of non-compliance with its provisions.

 

The Member States will be responsible for laying down the rules providing for administrative sanctions and appropriate remedies in the event of a breach of the DORA regulation, and for ensuring their effective implementation. It should be noted that, unlike the GDPR, the DORA regulation does not provide for a ceiling in the event of a pecuniary penalty but requires that penalties be “effective, proportionate and dissuasive“.

 

Our IT-Digital and Data team at Joffe & Associés is at your disposal to support you in your compliance process in order to best anticipate the implementation of this regulation, particularly when negotiating contracts with ICT service providers but also to audit current contracts. Note that the DORA regulation has a broader scope than the French decree of November 3, 2014.

ADOPTION OF THE ARTIFICIAL INTELLIGENCE ACT BY THE EUROPEAN PARLIAMENT: WHAT DOES IT MEAN?

On Wednesday 14 June 2023, the European Parliament adopted the Artificial Intelligence Act (“AI Act”), a regulation governing the development and use of artificial intelligence (AI) within the European Union. The text, which is said to hold the record for the number of legislative amendments, is now being discussed by the Member States in the Council, with the aim of reaching an agreement by the end of the year.

 

While the date on which the AI Act will come into force remains uncertain, companies involved in the AI sector have every interest in anticipating this future regulation.

 

What are the main measures?

 

Objectives

 

The regulation harmonises Member States’ legislation on AI systems, thereby providing legal certainty that is conducive to innovation and investment in this field. The text is intended to be protective but balanced, so as not to hinder the development of the innovation needed to meet the challenges of the future (the fight against climate change, the environment, health).

 

Like the General Data Protection Regulation (GDPR), which follows the same logic throughout its articles, the AI Act sets itself up as a global benchmark.

 

The scope of application is deliberately broad in order to avoid any circumvention of the regulations. It applies both to AI suppliers (who develop or have developed an AI system with a view to placing it on the market or putting it into service under their own name or brand) and to users (who use an AI system under their own authority, except where the system is used in the context of a personal non-professional activity).

 

In practical terms, it applies to:

  • suppliers, established in the EU or in a third country, who place AI systems on the market or put them into service in the EU;
  • users of AI systems located in the EU;
  • suppliers and users of AI systems located in a third country, where the results generated by the system are used in the EU.

 

A risk-based approach

 

Artificial intelligence is defined as the ability to generate results such as content, predictions, recommendations or decisions that influence the environment with which the system interacts, whether in a physical or digital dimension. The regulation adopts a risk-based approach and distinguishes between uses of AI that create an unacceptable risk, a high risk and a low or minimal risk.

 

 

Regarding high-risk AI systems:

 

The following minimum requirements must be met:

 

  • Establish a risk management system: a continuous iterative process running throughout the life cycle of a high-risk AI system, which must be periodically and methodically updated.
  • Ensure the quality of the datasets: the training, validation and test datasets will have to meet quality criteria and, in particular, be relevant, representative, error-free and complete, the aim being to avoid “algorithmic discrimination”.
  • Formalise technical documentation: technical documentation containing all the information needed to assess the conformity of a high-risk AI system must be drawn up and kept up to date before the system is placed on the market or put into service.
  • Provide for traceability: the design and development of high-risk AI systems should include features for the automatic recording of events (“logs”) during the operation of these systems.
  • Provide transparent information: high-risk AI systems must be accompanied by a user manual containing information on the characteristics of the AI (identity and contact details of the supplier, characteristics, capabilities and performance limits of the AI system, human control measures, etc.) that is accessible and understandable to users.
  • Provide for human control: effective control by natural persons must be ensured throughout the period of use of the AI system.
  • Ensure system security: high-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness and cybersecurity, and to operate consistently in this respect throughout their life cycle.

 

All players in the supply chain – suppliers, importers and distributors alike – are subject to these obligations, so everyone will have to assume their responsibilities and be even more vigilant.

 

In particular, suppliers must:

  • demonstrate compliance with the above minimum requirements by maintaining technical documentation;
  • subject their AI systems to a conformity assessment procedure before they are placed on the market or put into service;
  • take the necessary corrective measures to bring the AI system into compliance, withdraw it or recall it;
  • cooperate with national authorities;
  • notify serious incidents and malfunctions involving a high-risk AI system placed on the market to the supervisory authorities of the Member State where the incident occurred, no later than 15 days after the supplier becomes aware of the serious incident or malfunction.

It should be noted that these obligations also apply to the manufacturer of a product that incorporates a high-risk AI system.

 

  • The importer of a high-risk AI system will have to ensure that the supplier of this AI system has followed the appropriate conformity assessment procedure, that the technical documentation is established and that the system bears the required conformity marking and is accompanied by the required documentation and instructions for use.
  • Distributors will also have to check that the high-risk AI system they intend to place on the market bears the required CE conformity marking, that it is accompanied by the required documentation and instructions for use, and that the supplier and importer of the system, as the case may be, have complied with their obligations.

 

Enforcement and governance

 

At national level, the Member States will have to designate one or more competent national authorities, including the national supervisory authority responsible for monitoring the application and implementation of the Regulation.

 

A European Artificial Intelligence Committee (made up of the national supervisory authorities) will be set up to provide advice and assistance to the European Commission, in particular on the consistent application of the Regulation within the EU. Notified bodies will carry out the conformity assessment of AI systems. Notified bodies should be designated by the competent national authorities, provided that they comply with a set of requirements relating in particular to their independence, competence and absence of conflicts of interest.

 

 

Support for SMEs and start-ups through the establishment of AI regulatory sandboxes and other measures to reduce the regulatory burden

 

Regulatory AI sandboxes will provide a controlled environment to facilitate the development, testing and validation of innovative AI systems for a limited time before they are brought to market or commissioned according to a specific plan.

 

Penalties

 

The AI Act provides for three penalty ceilings depending on the nature of the offence (a worked sketch of these ceilings follows the list below):

 

  • Administrative fines of up to €30,000,000 or, if the offender is a company, up to 6% of its total worldwide annual turnover in the previous financial year for:

— non-compliance with the ban on artificial intelligence practices;

— non-compliance of the AI system with the requirements relating to data quality criteria.

  • Failure of the AI system to comply with the requirements or obligations of the other provisions of the AI Act will be subject to an administrative fine of up to €20,000,000 or, if the offender is a company, up to 4% of its total worldwide annual turnover in the previous financial year.
  • Providing incorrect, incomplete or misleading information to notified bodies and competent national authorities in response to a request is subject to an administrative fine of up to €10,000,000 or, if the offender is an undertaking, up to 2% of its total worldwide annual turnover in the preceding business year, whichever is the greater.
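
To make these ceilings concrete, the sketch below computes the applicable maximum for a given company. It assumes that the higher of the fixed amount and the turnover-based amount applies to each tier, as the text states expressly for the €10,000,000 / 2% tier.

```python
# Worked sketch of the AI Act penalty ceilings listed above, assuming
# the "whichever is the greater" rule applies to each tier (stated
# expressly in the text only for the third tier).
def fine_ceiling(turnover_eur: float, tier: int) -> float:
    """Maximum administrative fine for a company, by offence tier."""
    tiers = {
        1: (30_000_000, 0.06),  # prohibited AI practices / data quality
        2: (20_000_000, 0.04),  # other requirements of the AI Act
        3: (10_000_000, 0.02),  # misleading information to authorities
    }
    fixed_cap, turnover_share = tiers[tier]
    return max(fixed_cap, turnover_share * turnover_eur)

# A company with a €1bn worldwide turnover: the tier-1 ceiling is €60m,
# since 6% of turnover exceeds the €30m fixed amount.
print(fine_ceiling(1_000_000_000, 1))  # -> 60000000.0
```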

Influencers under regulatory scrutiny

Law no. 2023-451 of June 9, 2023 aimed at regulating commercial influence and combating the abuses of influencers on social networks was published in the Journal Officiel on June 10.

 

New definitions

 

The law now defines influencers as “natural or legal persons who, for remuneration, communicate content to the public by electronic means with a view to promoting, directly or indirectly, goods, services or any cause whatsoever, engage in the activity of commercial influence by electronic means”, as well as the activity of influencer agent, which consists of “representing or putting in contact, for remuneration” persons engaging in the activity of commercial influence.

 

Certain activities prohibited or more tightly supervised, and in all cases an obligation of transparency

While influencers must already comply with existing legal provisions governing advertising practices for product placements, they must also refrain from any direct or indirect promotion of medicinal treatments, cosmetic surgery, alcoholic or nicotine-containing products, certain financial products and services (notably crypto-currencies), sports betting subscriptions or products involving wild animals. They will also have to comply with provisions governing the promotion of gambling.

 

In addition, to better inform subscribers and young users of social networks, influencers will have to indicate, in a clear, legible and identifiable manner, the terms “advertising” or “commercial collaboration” in the case of partnerships, and “retouched images” or “virtual images” on their photos and videos affected by the use of filters or artificial intelligence processes.

 

Greater responsibility for influencers to combat drop-shipping

 

In order to adapt to the dropshipping phenomenon, influencers will henceforth be fully liable to buyers, within the meaning of article 15 of the LCEN, for the products they sell on their social networks. They will therefore have to provide the buyer with the information stipulated in article L. 221-5 of the French Consumer Code, as well as the identity of the supplier, and ensure that the products are available and legal, in particular that they are not counterfeit.

 

More formal contracts, including for influencers based abroad

 

Influencers will have to formalize written contracts with their agents and advertisers, when the sums involved exceed a certain threshold, to be defined within an implementing decree. These contracts will have to include several mandatory clauses (concerning, for example, remuneration conditions, submission to French law, missions entrusted, etc.). The law also stipulates that the advertiser, its agent and the influencer will be “jointly and severally liable for any damage caused to third parties in the performance of the influencing contract between them”.

 

These obligations apply to all influencers targeting a French audience, including foreign-based influencers. The latter will be required to designate a legal or natural person within the European Union who will be criminally liable in the event of an infringement. The text also requires influencers operating outside the European Union or the European Economic Area to take out civil liability insurance in the Union.

 

As for the platforms hosting influencer content, they must allow Internet users to report any content that does not comply with the new provisions on commercial influence.

 

Greater powers for the DGCCRF

 

In addition to its supervisory role, the DGCCRF (Direction Générale de la Concurrence, de la Consommation et de la Répression des Fraudes) now has enhanced powers to impose injunctions, fines and formal notices against influencers. The DGCCRF has set up a 15-strong commercial influence unit.

 

In the event of infringement of the obligations laid down in this text, influencers risk up to 2 years’ imprisonment and a fine of 300,000 euros, and may be banned from exercising their profession.

 

They may also be banned, permanently or temporarily, “from exercising the professional or social activity in the exercise or on the occasion of the exercise of which the offence was committed”.

 

 

Publication of a guide to good conduct

 

In order to assist influencers in bringing their content and activities into compliance, the government has published a guide to good conduct. The sector is now awaiting the implementing decrees, which should provide details of the changes made for the activity of content creators.

 


Article by Véronique Dahan, Emilie de Vaucresson, Thomas Lepeytre and Romain Soiron.

PUBLICATION OF A FRENCH DECREE ON ELECTRONIC TERMINATION OF CONTRACTS

Article L. 215-1-1 of the Consumer Code, introduced by the law of 16 August 2022 on emergency measures to protect purchasing power, creates an obligation to facilitate the electronic termination of contracts.

 

French decree no. 2023-417, published on 31 May 2023 and in force since 1 June 2023, sets out the terms and conditions for terminating contracts electronically.

 

It requires professionals to provide fast, easy, direct and permanent access enabling consumers and non-professionals to notify a professional of the termination of a contract.

 

In concrete terms, this functionality must be presented as “terminate your contract” or a similar unambiguous wording and be directly and easily accessible on the interface from which the consumer can conclude contracts electronically. The professional may include a reminder of the information on cancellation conditions, but must refrain from requiring the consumer to create a personal space in order to access it.

 

This termination feature must also include sections enabling consumers to provide the professional with information proving their identity, identifying the contract and, where appropriate, justifying any legitimate grounds for their request for early termination. In such cases, the professional must provide a postal address and an e-mail address, or include a feature for sending proof of the legitimate grounds.

Finally, the decree stipulates that once these sections have been completed, consumers must be able to access a page presenting a summary of the termination, enabling them to check and amend the information provided before notifying their request.

 

As a reminder, any failure to comply with the provisions of this article L. 215-1-1 is punishable by an administrative fine of up to €15,000 for an individual and €75,000 for a legal entity.

 


 

Article by Emilie de Vaucresson, Amanda Dubarry and Camille Leflour.

Data transfers to the United States – Record €1.2 billion fine for Meta Ireland

Article written by Emilie de Vaucresson, Amanda Dubarry and Camille Leflour.

 

On 22 May 2023, the Irish Data Protection Commission (the “DPC”), acting as lead supervisory authority, announced that it had fined Meta Ireland a record €1.2 billion for violating Article 46(1) of the GDPR by transferring personal data to the U.S. without implementing appropriate safeguards.

 

Since the invalidation of the Privacy Shield, Meta Ireland had been implementing these transfers on the basis of the standard contractual clauses, in conjunction with additional measures that the DPC considered insufficient in light of the risks to the rights and freedoms of data subjects. The data of its European users is indeed stored in the United States, exposing them to potential surveillance by the US authorities.

 

The investigation was initially launched in August 2020 as part of a cooperation procedure. The draft decision prepared by the DPC was then submitted to its counterpart regulators in the EU/EEA, some of which objected to it, and the matter was referred to the European Data Protection Board (the “EDPB”).

 

On the basis of the EDPB’s decision, the DPC adopted the final decision under which Meta Ireland is required:

  • to suspend any future transfers of personal data to the United States within 5 months from the date of notification of the decision to Meta Ireland;
  • to pay an administrative fine of €1.2 billion – the highest fine ever imposed under the GDPR – justified by the seriousness of the alleged breaches by Facebook’s parent company, which has millions of users in Europe, involving a huge volume of data transferred in violation of the GDPR; and
  • to bring its processing operations into compliance with the GDPR by ceasing the unlawful processing, including storage, in the United States of personal data of EU/EEA users transferred without safeguards, within 6 months from the date of notification of the DPC’s decision to Meta Ireland.

In the words of Andrea Jelinek, Chair of the EDPB, “this sanction is a strong signal to organizations that serious breaches have considerable consequences”. Indeed, it comes in a context of increasing scrutiny of the GAFAMs, this being the fourth fine imposed on Meta Ireland in six months.

 

For its part, Meta Ireland describes this fine as “unjustified and unnecessary” and intends to apply to the courts for its suspension. In this context, the social network hopes that the European Commission will soon adopt the new draft adequacy decision for data transfers to the United States.

 

For the time being, as long as no agreement has been reached between Europe and the United States on the framework for data flows, we would remind you that merely signing the standard contractual clauses is not sufficient to ensure a GDPR-compliant data transfer. It is necessary to verify that the data recipient in the United States has implemented additional safeguards ensuring the confidentiality of the data and preventing access by the American authorities.

WHAT COPYRIGHT ON (AND AGAINST) THE CREATIONS OF ARTIFICIAL INTELLIGENCE?

Read the original article in “Village de la Justice”, written by Véronique Dahan and Jérémie Leroy-Ringuet here.

 

 

 

ChatGPT, Dall-E 2, Stable Diffusion… Are the creations of artificial intelligence protectable works? Who could claim to be their author? And above all, do the authors of pre-existing works have rights against the use of their style and their works by AI?

 

The use of artificial intelligence (AI) by companies, especially for their communications, is becoming increasingly widespread. Software such as Stable Diffusion, Midjourney and Craiyon, and especially Dall-E 2, developed by OpenAI and launched in 2022, makes it possible to create images from natural-language instructions (text-to-image). It is also possible to create music or text in the same way, for example by asking a program to write a description of a landscape of fjords at sunset, with tools such as ChatGPT, launched in November 2022 by OpenAI.

 

Beyond their recreational appeal, the possible artistic or professional applications of such software are quite varied: illustration of an article, creation of a brand, a logo, a slogan, a jingle, texts for a website, for an advertising medium or for a post on social networks, etc., and soon perhaps complex literary works or films. Artists have seized on these tools to develop an art form known as AI Art, Prompt Art or GANism (a reference to generative adversarial networks), sometimes transforming the results obtained into NFTs [1].

 

AI can thus be of significant help, whether by providing ready-to-use content or simple starting ideas to be developed by “human” means or with other, more “traditional” software. The image, text or group of words obtained with an economy of time and effort can then be reworked and perfected, since the results are still sometimes imperfect.

 

In order to produce a custom image, the software needs to be fed with pre-existing images and metadata about those images (so-called “deep learning”).
For example, in order to create an image of the Mona Lisa in the style of Van Gogh, the software needs to be fed with 1° images reproducing Leonardo da Vinci’s Mona Lisa, 2° information that these images represent the Mona Lisa, 3° images of Van Gogh’s paintings and 4° information that these images represent Van Gogh’s paintings. The more reliable information the software has, the more convincing the result will be (a schematic sketch of this idea follows the image below):

 

 

(Image created with Stable Diffusion.)
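
As a purely schematic sketch of the point made above, the toy example below pairs images with textual metadata and retrieves them by label. The file names and labels are invented; a real generator learns statistical associations between labels and visual features rather than performing literal look-ups of this kind.

```python
# Toy sketch: a generator is "fed" with images plus metadata, and a
# prompt recombines what was learned from the matching sources.
# File names and labels are invented for the illustration.
dataset = [
    {"image": "mona_lisa_01.jpg", "labels": {"Mona Lisa", "Leonardo da Vinci"}},
    {"image": "starry_night.jpg", "labels": {"Van Gogh", "night sky"}},
    {"image": "arlesienne.jpg",   "labels": {"Van Gogh", "portrait"}},
]

def sources_for(prompt_terms: set) -> list:
    """Return the training images whose metadata matches the prompt."""
    return [ex["image"] for ex in dataset if ex["labels"] & prompt_terms]

# "Mona Lisa in the style of Van Gogh" draws on both groups of sources.
print(sources_for({"Mona Lisa", "Van Gogh"}))
```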

 

 

It would also be possible, for example, to create images that do not incorporate pre-existing works but refer generally to the style of artists whose works are either in the public domain or still protected (i.e., whose author is alive, or has been dead for less than seventy years), such as an image of a Jeff Koons-style sculpture.

 

The same principle applies to texts: if one asks a text generator to create a Shakespearean dialogue between two tax lawyers who meet in front of a London Underground station and talk about Brexit, the text will reproduce the English archaisms typical of Elizabethan theater.

 

Like any technological novelty, the use of such software raises many legal questions.
The purpose of this article is to answer in particular the central question: who owns the rights (if any) on the content generated by AI?

 

 

Under French law, a work is protectable if it is original. Originality is defined as revealing the imprint of the personality of the author, who can only be a human being. It is therefore necessary to determine who is the author, or who are the authors of an image or a text created via an instruction given to a software. It is also necessary to determine who can be the owner of the rights since a person who is not the author can be, by the effect of the law, the contract or by presumption, owner of the exploitation rights of the work.

 

In the process of creating a version of Leonardo da Vinci’s Mona Lisa in the style of Van Gogh, several persons may, voluntarily or not, have contributed to the creation of the image (as authors or co-authors) or may own the rights:

  • The authors of the pre-existing works, i.e. Leonardo da Vinci and Vincent Van Gogh,
  • We ourselves, when we gave as instruction to the software: “Mona Lisa in the style of Van Gogh”,
  • The author of the software Stable Diffusion and the company operating the site Stable Diffusion.

 

 

The rights of the software operators (Stable Diffusion, Dall-E 2, Midjourney…).

 

The entities operating the sites of Stable Diffusion, Dall-E 2, etc. claim in their general terms and conditions ownership of the rights attached to their software. They are thus able to authorise or prohibit the use that Internet users make of their software.

 

This software contributes to the process of obtaining new texts and images: in our example, it is the image generator that selected a bluish nocturnal atmosphere with the spirals of The Starry Night rather than, say, the green and yellow setting of Wheat Field with Cypresses, which would have been equally and perhaps better suited. We can also note that the software chose to raise Mona Lisa’s right arm, as in L’Arlésienne (Madame Ginoux) or the Portrait of Dr. Gachet, and to seat her on a chair whose colour and ornamentation evoke Gauguin’s Chair.

 

This is not a case of purely passive participation (like that of a paintbrush for a painter or a word processor for a writer): it is precisely this degree of “autonomy” in AI software that unsettles the traditional conception of copyright. Nevertheless, the software’s contribution is automated and, in our view, the technical use of software to create an image or text does not give the owner of the software any rights over that image or text: in the absence of human intervention in the choice of colours and shapes, no copyright or co-authorship can be claimed on behalf of the software operator.

 

The terms of use of these text and image generators confirm this. In the case of Dall-E 2, the terms of use expressly state that OpenAI transfers to the user all rights to the texts and images obtained, and even request that the content be attributed to the person who “created” it or to their company.
Stable Diffusion does the same by granting a perpetual, worldwide, non-exclusive, free, royalty-free and irrevocable copyright licence for all types of use, including commercial use. But in the absence, in our view, of any transferable rights, these provisions seem to us to be mere precautions.

 

Other sites, such as Craiyon, do not provide for a transfer of rights to the user over the results obtained but merely frame the use of the software, providing for specific licences for commercial uses of the images created. Whether these licences are paid for depends on the turnover of the company using the images created on the site. We understand that, for Craiyon, this is less about defining the contours of a copyright assignment than about monetising software that represented an investment for the company.

It is therefore essential for anyone wishing to use images created via AI tools, commercially or otherwise, to check whether, and under what conditions, the company operating the site where the images are created grants them rights, even where those conditions do not concern ownership of rights in the content itself.

 

 

The rights of the person using the software.

 

Since the creative contribution of the person giving instructions to the image or text generator is limited to producing an idea implemented by the software, and since ideas are not protectable by copyright, it is doubtful that this person would be recognized as the author.

 

This is all the more true since, when an instruction is given to the software, the result is unknown until it appears on the screen, and even very precise instructions can give very different results – as would also be the case if the instructions were given to human beings. Since the user of the software does not mentally design the resulting image in advance, it is difficult to argue that the image bears the "imprint of his or her personality".

 

This is particularly evident in the case of succinct instructions or instructions containing abstract terms.
Thus, the results we obtained on Dall-E 2 with the instruction "the unbearable lightness of being" included images that certainly evoked lightness, but that were as visually different – and thus as unexpected and disconnected from our "personalities" – as the following:

 

 

(Creations by Dall·E 2)

 

But above all, one could go as far as to deny images and texts created by AI the qualification of works of the mind. Indeed, while the French Intellectual Property Code (CPI) does not define what a work is, it grants copyright protection only to "works of the mind" created by humans. In the absence of a positive creative action by a human between the time the instructions are given and the time the results appear on the screen, it could be argued that no "mind" is mobilized, and therefore no copyrightable "work of the mind" is created. For this reason, the authors of such software and the companies operating it could not claim to be authors or co-authors.

 

If they are not "works of the mind", the texts and images created by AI would be intangible goods under ordinary law, like other non-original creations. They can be appropriated not by copyright (which arises from the sole fact of creation, Article L. 111-1 of the CPI) but by possession (Article 2276 of the Civil Code) or by contract (terms and conditions granting ownership to the user).

 

These would then be rights-free creations belonging to the public domain, even though they could have been considered original and protectable had they been created by human hands.
This echoes other types of authorless "works", such as the paintings of Congo the chimpanzee or the famous selfies taken in 2011 by a macaque. The U.S. courts ruled that a self-portrait taken by a monkey that grabbed a camera and clicked the shutter was not a protectable work because it was not created by a human, the only subject of rights.
Had the question been brought before a French court, it would almost certainly have held that this selfie is not even a "work of the mind" within the meaning of the CPI.

 

On the other hand, as soon as the result obtained is reworked and a formal personal contribution transforms it, the qualification of work of the mind can be retained, but only because of the original modification made to the result produced by the software. This case is also provided for in the Dall-E 2 Sharing & Publication Policy, which asks users who modify the results obtained not to present them as having been entirely produced by the software or entirely produced by a human being – more an ethical rule of transparency than a legal requirement.

 

The US Copyright Office [2] has recently published guidelines to this effect, with a clearly legal scope: it announces that it will refuse protection for content, or parts of works, created exclusively by AI and will grant it, where appropriate, only for the elements in which a human being has intervened [3].

 

 

The rights of authors of pre-existing works.

 

In French law, a new work that incorporates a pre-existing work without the participation of its author is said to be "composite" [4]. If the pre-existing works are in the public domain, such as those of da Vinci and Van Gogh, their free use is permitted (subject to the right holders' possible assertion of moral rights). On the other hand, incorporating a still-protected pre-existing work without authorization constitutes an act of infringement.

 

In our example, however, we consider that the Mona Lisa in the style of Van Gogh cannot be qualified as a composite work, since it cannot be a "work of the mind". This does not mean that the authors of pre-existing works have no rights over, or against, the texts or images created by reusing their styles or their works.

 

Indeed, if we replace our Mona Lisa with an image obtained by entering, for example, the instruction "Guernica by Picasso in colors", we obtain an image that incorporates and modifies a pre-existing work. Picasso's works are not in the public domain. The painter's heirs therefore have rights over any image created in this way. They must be able to authorize or prohibit not only the exploitation of the image obtained, and request its destruction, but perhaps also the use of Picasso's works by the software – which, let us recall, draws on its "knowledge" of a considerable number of images, necessarily including reproductions of Picasso's works, in order to respond to the instructions it is given.

 

The production and publication by a user of a "Guernica in color" could therefore constitute an infringement; but the integration of Guernica into the software's database could also, by itself, constitute an infringing act.

 

Indeed, sites offering AI image generators fed with protected works could theoretically be considered infringers under the CPI, which punishes "publishing, making available to the public or communicating to the public, knowingly and in any form whatsoever, software obviously intended for the unauthorized making available to the public of protected works or objects" [5].
Both the "obvious" nature of the making available and the qualification of "making available" itself are open to debate.

 

But it is mainly Directive 2019/790 of April 17, 2019 that comes to the aid of operators of image and text generators by securing their use of protected pre-existing works.
The directive imposed a European harmonization of the "text and data mining" exception (Articles 3 et seq.). It provides a framework for the exploitation of protected works for any purpose, including commercial purposes, in order to extract information from them, which covers text and image generators. But the directive also allows the holders of rights in these works to authorize or prohibit their use, except for academic purposes. Such authorization can hardly be given in advance, and operators – OpenAI, for example – are therefore setting up procedures for reporting the creation of infringing content (Article 3(d) of OpenAI's terms and conditions). But artists are already complaining about the difficulty of obtaining such removal when faced with a profusion of images imitating their style; some have noticed that the Internet offers more AI-created images imitating their style than images of their own works [6].

 

The operators of such software could therefore be found liable for infringement, possibly on the basis of Article L. 335-2-1 of the CPI, where the holders of rights in works have requested their removal and the operators have not complied. They could also have to compensate the users of the texts and images thus produced, since those users cannot be expected to know whether a right holder has exercised an "opt-out".

 

The risk represented by the incorporation of pre-existing works has thus been anticipated and assumed by certain players, such as Adobe, which plans to compensate customers who have purchased AI-created images in the event of a claim by the authors or right holders [7].

 

 

Imitating the style of the authors of pre-existing works: an infringing act?

 

Authors of pre-existing works can be harmed by the multiplication of texts imitating their style, or of images representing "works" that they could have conceived but did not create, like our Mona Lisa imitating the style of Van Gogh, who never painted it. The artists thus imitated are mobilizing, launching slogans such as #SupportHumanArtists. On what basis could they oppose the creation of this type of content, and what are the risks in producing such texts or images?

 

The basis of artistic forgery seems to be ruled out.
Artistic forgeries are sanctioned in French law by the "Bardoux" law of February 9, 1895, which is still in force. They are distinguished from infringements within the meaning of the CPI in that they are not the unauthorized reproduction of a pre-existing, protected work but the imitation of a style, in order to attribute to an author a work that he did not create, or to associate his style with a work whose market value is much lower than that of a work by the author's own hand.

 

Strictly speaking, however, the image of a 3D balloon imitating the style of Jeff Koons, or of a painting in the style of Frida Kahlo, is not an artistic forgery, since it is only the digital representation of a fake that does not exist in reality; moreover, photographs are not covered by the law of February 9, 1895. Above all, the qualification of artistic forgery is excluded because the statute, being penal in nature and therefore strictly interpreted, punishes the affixing of a usurped name to a work and the imitation of an author's signature. It does not, therefore, prohibit the making of images "in the manner of" an artist.

 

Infringement is also an imperfect basis. Strictly speaking, producing an image of a balloon "in the style of" Jeff Koons and presenting it as such might not constitute infringement, because the image does not reproduce a pre-existing work.
The work created "in the style of" is therefore neither an artistic forgery nor an infringement [8]: there is infringement only where there is "not simply an imitation of the processes, genre or style of an artist, but a copy of a specific work by that artist" [9].

 

Thus, as Professor Alexandra Bensamoun [10] reminds us, the most appropriate basis seems to us to be ordinary civil liability under Article 1240 of the Civil Code, on which a court could order the "creators" of these texts and images imitating the style of living authors to compensate the moral harm those authors have suffered, or even economic harm in specific cases of parasitic use of the style of an author of protected works.

 

 

To conclude.

 

As we can see, the irruption of AI creations unsettles intellectual property law, whose tools are insufficient to answer the questions raised. But the questions are not only legal. AI is now capable of beating world champions at chess or go.

 

We can imagine that AI will one day be able to produce "fake" Camille Claudel sculptures using 3D printing technology, or to make Rimbaud or Mozart "write" poems and symphonies of an artistic level approaching or equal to what they could have written had they not died so young. A possible future of art could lie in the dehumanization of creation, which would not only make it indispensable to amend Book I of the CPI (which could happen under the impetus of the European regulation on AI currently under discussion, the "AI Act" [11]) but would also raise ethical questions.

 

If the public takes as much pleasure in reading a novel written by a machine, or in admiring an exhibition of pictorial works created by software [12], as in human creations, will the artistic professions survive this competition?

 


Article Notes:

 

[1] « Intelligence artificielle : ces artistes qui en font leur big data », Libération, 30 December 2022.

[4] Article L. 113-1 of the CPI.

[5] Article L. 335-2-1 of the CPI.

[6] « Illustrateurs et photographes concurrencés par l’intelligence artificielle : ‘‘Il n’y a aucune éthique’’ », Libération, 29 December 2022.

[8] Laurent Saenko and Hervé Temime, « Quel droit pénal pour le marché de l’art de demain ? », AJ Pénal 2020, p. 108; Christophe Caron, « Droit d’auteur – la « contrefaçon hommage » », Communication Commerce électronique n° 7-8, July 2021.

[9] Cour d’appel de Paris, 9 June 1973, JCP 1974, II, 17883.

[10] « Intelligence artificielle : ‘‘Le droit d’auteur protège une création précise, mais pas une manière de créer’’ », Libération, 31 December 2022, interview by Clémentine Mercier.

IP NEWSLETTER MARCH 2023: COPYRIGHT PROTECTION OF AI-GENERATED CONTENT

Download our newsletter in French here.

 

 

The U.S. Copyright Office has just published its guidelines for Artificial Intelligence and copyright protection.

 

Artificial intelligence: the technologies described as "generative AI" raise the central issue of copyright protection for the content they produce.

 

Since companies such as OpenAI (Dall-E), Stability AI and Midjourney started publishing AI-based text and image generators in late 2022, US copyright applications for works using AI have increased dramatically. In response to this craze, the US Copyright Office (USCO) recently issued guidelines for copyright protection of AI-enabled works.

 

 

Are these creations copyrightable?

 

Last year, author Kris Kashtanova claimed to be the first person to have been granted a copyright for a work created via AI. Indeed, the application to register her comic book “Zarya of the Dawn”, whose images were created exclusively by AI, was approved by the USCO.

 

The USCO then reconsidered its decision and asked for additional information since the images were created with the help of Midjourney.

 

After re-examining the file, the USCO decided to cancel its decision to grant copyright and issued a new modified certificate:

 

➡ the elements created by Kris Kashtanova, namely the writing, will be protected by copyright;

➡ on the other hand, the images generated by the AI are not copyrightable, as only human creations can be copyrighted.

 

Thus, a single work can be subject to a partial protection regime depending on the different sources of its creation.

 

 

In this case, the images were generated entirely by the AI. In other cases, however, a human might select, adapt and arrange the content generated by the AI in a sufficiently creative way that the resulting work, as a whole, constitutes an original work protectable by copyright.

 

The USCO will therefore carry out a case-by-case assessment. Applicants who submit their works for registration in the US will have to be precise in explaining their creative process (how was the AI used? for which part of the work? …).

 

 

Zarya of the Dawn, by Kris Kashtanova.

CHATGPT, MIDJOURNEY, FLOW MACHINES…: WHAT COPYRIGHT FOR GENERATIVE AI CREATIONS?

Faced with the onslaught of creative and generative AIs, copyright law is shaken on its traditional foundations. The qualification of "work of the mind" stumbles over these dehumanized robots. The Intellectual Property Code risks being left at a loss, unless it is rewritten.

 

The use of artificial intelligence (AI) by companies, especially in communications, is becoming increasingly widespread. Software such as Stable Diffusion, Midjourney, Craiyon or Dall-E 2 can create images from natural-language instructions (text-to-image). It is also possible to create text with tools such as ChatGPT, a conversational agent launched in November 2022 by OpenAI (1), or even music with Sony's Flow Machines (2).

 

 

Artistic blurring of copyright

 

The uses are quite varied: illustrating a newspaper, creating a brand, writing texts for a website, an advertising medium or a social media post, musical creation, publication of a complex literary work…, and soon producing films. Artists have seized upon it to develop an art form called "AI art", "prompt art" or "GANism" (3). And sometimes artists transform the results obtained into NFTs (4), those non-fungible tokens authenticating a unique digital asset on a blockchain. To produce a text, image or piece of music on command, the software needs to be fed with pre-existing texts, images or music and with metadata about these contents ("deep learning"). The more reliable information the software has, the more convincing the result will be. As with any technological innovation, the use of such software raises many legal issues. The central question in terms of intellectual property is: who owns the rights – if any exist – to the content generated by AI?

 

Under French law, a work is protectable if it is original. Originality is defined as revealing the imprint of the personality of the author, who can only be a human being. It is therefore necessary to determine who is the author, or who are the authors, of an image, text or piece of music created through an instruction given to software. It is also necessary to determine who can own the rights. It could be the authors of pre-existing works, ourselves when we give an instruction to the software, or the author of the software (for example, the company Stability AI, which develops Stable Diffusion). The entities operating this software contribute to the process of obtaining unpublished texts, images or music, insofar as it is these content generators that propose a result embodying one set of choices rather than another.

 

Thus, it is the "autonomy" of AI software that throws the traditional conception of copyright into disarray. A court in Shenzhen, China, ruled in 2019 that a financial article written by Dreamwriter (an AI developed by Tencent in 2015) had been reproduced without permission, thereby recognizing that an AI's creation could benefit from copyright. Nevertheless, the software's contribution is automated and, in our view, the technical use of software to create an image, text or music gives the owner of the software no rights in that image, text or music: in the absence of human intervention in the choice of colors, shapes or sounds, no copyright or co-authorship can be claimed on behalf of the software.

 

On February 21, 2023, in the United States, the Copyright Office decided that cartoon images created by the Midjourney AI could not be protected by copyright (5). The terms of use of these text, image or music generators confirm this. In the case of Dall-E 2, the "Terms of use" expressly state that OpenAI assigns to the user all rights in the texts and images obtained, and even ask that the content thus generated be attributed to the person who "created" it or to their company. Stability AI grants a perpetual, worldwide, non-exclusive, royalty-free and irrevocable copyright license for all types of use of the content generated with Stable Diffusion, including commercial use. But since, in our opinion, there are no transferable rights in the first place, these provisions seem to be mere precautions.

 

 

Rights of the person using the software

 

It is therefore essential for anyone who wishes to use, commercially or not, content created via generative or creative AI tools to check whether, and under what conditions, the company operating the online site where the content is created grants them rights. Since the creative contribution of the person giving instructions to the image, text or music generator is limited to producing an idea implemented by the software, and since ideas are not protectable by copyright, it is doubtful that a court would recognize that person as an author. Since the user of the software does not mentally conceive the resulting content in advance, it is difficult to argue that this content bears the "imprint of his personality". But above all, one could go as far as to deny the images, texts or music created by the AI the qualification of works of the mind. Indeed, the French Intellectual Property Code (CPI) grants copyright protection only to "works of the mind" created by humans.

 

 

“Work of the mind” inherent to the human being

 

In the absence of a positive creative action on the part of a human, one could argue that no "mind" is mobilized, and therefore no "work of the mind" protectable by copyright is created. If they are not "works of the mind", the contents thus created would be intangible goods under ordinary law. They can be appropriated not by copyright (6) but by possession (7) or by contract (terms and conditions granting ownership to the user). They are then rights-free creations belonging to the public domain. This echoes other types of authorless "works", such as the paintings of Congo the chimpanzee or the famous selfies taken in 2011 by a macaque. In the latter case, the American courts decided that the self-portrait taken by a monkey was not a protectable work because it was not created by a human, the only subject of rights. On the other hand, as soon as the result obtained is reworked and a formal personal contribution transforms it, the qualification of "work of the mind" can be retained, but only because of the original modification made to the result produced by the software.

 

This case is moreover provided for in Dall-E 2's "Sharing & Publication Policy", which asks users who modify the results obtained not to present them as having been entirely produced by the software or entirely produced by a human being – more an ethical rule of transparency than a legal requirement. In French law, a new work that incorporates a pre-existing work without the participation of its author is said to be "composite" (8). If the pre-existing works are in the public domain, their free use is allowed (subject to the right holders' possible assertion of moral rights). On the other hand, incorporating a still-protected pre-existing work without authorization constitutes an act of infringement. If, for example, one gives the instruction "Guernica by Picasso in color", one obtains an image that incorporates and modifies a pre-existing work. Picasso's works are not in the public domain, and the right holders must be able to authorize or prohibit not only the exploitation of the image obtained, and request its destruction, but perhaps also the use of Picasso's works by the software.

 

The production and publication by a user of a "Guernica in color" could therefore constitute an infringement; but the integration of Guernica into the software's database (deep learning) could also, by itself, constitute an infringing act (9). Indeed, the CPI punishes "publishing, making available to the public or communicating to the public, knowingly and in any form whatsoever, software obviously intended for the unauthorized making available to the public of protected works or objects" (10). Both the "obvious" nature of the making available and the qualification of "making available" itself are open to debate. But it is above all the 2019 European "Copyright" Directive (11) that could come to the aid of operators of content-generating AI by securing their use of protected pre-existing works. It provides a framework for the exploitation of protected works for any purpose, including commercial purposes, in order to extract information from them, which covers text, image and music generators. It also allows the holders of rights in these works to authorize or prohibit their use, except for academic purposes.

 

Such authorization can hardly be given in advance, and operators – OpenAI, for example – therefore set up procedures for reporting the creation of infringing content (12). The site Haveibeentrained.com lets anyone check whether an image has been fed into image generators and signal their wish to have the work removed from the database. But artists are already complaining about the difficulty of obtaining such removal (13). As we can see, the irruption of AI creations unsettles intellectual property law, whose current tools are insufficient to answer the questions raised. We can imagine that AI will one day make it possible to produce "fake" Camille Claudel sculptures using 3D printing technology, or to make Rimbaud or Mozart "write" poems and symphonies of a level equivalent – or even superior! – to what they could have written and played had they not died so young. The question of imitating the style of still-living authors is not without raising further debates.

 

Notes:
(1) On 14-03-23, OpenAI introduced the GPT-4 version of ChatGPT.
(2) See EM@295, p. 4.
(3) "GANism" refers to Generative Adversarial Networks.
(4) Non-Fungible Tokens (NFT).
(5) https://lc.cx/CopyrightGov, 21-02-23.
(6) By the sole fact of their creation, Article L. 111-1 of the CPI.
(7) Article 2276 of the Civil Code.
(8) Article L. 113-1 of the CPI.
(9) Getty Images announced on 17-01-23 that it had filed a complaint against Stable Diffusion for having processed photos belonging to it in a deep learning process.
(10) Article L. 335-2-1 of the CPI.
(11) https://lc.cx/Copyright17-05-19
(12) Article 3(d) of the OpenAI terms and conditions.
(13) https://lc.cx/Libération29-12-22
(14) https://lc.cx/Procé

 


 

Article written by Véronique DAHAN and Jérémie LEROY-RINGUET for the magazine Edition Multimédia, n° 297, 10 April 2023.