Summary
Following the EU Commission’s presentation of a general “European Approach to Artificial Intelligence” in March 2021, the Commission today published a detailed draft regulation aimed at safeguarding fundamental EU rights and user safety (“Draft Regulation”). The Draft Regulation’s main provisions are the following:
- A binding regulation for AI Systems (defined below) that directly applies to Providers and Users (both defined below), importers, and distributors of AI Systems in the EU, regardless of where they are established.
- A blacklist of certain AI practices.
- Fines of up to EUR 30 million or up to 6% of annual turnover, whichever is higher.
- Transparency obligations and, for High-Risk AI Systems (defined below), registration and extensive compliance obligations.
In Detail
1. Scope of Application
The Draft Regulation applies to the placing on the market, putting into service, and use of AI Systems by Providers and Users in the EU, as well as other parties in specific cases.
- AI Systems: The Draft Regulation defines “AI Systems” very broadly as software that is developed with machine learning, logic- and knowledge-based, or statistical approaches, and that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
- Providers and Users: A “Provider” may be any entity that develops or has an AI System developed with a view to placing it on the market (i.e., first making it available) or putting it into service under its own name or trademark, whether for payment or free of charge. A “User” in the Draft Regulation refers to any entity using an AI System under its authority, except where the AI System is used in the course of a personal non-professional activity. Providers and Users that are established outside the EU are nevertheless subject to the Draft Regulation to the extent that the output produced by the AI System is used in the EU. The Draft Regulation does not apply to AI Systems that are exclusively used to operate weapons or for other military purposes, or to public authorities of third countries or international organizations using AI Systems under international agreements.
2. Blacklisted AI Practices
The Draft Regulation includes a list of prohibited AI practices that are understood to contravene the EU’s values and fundamental rights:
- Distorting human behavior: AI Systems that materially distort a person’s behavior in a manner that causes or is likely to cause physical or psychological harm, whether by deploying subliminal techniques or by exploiting vulnerabilities due to the person’s age or physical or mental disability.
- Social scoring: The use of AI Systems for social scoring by public authorities or on their behalf (i.e., the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behavior or characteristics) that leads to the detrimental or unfavorable treatment of certain groups of persons.
- Biometric identification: AI Systems used for real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement, unless strictly necessary for a targeted search (e.g., for potential victims of crime or perpetrators of serious offenses) or for the prevention of substantial and imminent threats.
3. Transparency Obligations
Natural persons must be notified when they are interacting with an AI System where this is not obvious from the circumstances and the context of use. This obligation does not apply where the AI System is used to detect, prevent, investigate, or prosecute criminal offenses. Where AI Systems are used to generate image, audio, or video content that resembles existing persons, objects, places, etc. (so-called “deep fakes”), the artificial creation or manipulation of such content must be disclosed.
4. Regulation of High-Risk AI Systems
The Draft Regulation introduces three categories of “High-Risk AI Systems” and subjects Providers and Users as well as importers and distributors of such AI Systems to specific obligations. High-Risk AI Systems include:
- AI Systems intended to be used as a product or as a component of products covered by pre-existing EU harmonization legislation on, for example, machinery, safety of toys, lifts, radio equipment, and medical devices. For these AI Systems, the Draft Regulation largely refers to the provisions and conformity assessments under that sector-specific legislation.
- AI Systems intended to be used as a product or as a component of products covered by pre-existing EU Regulations on aviation, motor vehicle, and railway safety.
- AI Systems explicitly listed in the Draft Regulation that are intended to be used to:
- Perform biometric identification and categorization of natural persons.
- Work as safety components used in the management and operation of critical infrastructure (e.g., for road traffic and the supply of water, gas, or electricity) or to dispatch or establish priority in the dispatching of emergency first response services, e.g., firefighters and medical aid.
- Determine access to educational and vocational training institutions; support recruitment (e.g., advertising job vacancies, screening or filtering applications, and evaluating candidates); make decisions on promotions; allocate tasks; and monitor work performance.
- Evaluate the creditworthiness or establish the credit score of persons or evaluate their eligibility for public assistance benefits and services by public authorities or on their behalf.
- Make predictions intended to be used as evidence or information to prevent, investigate, detect, or prosecute a criminal offense or to adopt measures impacting the personal freedom of an individual; serve as polygraphs or similar tools to detect the emotional state of a person; or predict the occurrence of crimes or social unrest in order to allocate patrols and surveillance.
- Process and examine asylum and visa applications to enter the EU or verify the authenticity of travel documents.
- Assist judges in court by researching and interpreting facts and the law, and applying the law to a concrete set of facts.
The list is not exhaustive. When the EU Commission identifies other AI Systems that generate a high risk of harm, those AI Systems may be added to the list.
5. Obligations Linked to High-Risk AI Systems
With regard to obligations linked to High-Risk AI Systems, the Draft Regulation provides for different sets of obligations for Providers, Users, importers, and distributors, respectively.
a. Providers’ obligations for High-Risk AI Systems
- Technical parameters and transparency
- Risk management system: Providers must establish, implement, document, and maintain a risk management system, including specific steps such as the identification of foreseeable risks of the AI System and analysis of data gathered from a post-market monitoring system. The risk management system must ensure that risks are eliminated or reduced as far as possible by the AI System’s design and development and adequately mitigate risks that cannot be eliminated.
- High-quality data sets: The Draft Regulation requires High-Risk AI Systems to be trained, validated, and tested with high-quality data sets that are relevant, representative, free of errors, and complete. These requirements must be ensured through appropriate data governance and data management.
- Technical documentation and record keeping: High-Risk AI Systems must be designed so that their outputs can be traced back and verified. For that purpose, the Provider is obliged to retain technical documentation demonstrating the conformity of the AI System with the requirements of the Draft Regulation.
- Quality management system: Further, the Provider is required to put a quality management system in place that, among other elements, includes a written strategy for regulatory compliance, systematic actions for the design of the AI System, technical standards, and reporting mechanisms.
- Transparency and information for Users: Users must be able to understand and control how a High-Risk AI System produces its output. This must be ensured by accompanying documentation and instructions for use.
- Human oversight: High-Risk AI Systems must be designed in such a way that they can be effectively overseen by competent natural persons. This requirement is aimed at preventing or minimizing the risks to health, safety, and fundamental rights that can emerge when a High-Risk AI System is used.
- Robustness, accuracy, and cybersecurity: High-Risk AI Systems must be resistant to errors and to attempts by malicious third parties to alter their performance, and they must meet a high level of accuracy.
- Authorized representative: Providers established outside the EU must appoint an authorized representative (a natural or legal person established in the EU with a written mandate from the Provider) that keeps the necessary documentation permanently available.
- Conformity and registration process
- Conformity assessment: The Provider must perform a conformity assessment of the AI System to demonstrate its conformity with the requirements of the Draft Regulation. After each substantial modification of the AI System, the Provider must undergo a new conformity assessment. High-Risk AI Systems intended to be used for remote biometric identification and for public infrastructure networks are subject to a third-party conformity assessment; certificates issued by the notified bodies will be valid for a maximum of five years. For other High-Risk AI Systems, the Provider may opt to carry out a self-assessment and issue an EU declaration of conformity, which the Provider must keep up to date.
- CE marking: The Provider must indicate the AI System’s conformity with the Draft Regulation by visibly affixing a CE marking, allowing the AI System to move freely within the EU.
- Registration: Before placing a High-Risk AI System on the market or putting it into service, the Provider must register it in the newly established, publicly accessible EU database of High-Risk AI Systems.
- Monitoring
- Post-market monitoring: Providers must implement a proportionate post-market monitoring system to collect, document, and analyze data provided by Users or others on the performance of the AI System.
- Reporting obligations: Providers must notify the relevant national competent authorities about any serious incident or malfunctioning of the AI System, as well as any serious breaches of obligations.
b. Users’ obligations for High-Risk AI Systems: Users must use High-Risk AI Systems in accordance with the instructions indicated by the Provider, monitor the operation for evident anomalies, and keep records of the input data.
c. Importers’ obligations for High-Risk AI Systems: Importers must, among other obligations, ensure that the conformity assessment procedure has been carried out and technical documentation has been drawn up by the Provider before placing a High-Risk AI System on the market.
d. Distributors’ obligations for High-Risk AI Systems: Distributors must, among other obligations, verify that the High-Risk AI System bears the required CE conformity marking and is accompanied by the required documentation and instructions for use.
e. Users, importers, distributors, and third parties becoming Providers: Any party will be considered a “Provider” and subject to the relevant obligations if it (i) places on the market or puts into service a High-Risk AI System under its own name or trademark, (ii) modifies the intended purpose of a High-Risk AI System already placed on the market or put into service, or (iii) makes substantial modifications to a High-Risk AI System. In any of these cases, the original Provider will no longer be considered a Provider under the Draft Regulation.
6. Fines
The Draft Regulation provides for substantial fines in cases of non-compliance as follows:
- Developing and placing a blacklisted AI System on the market or putting it into service (up to EUR 30 million or 6% of the total worldwide annual turnover of the preceding financial year, whichever is higher).
- Failing to fulfill obligations to cooperate with the national competent authorities, including in their investigations (up to EUR 20 million or 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher).
- Supplying incorrect, incomplete, or false information to notified bodies (up to EUR 10 million or 2% of the total worldwide annual turnover of the preceding financial year, whichever is higher).
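For illustration only, the “whichever is higher” mechanics amount to taking the greater of the fixed cap and the turnover-based cap. The following minimal sketch applies the highest tier (EUR 30 million or 6%) to hypothetical turnover figures that are not taken from the Draft Regulation:

```python
# Illustrative sketch only (not legal advice): the Draft Regulation caps
# fines at the higher of a fixed amount and a percentage of the total
# worldwide annual turnover of the preceding financial year.

def fine_ceiling(fixed_cap_eur: float, turnover_share: float, turnover_eur: float) -> float:
    """Return the applicable maximum fine: the higher of the fixed cap
    and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

# Blacklisted AI practices tier: EUR 30 million or 6% of turnover.
# Hypothetical large company with EUR 10 billion turnover:
print(fine_ceiling(30e6, 0.06, 10e9))   # 600000000.0 -> turnover-based cap governs

# Hypothetical small company with EUR 100 million turnover:
print(fine_ceiling(30e6, 0.06, 100e6))  # 30000000.0 -> fixed cap governs
```

As the second case shows, the fixed amounts effectively set a floor on the maximum exposure for companies whose turnover-based cap would otherwise be lower.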
Next Steps
It is expected that stakeholders will present various concerns and modification requests to the EU Commission, which will likely lead to a contentious and challenging legislative process. We will keep you updated.