Privacy and the EU’s Regulation on AI: What’s New and What’s Not?
Republished in The Journal of Robotics, Artificial Intelligence & Law.
The draft EU Regulation on Artificial Intelligence (the “Regulation”) imposes a broad range of requirements on both the public and private sectors, which are summarized in this alert. Some of these requirements already apply (in a similar form) under the EU General Data Protection Regulation (GDPR). This raises the question: What is the impact of the Regulation on the privacy sector, and what requirements already apply?
The GDPR applies to the processing of personal data in the context of an EU establishment, or when offering goods or services to, or monitoring the behavior of, individuals in the EU. The GDPR applies regardless of the means by which personal data are processed, and therefore applies when an AI system is used to process personal data (e.g., when using an AI system to filter applications for a job vacancy).
Under the GDPR, profiling is the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.
Automated decision-making is any decision made without meaningful human involvement that has legal effects on a person, or similarly significantly affects him or her. This may partially overlap with, or result from, profiling, but this is not always the case.
The GDPR imposes specific requirements on profiling and automated decision-making. The use of an AI system in relation to individuals often involves profiling, and sometimes automated decision-making. For example, when using an AI system to filter applications for a job vacancy, profiling is used to determine whether the applicant is a good fit for the vacancy. If the AI system retains only the applicants that it considers a good fit, and the remaining applicants are no longer considered for the position, this constitutes automated decision-making towards the latter group: their applications were removed from consideration with no meaningful human involvement.
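To make the distinction concrete, here is a minimal sketch in Python. The scoring formula, the threshold, and the applicant fields are all invented for illustration; neither the GDPR nor the Regulation prescribes any of them.

```python
# Hypothetical example - model, threshold, and fields are invented.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    years_experience: int
    skill_score: float  # 0.0-1.0, e.g., from an assessment test

def profile(applicant: Applicant) -> float:
    """Profiling: personal data are evaluated to predict job suitability."""
    experience = min(applicant.years_experience / 10, 1.0)
    return 0.5 * experience + 0.5 * applicant.skill_score

def screen(applicants: list[Applicant], threshold: float = 0.6):
    """Automated decision-making: applicants below the threshold are
    rejected without any human ever reviewing their files."""
    shortlisted, rejected = [], []
    for a in applicants:
        (shortlisted if profile(a) >= threshold else rejected).append(a)
    return shortlisted, rejected

shortlist, auto_rejected = screen([Applicant("A", 8, 0.9), Applicant("B", 1, 0.3)])
# Profiling happened to every applicant; the automated decision with a
# significant effect falls on `auto_rejected`, whom no human will consider.
```

If a recruiter reviewed every rejection before it became final, the processing would still involve profiling but, arguably, no longer solely automated decision-making.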
The GDPR imposes legal requirements on whoever uses the AI system for profiling and/or automated decision-making purposes, even if they acquired the system from a third party. These requirements include:
- Informing individuals about the existence of automated decision-making, including meaningful information about the logic involved and the envisaged consequences for the individual;
- Ensuring that the processing is fair and does not lead to discrimination;
- Granting individuals the right to obtain human intervention, to express their point of view, and to contest the decision; and
- Carrying out a data protection impact assessment where the profiling or automated decision-making is likely to result in a high risk to individuals.
If a company acquires an AI system from a vendor, the company is often not in a position to comply with the above-mentioned requirements on its own. For example, the company may not know whether the AI system was trained in a way that prevents discrimination, or know the logic that the AI system relies on. To be able to comply with its obligations under the GDPR, the company therefore needs to rely on the vendor and will want to impose contractual obligations on the vendor to secure its cooperation and support.
Compared to the GDPR, the Regulation introduces new obligations for vendors of AI systems, prohibits certain very high-risk AI systems, and introduces more specific requirements for high-risk AI systems and users thereof. We highlight the key differences below.
The Regulation defines an AI system as software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with. Such software qualifies as an AI system if it is developed using one or more of the following approaches and techniques:
- Machine learning approaches, including supervised, unsupervised, and reinforcement learning, using a wide variety of methods including deep learning;
- Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and/or
- Statistical approaches, Bayesian estimation, and search and optimization methods.
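The third category in particular makes this definition very broad. As a purely illustrative sketch (the study-hours data are invented, and nothing in the Regulation refers to any programming language), even a plain least-squares regression that generates predictions appears to satisfy the definition:

```python
# Invented data: a "statistical approach" generating a prediction.
from statistics import linear_regression  # Python 3.10+

hours_studied = [1, 2, 3, 4, 5]
exam_score = [52, 58, 65, 71, 76]

slope, intercept = linear_regression(hours_studied, exam_score)
print(f"Predicted score for 6 hours: {slope * 6 + intercept:.1f}")  # -> 82.7
# A human-defined objective (predict exam scores) plus a generated output
# (a prediction that may influence a decision) is enough, on this wording,
# to qualify as an "AI system."
```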
The Regulation applies to vendors (“providers”) of AI systems and to users of AI systems. A provider is the party that develops an AI system and places it on the market or puts it into service, whereas a “user” is any party using an AI system under its own authority (other than in the course of a personal, non-professional activity). In respect of providers and users, the Regulation applies to:
- Providers that place AI systems on the market or put them into service in the EU, regardless of whether they are established in the EU;
- Users of AI systems located in the EU; and
- Providers and users located outside the EU, where the output produced by the AI system is used in the EU.
Although the GDPR imposes stringent requirements on certain processing activities, it does not outright prohibit any of them. This is different under the Regulation, which prohibits a number of AI systems that are deemed too risky under any circumstances. Most of these prohibitions are limited to AI systems used by public authorities or law enforcement. The prohibited AI systems that are relevant to the private sector are those that cause physical or psychological harm to an individual by:
- Deploying subliminal techniques beyond a person’s consciousness in order to materially distort that person’s behavior; or
- Exploiting the vulnerabilities of a specific group of persons due to their age or disability, in order to materially distort the behavior of a person belonging to that group.
The majority of the requirements of the Regulation apply to high-risk AI systems only. The Regulation lists a number of AI systems that qualify as high-risk. The European Commission can add AI systems to this list, taking into account the criteria set out in the Regulation. The key high-risk AI systems for the private sector are AI systems used for:
- Biometric identification and categorization of natural persons;
- Determining access to education and vocational training, or assessing students;
- Recruitment and employment decisions, including screening and filtering applications, evaluating candidates, and making decisions on promotion, termination, task allocation, and performance monitoring; and
- Evaluating the creditworthiness of natural persons or establishing their credit score.
The Regulation imposes the following general requirements on high-risk AI systems:
- Establish a risk management system and maintain it continuously throughout the lifetime of the system to identify and analyze known and foreseeable risks, estimate and evaluate such risks, and adopt suitable risk management measures;
- Use training, validation, and testing data that meet quality criteria, including relevance, representativeness, accuracy, and completeness, and monitor, detect, and correct bias, for which special categories of personal data may be used based on the substantial public interest exemption of Article 9(2)(g) GDPR;
- Draw up technical documentation before the AI system is placed on the market that demonstrates that the AI system complies with the Regulation;
- Create automatic logs to ensure a level of traceability of the system’s functioning (illustrated, together with human oversight, in the sketch following this list);
- Ensure transparency to enable the user of the system to interpret the AI system’s output and use it appropriately;
- Enable human oversight of the AI system aimed at minimizing the risks to health, safety, or fundamental rights, by an individual who fully understands the system’s capabilities and limitations and can decide not to use the system or its output in any particular situation; and
- Ensure accuracy, robustness, and cybersecurity to foster resilience against errors, faults, and inconsistencies, as well as unauthorized use and exploitation of vulnerabilities.
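As a rough illustration of two of these requirements (automatic logging and human oversight), consider the following sketch. The wrapper, the log fields, and the confidence heuristic used to escalate cases to a human are all assumptions made for this example; the Regulation does not prescribe any particular implementation.

```python
# Hypothetical wrapper around a high-risk AI system; field names and the
# escalation heuristic are invented for illustration.
import json
import logging
import time

logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

def classify(features: dict) -> tuple[str, float]:
    """Stand-in for the AI model: returns (output, confidence)."""
    score = min(features.get("skill_score", 0.0), 1.0)
    confidence = min(abs(score - 0.6) + 0.5, 1.0)
    return ("shortlist" if score >= 0.6 else "reject"), confidence

def decide(features: dict, human_review) -> str:
    output, confidence = classify(features)
    # Automatic log: inputs, output, and time, so the system's functioning
    # can be traced after the fact.
    logging.info(json.dumps({
        "timestamp": time.time(),
        "inputs": features,
        "output": output,
        "confidence": confidence,
    }))
    # Human oversight: a person who understands the system's limitations
    # can decide not to use the output in a particular case, e.g., when
    # the model itself is unsure.
    if confidence < 0.7:
        return human_review(features, output)
    return output
```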
The Regulation introduces general requirements for high-risk AI systems, which are more specific than the requirements under the GDPR. For example, the Regulation imposes specific requirements on training, validation, and testing data in order to prevent bias and discrimination, while the GDPR merely requires that any processing of personal data is fair (including not being discriminatory). Another example is the requirement of human oversight. Although the GDPR grants individuals the right to obtain human intervention in cases of automated decision-making (as set out above), this requirement applies only to the company that makes the automated decision, and not to the company that provided the (AI) system. This is different under the Regulation.
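To make the bias-related data requirement concrete, a minimal check might compare selection rates across demographic groups and flag large gaps for investigation. The group labels, sample outcomes, and threshold below are invented; this is one simple fairness metric among many, not a method mandated by the Regulation.

```python
# Invented sample outcomes: (demographic group, shortlisted?)
from collections import defaultdict

outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, shortlisted in outcomes:
    totals[group] += 1
    selected[group] += shortlisted

rates = {g: selected[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # threshold is an assumption, not a legal standard
    print(f"Selection-rate gap of {gap:.0%} across groups: {rates}")
    # Signal to re-examine the training data for the under-selected group.
```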
The GDPR applies to the processing of personal data, and not (directly) to the provider of the systems that enable this processing. This means that companies that use third-party systems to process personal data are themselves responsible for those systems. This is different under the Regulation, which imposes the following specific requirements on the provider of a high-risk AI system:
- Ensure that the AI system complies with the requirements described above;
- Put in place a quality management system;
- Draw up the required technical documentation and keep the logs automatically generated by the AI system;
- Ensure that the AI system undergoes a conformity assessment before it is placed on the market or put into service, and affix the CE marking;
- Register the AI system in the EU database for high-risk AI systems; and
- Take corrective action where the AI system does not conform to the Regulation, and inform the competent authorities of the non-conformity and of any serious incidents or malfunctioning.
The Regulation imposes fewer obligations on users of high-risk AI systems than on the providers thereof, and these obligations differ from the requirements that apply under the GDPR. The Regulation requires users of high-risk AI systems to:
- Use the AI system in accordance with the provider’s instructions for use;
- Ensure that input data are relevant in view of the intended purpose of the AI system, to the extent the user exercises control over those data;
- Monitor the operation of the AI system and inform the provider or distributor of any risks or serious incidents; and
- Keep the logs automatically generated by the AI system, to the extent those logs are within the user’s control.
The Regulation imposes specific requirements on AI systems for remote biometric identification (such as facial recognition) in public spaces, which are only allowed with prior authorization of the competent authority. In addition, the Regulation imposes transparency requirements on AI systems that interact with individuals, recognize emotions, or create or alter image, audio, or video content (e.g., “deepfakes”).
The Regulation provides for the establishment of a European Artificial Intelligence Board (the “Board”) tasked to issue guidance and opinions to ensure a consistent application of the Regulation, and to collect and share best practices and standards. In addition, each EU member state will designate a competent authority that is responsible for the implementation of the Regulation.
The Regulation provides the following penalties for noncompliance in the private sector:
- Fines of up to EUR 30 million or 6% of total worldwide annual turnover, whichever is higher, for violations of the prohibitions on certain AI systems or noncompliance with the data and data governance requirements;
- Fines of up to EUR 20 million or 4% of total worldwide annual turnover for noncompliance with any other requirement of the Regulation; and
- Fines of up to EUR 10 million or 2% of total worldwide annual turnover for supplying incorrect, incomplete, or misleading information to notified bodies or competent authorities.
The Regulation provides for an implementation period of 24 months after entering into force, and a 12-month transition period for AI systems that are placed on the EU market before the application of the Regulation.
The Regulation is now under consideration by the European Parliament and the Council of the European Union, who will debate the proposal and can propose amendments. Together with the European Commission, these three legislative bodies will then work towards finalizing the Regulation, which is a time-consuming process.