Interpreting the EU’s Political Agreement on the AI Act: What Does It Mean and What’s Next?
The European Parliament and Council of the European Union have reached a political agreement on the EU AI Act, which includes exposure to potential fines of up to the higher of EUR 35 million or 7% of global annual turnover. Rather than agreeing on a final text, the Parliament and Council dealt with the final outstanding topics by agreeing on them at a “principles” level, with the drafting of the full text at a technical level to follow. Given the uncertainty over some important aspects of the agreed-upon principles, in this briefing we look at where the negotiations settled, what this means in practice, and what crucial issues still need to be monitored.
Following three days of marathon negotiations, on December 8, 2023, the European Parliament (EP) and Council of the European Union (Council) reached a political agreement on the EU Artificial Intelligence Act (AI Act). Contrary to common practice in EU legislative proceedings, the EP and Council did not agree on specific text for the AI Act. Instead, they agreed, at a principles level, on solutions for the final topics they had been debating.
Over the coming weeks, work will continue at a technical level to draft proposed text in line with these principles, after which the full final text will need to be reviewed, then agreed to by the EP and Council negotiators. Subsequently, the text will need to pass a vote in both the EP and Council. According to some sources, the full text of the AI Act is expected to be confirmed no earlier than February 2024.
There has been enormous pressure for an agreement to be reached, with the EU seeking to keep its reputation as a tough legislator for all matters relating to tech and data. If this in-principle agreement had not been reached before the end of the current parliamentary term, there was widespread concern that the text would not be finalized for another year.
With extra-territorial reach for organizations outside the EU that are conducting business within the EU, as well as potential fines of up to the higher of EUR 35 million or 7% of global annual turnover for non-compliance, this is a seminal agreement that will pave the way for the wide-reaching legislation the world has been waiting to see. It will in many ways set a high bar for businesses to reach when they are designing, building, licensing, selling, and/or deploying AI systems.
If the full text passes the EP and Council votes, it will be published and enter into force 20 days after publication. It appears there will be a further transition period of two years, with exceptions for specific provisions (which, at the time of writing, have not yet been made public).
The team at MoFo has been following negotiations on the ground. While there is no final text to comment on, this briefing aims to give a sense of where the final negotiations settled in terms of what is agreed upon, what has been agreed to in principle but needs further thought and detail, and what this means in practice and what it doesn’t.
Without a full text for the AI Act, the agreement in principle is currently described in the press releases of the EP and of the Council.
Although the technical drafting of the AI Act is incomplete, there appears to be provisional agreement on the following key topics:
Further details on these topics are set out below, together with a reminder around the scope of the AI Act and some other noteworthy points.
1. The definition of “AI systems” will be aligned with the OECD definition
2. The list of banned AI systems has been expanded
3. There is agreement on a risk-based two-tier approach for GPAI
4. AI & Copyright on the AI input level and output level
5. A blanket exemption for open-source AI models is unlikely
6. A new governance structure, including an AI Office, will be established
7. Fundamental rights impact assessments may be required
8. Enforcement and increased penalties
9. Interaction with data protection laws
10. Rights of individuals will be provided for
11. The AI Act and National Security
12. Allocation of responsibility across the value chain
The AI Act will apply to “AI systems”, with a definition which has been updated to align with the definition set out by the Organisation for Economic Cooperation and Development (OECD).
The OECD itself updated its definition of AI systems in November 2023. In summary terms, the definition now describes an AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
This updated definition is a positive change. Earlier proposals were, in our view, overly wide in scope.
The EU Commission’s proposal for the AI Act predates the public launch of the first generative AI (GenAI) tools. The earlier draft therefore did not include any specific requirements for GenAI. The EP and Council drafts were issued later and did include specific requirements for foundation models (FM) and general purpose AI (GPAI). The term FM was used to refer to AI models designed for generality of output, which can be adapted to a wide range of distinct tasks, and the term GPAI was used to refer to AI systems that can be used in and adapted to a wide range of applications. It is still unclear whether the EP and Council have agreed on a specific term, given that the Council refers to both GPAI and FMs in its press release. That said, there are reports suggesting that the term FM has been dropped and will be replaced with “GPAI with systemic risk”, i.e., the higher-risk tier of GPAI, as described further below.
The AI Act has extra-territorial scope. Organizations outside the EU will have to comply with the upcoming regulation when their businesses are “established” within the EU (including, for example, through group companies or branch offices) and/or when they are offering goods or services within the EU. Where the AI Act’s provisions apply, organizations outside the EU may face enforcement action for non-compliance.
The previous drafts of the AI Act already listed several prohibited AI practices, such as using AI to exploit the vulnerabilities of individuals or to manipulate them through subliminal techniques, social scoring, and the use of biometric categorization systems that rely on sensitive characteristics (e.g., political preferences).
As part of the provisional agreement, two additional AI practices will be prohibited:
- the use of emotion recognition systems in the workplace and in educational institutions; and
- the untargeted scraping of facial images from the internet or from CCTV footage to create facial recognition databases.
The second of these prohibitions, in particular, may have wide-reaching impacts for existing trained models that have already incorporated these practices, as well as for engineering approaches going forward.
The regulation of GPAI was one of the main points of negotiation between the EP and Council. This issue threatened to derail the AI Act as a whole after the Council representatives of Germany and France took the position that GPAI should not be regulated at all, on the basis that regulation would put German and French GPAI providers at a competitive disadvantage.
The conclusion of the negotiations between the EP and Council is that GPAI will be subject to a risk-based two-tier approach:
- all GPAI models will be subject to baseline transparency obligations, including drawing up technical documentation, complying with EU copyright law, and publishing a sufficiently detailed summary of the content used for training; and
- high-impact GPAI models posing systemic risk will be subject to additional obligations, including conducting model evaluations, assessing and mitigating systemic risks, performing adversarial testing, reporting serious incidents to the European Commission, and ensuring adequate cybersecurity.
It is not yet clear what criteria a GPAI model must meet to qualify as high-impact. The EP indicates that high-impact GPAI systems should be those that may cause systemic risk. Previous drafts of the AI Act made this determination based on the computational power used to train the GPAI model, suggesting a threshold of 10^25 FLOPS. This would be a lower threshold than the 10^26 FLOPS threshold for the reporting obligation under the U.S. Executive Order on AI, which we reported on in our previous alert.
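As a purely illustrative sketch of how such a compute threshold works in practice: the threshold figures below come from the draft texts noted above, but the 6 × parameters × tokens estimate of training compute is a common engineering rule of thumb, not something prescribed by either the AI Act drafts or the U.S. Executive Order.

```python
# Illustrative only: thresholds from the draft AI Act (10**25 FLOPS) and the
# U.S. Executive Order (10**26 FLOPS); the 6 * params * tokens approximation
# of training compute is a common heuristic, not a legislative requirement.

EU_DRAFT_THRESHOLD_FLOPS = 10**25  # draft "high-impact GPAI" threshold
US_EO_THRESHOLD_FLOPS = 10**26     # U.S. Executive Order reporting threshold


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens


def classify(flops: float) -> str:
    """Compare an estimated training-compute figure against both thresholds."""
    if flops >= US_EO_THRESHOLD_FLOPS:
        return "above both the EU draft and U.S. EO thresholds"
    if flops >= EU_DRAFT_THRESHOLD_FLOPS:
        return "above the EU draft threshold only"
    return "below both thresholds"


# Hypothetical example: a 70B-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)  # ~8.4e23 FLOPs
print(f"{flops:.2e} FLOPs -> {classify(flops)}")
```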
The drafting of the text on this issue will be critical, and organizations that offer GPAI solutions will need to keep a watchful eye on this area. Where the definition of high-impact GPAI applies to their systems, they will need to ensure their processes are updated and compliant.
The EP’s June 2023 draft required providers of FM to make publicly available a summary of the copyrighted training data used and to put in place effective safeguards to prevent unlawful output (see our Client Alert from August 11, 2023). The conclusion of the negotiations is as follows:
Training data transparency obligation: The final provision is likely to cover all types of data and not only copyrighted works. Practical concerns are partly met by two additions made to the text: (i) a new recital indicates that the summary should identify the sources of the training data (e.g., databases, data archives) rather than listing the individual data, and (ii) the future AI Office will provide a summary template.
Respecting EU copyright law, including TDM opt-outs: The AI Act will oblige GPAI providers to put in place a policy to respect EU copyright law. This seems obvious, as copyright law undoubtedly applies to the field of AI, both to the use of copyrighted works for training purposes and to potentially infringing outputs of GPAI models.
For the use of copyrighted works for training purposes, the provision explicitly states that the GPAI provider must observe opt-outs made by rights holders under the Text and Data Mining (TDM) exception of Art. 4(3) of Directive (EU) 2019/790. Under this exception, where an opt-out has been effectively declared in a machine-readable form (e.g., by publishing a robots.txt file on a website), the content may not be retrieved for AI training. The provision clarifies that the TDM exception indeed also applies to the use of content for training GPAI, but it leaves a variety of practical uncertainties open, e.g., technical standards for opt-outs, scraping of content from websites where the rights holder is unable to place an opt-out, opt-outs declared after the AI has already been trained on the data (relating to the practical impossibility of “unlearning” in AI models), and evidentiary challenges when enforcing rights in training data. The final text does mention the use of “state of the art technologies” to respect opt-outs. It remains to be seen whether this provision will lead to technological advances facilitating lawful scraping of training data, or rather to licensing solutions between rights holders and AI developers and providers. It is noteworthy that a recital underlines that any provider placing a GPAI model on the EU market must be copyright compliant within the meaning of this provision, even indicating that AI training conducted outside the EU must observe TDM opt-outs.
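To make the robots.txt mechanics concrete, here is a minimal sketch of a pre-collection check using Python’s standard-library robots.txt parser. The “ExampleAIBot” user agent and the URL are hypothetical, and whether a robots.txt entry alone amounts to an effective Art. 4(3) opt-out remains an open legal question; this illustrates only the technical check.

```python
# Minimal sketch: consult a site's robots.txt before collecting a page for
# text-and-data-mining / AI training. "ExampleAIBot" is a hypothetical
# crawler name; whether robots.txt alone satisfies Art. 4(3) is unsettled.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser


def may_collect_for_training(page_url: str, crawler_user_agent: str) -> bool:
    """Return True if the site's robots.txt does not opt this page out."""
    parts = urlsplit(page_url)
    robots = RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()  # fetch and parse the robots.txt file
    return robots.can_fetch(crawler_user_agent, page_url)


if __name__ == "__main__":
    url = "https://example.com/articles/some-page.html"
    if may_collect_for_training(url, "ExampleAIBot"):
        print(f"No machine-readable opt-out found for {url}")
    else:
        print(f"Opt-out declared; skipping {url} for training data")
```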
As to potential copyright issues relating to the output of AI models, the provisional agreement does not provide clarifications. Many follow-on questions therefore remain outstanding, such as whether prompts likely to cause infringing outputs should be blocked from processing, how to reliably assess AI output under copyright law (e.g., as a parody or pastiche), the allocation of liability between provider and user, notice and takedown procedures, etc.
Whatever the differences between copyright laws across the globe, the bottom line remains that neither existing copyright frameworks nor current technology offer a tailor-made response to infringements involving copyrighted training data, or to the impact of such infringements on the usability of the AI system concerned. Over time, courts may shape solutions. To create a safe environment for businesses now, however, the more advisable approach is to find joint solutions between larger groups of copyright owners on the one hand and providers and deployers of AI on the other.
The EP had previously proposed an exception for open-source AI models, unless they were included in high-risk AI systems. This was another particularly contentious area that took significant time to resolve. During the first stages of the final negotiations between the EP and Council, certain reports suggested that such an exception for open-source AI models would indeed be included in the AI Act. The press releases of the EP and Council, however, include no reference to open-source AI models. Given that such an exception would have been a significant win for the EP, it seems unlikely that an agreed exception would have gone unmentioned in the EP’s press release. At this point, it therefore does not appear likely that open-source AI models will be generally exempt from (certain parts of) the AI Act. We do, however, expect that there will be some rules benefiting collaborative development of AI models under free and open licenses in publicly accessible repositories, provided they do not qualify as high-impact GPAI. We will need to await the full text of the AI Act before it can be positively confirmed whether, and to what extent, a special regime will be created for open-source AI models. Again, this will be a crucial issue to monitor.
The previous drafts of the AI Act provided for enforcement by national supervisory authorities of the Member States only. The provisional agreement, however, provides for enforcement of the “common rules” by a new body, the AI Office, which will be set up within the European Commission. Clarity on the exact scope of these “common rules” is still to come.
There are three key aspects of future governance under the AI Act that have become known so far:
- an AI Office will be set up within the European Commission to oversee the most advanced AI models and to enforce the common rules across all member states;
- a scientific panel of independent experts will advise the AI Office, in particular on GPAI models; and
- an AI Board, composed of member states’ representatives, will act as a coordination platform and an advisory body to the Commission, complemented by an advisory forum for stakeholders.
Once the text on these areas is available, we will be able to comment on the potential impacts for businesses going forward.
The EP has managed to include an obligation to conduct a fundamental rights impact assessment for public bodies and for private entities providing essential public services (such as hospitals, schools, banks, and insurance companies) that deploy high-risk AI systems. This obligation is separate from, and in addition to, the risk management obligations of providers that were already included in previous drafts of the AI Act.
Depending on how much of the EP’s original position is implemented in the final version of the text, the assessment will likely need to include:
- a description of the deployer’s processes in which the high-risk AI system will be used, together with the intended period and frequency of use;
- the categories of natural persons and groups likely to be affected by its use;
- the specific risks of harm likely to impact those persons or groups, including marginalized or vulnerable groups;
- the measures to be taken if those risks materialize; and
- the governance arrangements the deployer will put in place, including human oversight and complaint-handling mechanisms.
Other stakeholders, such as equality bodies and consumer protection agencies, may need to be involved during the impact assessment. Where the deployer is already required to carry out a data protection impact assessment under the EU General Data Protection Regulation (GDPR), the fundamental rights impact assessment will be conducted in conjunction with the data protection impact assessment.
The maximum penalties for non-compliance with the AI Act have been increased and now range from the higher of EUR 7.5 million or 1.5% of the organization’s global annual turnover to the higher of EUR 35 million or 7% of global annual turnover, depending on the infringement. The EP and Council have agreed to implement more proportionate penalty caps for small and mid-sized enterprises (SMEs) and startups.
As with the GDPR, these eye-watering levels of fines mean that organizations have a strong financial imperative, alongside ethical and societal rationales, to comply with the AI Act’s provisions.
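As a purely arithmetical illustration of the “higher of” mechanics (the turnover figure below is hypothetical, and which tier applies will depend on the type of infringement under the final text):

```python
# Illustrative arithmetic only: AI Act caps are the higher of a fixed amount
# or a percentage of global annual turnover. Tier figures are those reported
# in the press releases; the final text may allocate them differently.

def max_fine_eur(global_annual_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_fraction: float) -> float:
    """Return the applicable maximum fine: the higher of the two caps."""
    return max(fixed_cap_eur, turnover_fraction * global_annual_turnover_eur)


turnover = 2_000_000_000  # hypothetical EUR 2 billion global annual turnover

# Top tier: EUR 35 million or 7% of turnover, whichever is higher.
print(max_fine_eur(turnover, 35_000_000, 0.07))   # 140000000.0
# Bottom tier: EUR 7.5 million or 1.5% of turnover, whichever is higher.
print(max_fine_eur(turnover, 7_500_000, 0.015))   # 30000000.0
```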
Since the first draft of the AI Act, it has been clear that it would act as a “top up” of the GDPR in relation to personal data, and that the GDPR remains applicable. Many of the risk management obligations under the AI Act are similar to those under the GDPR. The AI Act, however, has a far broader scope and will, for example, also apply to AI systems that do not process any personal data.
As explained in one of our previous alerts, many requirements of the AI Act are similar to requirements of the GDPR, albeit more detailed and wider in scope (applying to all data, not only personal data), for example in areas such as risk management, transparency, and security.
Controllers under the GDPR who currently train (or further train) AI systems with personal data, or use AI systems that process personal data, will therefore be able to leverage their GDPR compliance efforts toward complying with the AI Act’s risk management obligations. Currently, it appears that the only specific requirement in the AI Act that directly overlaps with the GDPR is the right granted to individuals to an explanation of decisions based on high-risk AI systems that impact their rights (see below).
Previous drafts of the AI Act included the possibility of regulatory enforcement but did not provide direct rights to individuals. The EP and Council have now agreed to include a right for individuals to complain about AI systems, albeit without specifying to whom such a complaint should be addressed. In addition, individuals will have the right to receive explanations for decisions based on high-risk AI systems that impact their rights. The fact that the EP refers to decisions based on high-risk AI systems could imply a broader explainability right than under the GDPR.
Again, these changes have important implications, and we await the final text so we can derive more insight for you in this area.
The initial draft of the AI Act contained an exception for the use of AI for military purposes but did not exempt other national-security-related activities. To provide more flexibility for national governments, the Council pushed for a broad exception for the use of AI for military, defense, and national security purposes, regardless of whether a public or private entity would use the tool. Critics considered this blanket exception too broad, creating loopholes especially for the private sector.
According to the Council’s press release, its proposal made it into the provisional agreement, providing that the regulation should not apply to areas outside the scope of EU law and should therefore not affect member states’ competences in national security or any entity entrusted with tasks in this area. The latter is, however, more limited than originally proposed.
The provisional agreement further recognizes the interests of law enforcement agencies to quickly respond in emergency situations by using high-risk AI tools which have not passed the conformity assessment.
Although the AI Act will not affect member states’ competences in national security, it may still have an indirect impact on how AI tools are viewed by member state authorities regulating their national security interests. For example, the AI Act may help frame the (currently undefined) notion of artificial intelligence in the EU framework for screening of foreign direct investments. According to this framework, member states may take critical technologies, including artificial intelligence, into account when determining whether an investment is likely to affect security or public order.
Another key aspect of the AI Act is the allocation of compliance obligations along the AI value chain. All draft versions of the AI Act provided a mechanism whereby the obligations of the AI provider automatically transfer to certain deployers or other operators. The details of this transfer, however, differed between the EU Commission, EP, and Council versions of the AI Act. It appears that such a transfer will occur if downstream operators place a third-party AI system on the market under their own brand or make a substantial modification to a high-risk AI system. It is still unclear, however, whether any obligations remain with the original AI provider in the case of such a transfer and how the AI Act will regulate the relationship between the operators involved.

In this context, the parties to the trilogue appear to have opted for a solution closer to the EP position, meaning that the original provider must at least provide the downstream operator with all information necessary for compliance with the AI Act’s obligations. The fate of the additional supply chain obligations that the EP brought to the trilogue table is as yet unclear. This applies in particular to the scope of value chain obligations for providers of GPAI models or systems, the additional value chain obligations to provide technical access or other assistance to downstream providers, and the proposed list of unfair contractual terms that must not be used against SMEs or startups.
The final text in this area will be important both in terms of understanding risk and how it will be allocated on a statutory basis as well as driving decisions about how contracting with counterparties may need to be updated.
As mentioned above, work now continues at a technical level to draft the proposed text in line with the principles, after which the full final text will need to be reviewed and agreed to by the negotiators of the EP and Council, and subsequently will need to pass a vote in both the EP and Council. Sources are suggesting this is unlikely to be confirmed earlier than February 2024.
While there will likely be a two-year transitional period to comply with the EU AI Act (which may be tiered), given the AI Act impacts – in so many ways – how AI solutions are designed, built, trained and deployed, it is important to take stock of the way the regulation is shaping up now.
At the same time, AI laws around the world continue to proliferate. During the week of December 11, for example, the UK government confirmed its latest plans for an AI law. While the Rt Hon Michelle Donelan confirmed the UK’s aim to keep a pro-innovation approach, it seems that the UK AI bill is likely to go further than the current White Paper while keeping its multi-regulator approach (more to follow in a later briefing on this development).
At Morrison Foerster, we continue to help both suppliers and adopters of AI systems around the world grapple with the laws that currently apply to AI solutions, as well as those that are upcoming. We also continue to help businesses, industry, and government address the societal and ethical challenges and opportunities this technology creates, as well as the overall regulatory and contractual impacts.
We strongly recommend that organizations, on both the customer and provider side, continue to address, as far as permissible under local mandatory laws, the multi-faceted and potentially conflicting risks and issues within their contractual documentation, in order to create as much certainty between the parties as possible.
For those buying or investing in AI companies, the due diligence approach requires additional focus compared with more traditional technology investments to ensure the full value of the company or technology is realized. Anyone currently looking to buy or invest in AI solutions or businesses would be well advised to take account of the implications of these latest known positions on the AI Act.
We will seek to keep you updated on the progress of the final text of the AI Act in the EU as it develops and on the implications that arise, as well as provide updates relating to other countries. In the coming weeks, we will be issuing a series of deeper dives into some of the areas above, as well as looking at certain M&A-specific impacts, industry impacts, and more.
If you would like to receive AI-specific updates, please sign up. Additionally, you can keep track via our AI pages.