Changing the Safety and Liability Rules on AI – What is the European Commission planning?
A White Paper on Artificial Intelligence from the European Commission provides insight into how governments might change product safety and liability rules to address the issues arising from AI systems.
Five years ago, the European Commission (EC) established a key policy agenda designed to deliver significant legal changes to the “Digital Single Market” by 2019. That agenda led to a series of material regulatory changes across the EU.
Now, the EC has made a “Europe fit for the digital age” a key political goal and has published a series of documents intended to shape Europe’s digital future. Two of these documents relate to AI systems: a white paper titled “On Artificial Intelligence – A European approach to excellence and trust” and a report on the safety and liability implications of AI, the Internet of Things and robotics (together, the “Reports”).
Among other issues, the Reports discuss new proposals for changing the regulatory framework on product safety and liability in the EU to address the changes brought by AI systems.
As identified in the Reports, and in an EC expert report on AI published last year, the EU has regulated product safety and liability in three ways:
The EU Product Liability Directive imposes liability for any damage caused by a defective product. Injured individuals must show a causal link between the damage and the defect, but they do not have to prove the negligence or fault of the producer or importer. There are certain exemptions to this regime, including a defense for producers where the defect appeared after the product entered into circulation.
The EU General Product Safety Directive (GPSD) only applies when a product is not subject to a sector-specific safety regime, such as the regime for medical devices. It requires producers and distributors not to put any product on the market unless it is safe. For both producers and distributors, it is a criminal offense to put an unsafe product on the market; however, a distributor breaches this requirement only if it knew, or should have known, that the product was unsafe.
Additional rules on product safety currently vary by EU Member State.
We have summarized seven of the Reports’ key recommendations in relation to AI product safety and liability below:
The Reports note that software is a key part of any AI system, and that the existing EU product safety regime only takes into account the risks stemming from software integrated in a product at the time it goes to market. Therefore, the Reports consider whether requirements should be introduced for ensuring the safety of stand-alone software applications.
The Reports consider that certain AI systems pose additional risks for EU citizens and recommend that producers of certain high-risk AI systems be required to pass a “conformity assessment” before those systems are put on the market. The Reports suggest that the assessment should facilitate:
High-risk AI systems may include systems that:
The Reports propose that obligations to ensure the safety of an AI system should be distributed across the different economic actors involved in its supply chain, including developers, distributors, service providers and even users. The EC believes that each obligation should be addressed to the actors best placed to address the relevant risk. This reflects a shift from the existing regime, which targets producers and importers.
As AI systems commonly undergo continuous development after they have entered the market, the Reports consider whether the concept of “putting into circulation” in the EU Product Liability Directive should be revisited to take into account how AI systems may change over time.
The EC notes that EU product safety legislation does not address the risks that are derived from the use of faulty training data (e.g., the Reports provide an example of a computer vision system that is not trained to detect objects in poorly lit environments). Therefore, the Reports consider whether specific provisions are required to address the risks of faulty training data during the design phase of an AI system and whether additional provisions are needed to maintain the quality of training data while the product is in use.
To address the “black-box effect” of AI systems, which makes it difficult for users to trace the decisions those systems make about them, the EC states that it is necessary to consider requirements to improve the transparency of algorithms. The Reports suggest that one way to tackle this issue would be to require developers to disclose the design parameters and the metadata of datasets in the event of accidents caused by AI systems.
The EC notes that it is seeking views on whether the burden of proof under national liability rules for damage caused by AI systems should be reversed through an EU initiative.
For AI systems with a specific risk profile, the EC considers whether strict liability may be appropriate, coupled with the requirement to obtain appropriate insurance. This would follow the existing requirements under the Motor Insurance Directive, where drivers are required to insure their cars to ensure that individuals receive compensation in the event of an accident.
Organizations that develop, supply or use AI systems should continue to monitor the EC’s progress on introducing regulation on these issues. Once the EC publishes more concrete proposals, it may also be useful for organizations to:
London-based Trainee Solicitor Danial Alam contributed to the writing of this Alert.