📢 Will Liability Wreck AI Companies?
AI Governance Professional Edition | Paid-Subscriber Only | #149
👋 Hi, Luiza Jarovsky here. Welcome to the 149th edition of this newsletter on AI policy, compliance & regulation, read by 39,500+ subscribers in 155+ countries.
This is an exclusive AI Governance Professional Edition, with my critical analyses of AI compliance and regulation topics, helping you stay ahead in the fast-paced field of AI governance. I hope you enjoy reading it as much as I enjoy writing it.
🗓️ Only 10 days left! This December, join me for a 3-week intensive AI Governance Training program (8 live lessons; 12 hours total), already in its 15th cohort. Join over 1,000 people who have benefited from our programs and step into 2025 with a new career path. Learn more and register here.
📢 Will Liability Wreck AI Companies?
This week brought a major development in AI liability: the new EU Product Liability Directive (PLD) was published in the Official Journal of the European Union and will enter into force in early December.
The new directive is not exclusively about AI (we are still waiting for the final text of the AI Liability Directive), but it expressly acknowledges AI-related challenges. It highlights that AI, and the need to compensate victims of AI-related harm, was one of the factors that made it necessary to update the old product liability directive, and it applies to AI systems. According to Recital 3:
"Directive 85/374/EEC [the old product liability directive] has been an effective and important instrument, but it would need to be revised in light of developments related to new technologies, including artificial intelligence (AI), new circular economy business models and new global supply chains, which have led to inconsistencies and legal uncertainty, in particular as regards the meaning of the term âproductâ. Experience gained from applying that Directive has also shown that injured persons face difficulties obtaining compensation due to restrictions on making compensation claims and due to challenges in gathering evidence to prove liability, especially in light of increasing technical and scientific complexity. That includes compensation claims in respect of damage related to new technologies. The revision of that Directive would therefore encourage the roll-out and uptake of such new technologies, including AI, while ensuring that claimants enjoy the same level of protection irrespective of the technology involved and that all businesses benefit from more legal certainty and a level playing field."
Considering the Brussels effect, I would argue that this is likely to be a globally influential development. Lawmakers worldwide pay close attention to the EU's regulatory approach and its specific policies on AI, and they frequently adopt similar measures.
Liability is an especially sensitive topic, given the public debate around AI accountability and its potential economic consequences. As I mentioned on Monday, if you have been following the news, you've likely heard tech executives argue that, to support AI development, countries must avoid holding AI companies liable for the outputs of their AI systems; otherwise, they claim, such liability would bankrupt them.
In the EU specifically, when we consider the interplay between the new PLD, the EU AI Act (which entered into force on August 1st), and the proposed EU AI Liability Directive (not yet approved), the combined consequences might be drastic, and some AI companies might consider liability issues a dealbreaker.
Before I continue, let's understand why liability is a sensitive topic in AI.
➡️ What is AI Liability?
Liability, from a legal perspective, is the legal responsibility for one's acts or omissions. Product liability is a narrower legal concept, and the one most relevant to AI: it refers to the legal responsibility of manufacturers and other actors in a product's value chain for damage caused by that product.
When we discuss AI liability, we are discussing the rules that apply when an AI system causes harm. Let's use a hypothetical example:
An AI-powered chatbot falsely accuses a person of being a convicted criminal, describes the crimes this person supposedly committed, and shows related pictures. Every time someone types this person's name or refers to them (in different prompts from different users worldwide), the AI system repeats the same allegations as if they were factual.
The AI-powered chatbot is then integrated into the same company's search engine, and the false allegations keep showing up when people search for this person's name.
The person is in deep psychological distress, and the situation has also been economically damaging. They are currently job hunting and suspect that the constant negative feedback from recruiters might be related to the false accusations shown by the AI chatbot and by the same company's search engine.
The case above is invented, but it is technically plausible, and something similar has already happened. As we know, AI systems often hallucinate: they output false information. Because these systems are trained on information scraped from the internet, their training datasets also include personal information, so they may output false information about specific people. We are also seeing AI chatbots being integrated into search engines, so everything described above could potentially happen.
Going back to AI liability: in the case above, will the AI company be held responsible for the harm the person suffers? The liability rules applicable to AI systems will determine the answer.
If the person sues the company and a judge determines that it must be held liable and compensate the victim, potentially thousands of similar cases might be brought before courts demanding compensation, especially bearing in mind that the most popular general-purpose AI systems are used by hundreds of millions of people every day.
If compensation in each lawsuit is set at hundreds of thousands, or even millions, of dollars, and more countries begin adopting similar AI liability rules, it becomes clear that AI companies could indeed face bankruptcy.
The hypothetical case above makes it clear why liability rules matter and why AI companies fear them.
➡️ AI vs. Software
When we compare AI with traditional software, there is a key difference that makes liability rules even more consequential. Pay attention to the definition of an AI system in the EU AI Act:
"'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
The AI system is trained by its developer, but the developer does not pre-program each individual output that will answer each user input. The AI system will "infer, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
How do we create a liability system where the product itself "will infer"? Is it possible to have built-in filters and guardrails effective enough that the developer can guarantee, in advance, that any potential output of the system, even an unpredictable one, will be harmless?
An important aspect of this discussion will be the rules on the burden of proof, that is, the elements the victim will have to prove before being entitled to compensation. Here, I want to highlight some provisions of the new Product Liability Directive (PLD) so that you can understand the major impact it might have.
➡️ How the New PLD Will Change the AI Ecosystem
Article 10 of the new PLD covers the burden of proof and establishes:
"1. Member States shall ensure that a claimant is required to prove the defectiveness of the product, the damage suffered, and the causal link between that defectiveness and that damage."
The new PLD follows what we call "strict liability": the victim does not need to prove fault or negligence on the part of the company. When AI is involved, the victim is only required to prove that:
✅ the AI system was defective;
✅ they suffered harm; and
✅ the harm was caused by that defectiveness.
Defective AI systems
When AI is involved, one of the difficulties will be proving that the AI system was defective. What is a defective AI system? Will AI hallucinations be considered a form of defectiveness, even though they are expected and disclosed in the system card? Will biased AI systems be considered defective? In this context, provisions on the presumption of defectiveness will be extremely important.
Paragraphs 2 and 4 of this article establish situations in which the defectiveness of the product will be presumed. Paragraph 2 states:
"2. The defectiveness of the product shall be presumed where any of the following conditions are met:
(a) the defendant fails to disclose relevant evidence pursuant to Article 9(1);
(b) the claimant demonstrates that the product does not comply with mandatory product safety requirements laid down in Union or national law that are intended to protect against the risk of the damage suffered by the injured person; or
(c) the claimant demonstrates that the damage was caused by an obvious malfunction of the product during reasonably foreseeable use or under ordinary circumstances."
Item (b) above is extremely important, as the victim can potentially obtain a presumption of defectiveness if they demonstrate that the AI system does not comply with the EU AI Act or other relevant Union or national law.
The AI Act contains numerous rules regarding high-risk AI systems. If, for example, a provider of a high-risk AI system does not comply with them, it might be fined under the AI Act's penalty regime, whose fines reach, for the most serious violations, up to 35 million euros or 7% of the company's total worldwide annual turnover, whichever is higher.
👉 But that's not all: failing to comply with the AI Act may also create a presumption of defectiveness against the company, opening the door to major compensation claims under the new PLD. This is a game-changing factor and could be a dealbreaker for some AI companies.
The fourth paragraph contains additional conditions for presuming the defectiveness of the product or the causal link between the defectiveness and the damage, which might have a meaningful impact on the AI ecosystem:
"(...) 4. A national court shall presume the defectiveness of the product or the causal link between its defectiveness and the damage, or both, where, despite the disclosure of evidence in accordance with Article 9 and taking into account all the relevant circumstances of the case:
(a) the claimant faces excessive difficulties, in particular due to technical or scientific complexity, in proving the defectiveness of the product or the causal link between its defectiveness and the damage, or both; and
(b) the claimant demonstrates that it is likely that the product is defective or that there is a causal link between the defectiveness of the product and the damage, or both."
The presumption here is even broader, and it acknowledges the "black box" problem: the difficulty of explaining AI's inner workings and decisions.
👉 As a consequence, according to this article, if it is too difficult to prove that the AI system was defective when it caused harm, national courts may presume that it was. This is a game-changing factor and could be a dealbreaker for some AI companies.