Thou Shalt Pay
All you need to know about AI liability in 2026 | Edition #261
👋 Hi everyone, Luiza Jarovsky, PhD, here. Welcome to the 261st edition of my newsletter, trusted by more than 88,600 subscribers worldwide.
Welcome to our first edition of 2026. Some are saying this is the year AI will transition from hype to pragmatism. I sincerely hope this will be the year when enforcement and oversight finally catch up.
🎓 Now is a great time to learn and upskill in AI. Here is how I can help:
Join my AI Governance Training [or apply for a discount]
Discover your next read in AI and beyond in my AI Book Club
Sign up for our Learning Center’s free educational resources
Watch my AI governance talks and learn from world experts
Subscribe to our job alerts for open roles in AI governance
Thou Shalt Pay
Many in AI seem to have a hard time understanding legal liability and the fact that companies often have to pay for the harm they cause.
Every now and then, I read baseless comments that sound more like wishful thinking from people who have absolutely no idea how liability actually works and who think that, for some reason, AI should be immune from legal oversight.
So let me break down where we currently are and what we should aim for in terms of holding AI companies accountable for the harm their products cause.
-
First, regardless of how shiny and ‘AGI-like’ an AI system might be, it is, from a legal perspective, a product (and sometimes also a service), just as a car, a computer, and a toothbrush are.
There are rules that apply to all sorts of products, such as the types of materials or manufacturing techniques allowed, the standards to be followed, the required design, and the intended performance.
There are also product liability rules, based on the general product rules, that specify what will happen when a product causes harm.
Product liability addresses questions such as when a company must compensate victims or their families if its products cause harm.
These rules will differ depending on the product and the state or country. They are usually proportional to the known or expected risks posed by the product.
In general, there are two main liability systems: strict liability and fault-based liability.
Strict liability means that even if there was no negligence or intent on the part of the company, it may still be held responsible for the harm its product causes (provided that other elements are proven, usually the ‘defect’ and the causal relationship between the harm and the defect).
In a fault-based liability system, on the other hand, the victim or the victim’s legal representative must prove negligence, recklessness, or intent on the part of the company.
Now, here is something that seems to blow the minds of many in AI:
In both liability systems, there may be cases in which a person was careless in using a product or used it in a risky or ‘dumb’ way, and the company that manufactured it will still be held responsible for the harm its product caused.
Yes, you read that correctly.
Even if the person used the product inadequately, the company behind it might have broken product rules (such as those governing mandatory warnings, product design, or safety measures), which would still make it responsible.
This is especially true with new technologies, where most people do not know how to use them properly or navigate their risks. (I would say even more so when general-purpose AI systems such as ChatGPT are involved, as they are designed for many possible uses.)
In these cases, misuse is often expected, and there will be greater scrutiny of warnings, guardrails, built-in safety, and design requirements.
The details will depend on the product’s regulatory requirements and the product liability rules of the specific jurisdiction, but, in general, that is how product liability works.
For AI, this means that even if a person uses an AI system in an inadequate, risky, or dumb way, the AI company might still be held responsible for the harm the system caused.
So, what are the accepted liability rules for AI systems today? If a person takes their own life after being encouraged by an AI chatbot to do so, does the AI company have to compensate the family?