🤖 AGI: Potential Legal Risks
Emerging AI Governance Challenges | Paid Subscriber Edition | #161
☀️ Hi, Luiza Jarovsky here. Welcome to this newsletter's 161st edition, read by 47,200+ subscribers in 160+ countries.
🌍 We are a leading AI governance publication helping to shape the future of AI policy, compliance, and regulation.
💎 This is a paid subscriber edition, where I cover emerging AI governance challenges. If you're not a paid subscriber yet, upgrade here.
🤖 AGI: Potential Legal Risks
The year has barely started, and the debate around AGI—Artificial General Intelligence—is already in full swing. It started with a January 6 blog post by Sam Altman, the CEO of OpenAI, where he wrote:
“As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.”
and also:
“We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.”
For those unfamiliar with the issue, what exactly AGI is and when it will arrive are disputed topics, with opinions ranging from “we already have AGI” to “AGI will never be possible.”
It's outside my lane to discuss what AGI is or when it might arrive. From an AI governance perspective, this is a speculative question that doesn't change our focus: whether or not AGI emerges, any cutting-edge technology must be governed. We must ensure that people remain at the center of its development and that fundamental rights are respected.
In today's article, I want to explore existing definitions of AGI and the specific legal challenges it might pose if/when it is achieved.
1️⃣ Definitions of AGI
Below are some widely cited definitions of AGI:
→ Google Cloud's definition
“AGI refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. It is a type of AI that aims to mimic the cognitive abilities of the human brain.
In addition to the core characteristics mentioned earlier, AGI systems also possess certain key traits that distinguish them from other types of AI:
Generalization ability: AGI can transfer knowledge and skills learned in one domain to another, enabling it to adapt to new and unseen situations effectively.
Common sense knowledge: AGI has a vast repository of knowledge about the world, including facts, relationships, and social norms, allowing it to reason and make decisions based on this common understanding.”
→ Amazon's definition
“An AGI system can solve problems in various domains, like a human being, without manual intervention. Instead of being limited to a specific scope, AGI can self-teach and solve problems it was never trained for. AGI is thus a theoretical representation of a complete AI that solves complex tasks with generalized human cognitive abilities.”
→ OpenAI's definition
“(…) highly autonomous systems that outperform humans at most economically valuable work.” (OpenAI says its mission is to ensure that AGI benefits all of humanity.)
→ OpenAI co-founder Ilya Sutskever's definition of superintelligence
It looks like "superintelligence" is largely a rebranding of AGI, so I'm including his approach here, too:
“Superintelligent systems are actually going to be agentic in a real way, as opposed to the current crop of ‘very slightly agentic’ AI. They’ll ‘reason’ and, as a result, become more unpredictable. They’ll understand things from limited data. And they’ll be self-aware.”
→ Gary Marcus’ definition
“a shorthand for any intelligence ... that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence”.
2️⃣ Potential Legal Risks
From the definitions above, I've extracted a list of characteristics commonly used to describe AGI, focusing on those with legal relevance. For each characteristic, I've highlighted some of the potential legal risks:
→ Unpredictability: AGI may exhibit behavior or decision-making that is difficult to foresee due to the complexity of its reasoning processes or emergent properties.
╰┈➤ Among the most important legal issues related to unpredictability are liability risks. Product liability laws, for example, are often built on the distinction between the "normal" and the "defective" functioning of a product. Take the new EU Product Liability Directive (PLD), which entered into force last year and applies to AI. Here's what it says in Art. 7(2):
"In assessing the defectiveness of a product, all circumstances shall be taken into account, including:
(a) the presentation and the characteristics of the product, including its labelling, design, technical features, composition and packaging and the instructions for its assembly, installation, use and maintenance;
(b) reasonably foreseeable use of the product;
(c) the effect on the product of any ability to continue to learn or acquire new features after it is placed on the market or put into service;
(d) the reasonably foreseeable effect on the product of other products that can be expected to be used together with the product, including by means of inter-connection;
(e) the moment in time when the product was placed on the market or put into service or, where the manufacturer retains control over the product after that moment, the moment in time when the product left the control of the manufacturer;
(f) relevant product safety requirements, including safety-relevant cybersecurity requirements;
(g) any recall of the product or any other relevant intervention relating to product safety by a competent authority or by an economic operator as referred to in Art. 8;
(h) the specific needs of the group of users for whose use the product is intended;
(i) in the case of a product whose very purpose is to prevent damage, any failure of the product to fulfil that purpose."
╰┈➤ If an AI system is expected to be unpredictable, and that unpredictability leads to harm, it will be difficult for the victim to prove a defect: the harmful behavior may be indistinguishable from the system's "normal" functioning.
╰┈➤ Yes, Art. 10 provides for a presumption of defectiveness where, due in particular to technical or scientific complexity, it would be excessively difficult for the claimant to prove it. However, that presumption must be recognized by a court, and the AI company will have the right to rebut it.
╰┈➤ We are already witnessing legal challenges arising from unpredictability: plaintiffs suing over AI companions/characters will often have to prove the existence of a defect and sometimes also negligence by the AI company. When unpredictability is the rule for these AI chatbots, liability challenges follow, as the toy sketch below illustrates.
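To make the unpredictability point concrete, here is a minimal toy sketch in Python. It is not any vendor's actual decoding code; the outputs and probabilities are invented for illustration. It only shows why sampling-based systems can return different outputs for identical inputs and settings, which is exactly what strains the PLD's "reasonably foreseeable use" criterion:

```python
# Toy sketch: why sampled AI outputs are hard to foresee.
# The outputs and probabilities are invented for illustration;
# this is not any real model's or vendor's decoding code.
import random

# Hypothetical distribution over outputs a model might assign to one prompt.
next_output_probs = {
    "safe, expected answer": 0.80,
    "subtly wrong answer": 0.15,
    "harmful edge-case answer": 0.05,
}

def sample_output(probs):
    """Draw one output according to the model's probability distribution."""
    outputs, weights = zip(*probs.items())
    return random.choices(outputs, weights=weights, k=1)[0]

# Identical input, identical settings, potentially different outcomes per run.
# At scale, the 5% branch will occur, even if no single run "foresaw" it.
for run in range(5):
    print(f"run {run}: {sample_output(next_output_probs)}")
```

Each individual run is the system "working as designed," so a harmful low-probability output is not obviously a "defect" in the product liability sense.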
→ Transfer Learning: AGI will apply knowledge and skills learned in one context to new and unfamiliar situations or tasks.
╰┈➤ Transferring learning between different fields might lead to new data protection challenges, as personal data will also be transferred from one context to another.
╰┈➤ Among the data protection-related challenges are personal data leaks, reputational harm, breaches of contextual integrity (as Prof. Helen Nissenbaum frames the topic; see her book Privacy in Context), and privacy harm in general, as personal data is transferred between contexts without transparency or consent (or another lawful basis to process personal data when the GDPR applies).
╰┈➤ Transferring learning might also lead to increased manipulation and consumer harm, as people will be unaware that the AI system learned about them in another context/application and might use that knowledge to persuade, manipulate, or harm them. The sketch below illustrates, in simplified form, how a representation learned in one context can be reused in another.
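For readers less familiar with the technique behind this risk, here is a minimal, self-contained sketch of transfer learning, assuming Python with scikit-learn; the "source" and "target" contexts, the data, and the tasks are all synthetic and hypothetical. It shows how a representation learned in one context can be reused in another, carrying over whatever the source data encoded, with no fresh transparency or consent step in the new context:

```python
# Minimal, self-contained sketch of transfer learning (synthetic data only).
# The "source" and "target" contexts are hypothetical labels for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source context: plenty of behavioural data, collected for one purpose.
X_source = rng.normal(size=(1000, 20))
y_source = (X_source[:, :5].sum(axis=1) > 0).astype(int)

# Learn a representation (a hidden layer) in the source context.
source_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                             random_state=0)
source_model.fit(X_source, y_source)

def transfer_features(X):
    """Reuse the hidden layer learned in the source context as features."""
    W, b = source_model.coefs_[0], source_model.intercepts_[0]
    return np.maximum(X @ W + b, 0)  # ReLU activation of the hidden layer

# Target context: a different task with far fewer examples.
X_target = rng.normal(size=(50, 20))
y_target = (X_target[:, 5:10].sum(axis=1) > 0).astype(int)

# The target model inherits everything the source representation encodes;
# nothing in the code requires a new transparency or consent step.
target_model = LogisticRegression().fit(transfer_features(X_target), y_target)
print("target-task accuracy:",
      target_model.score(transfer_features(X_target), y_target))
```

In real systems the transferred representation is a large pretrained model rather than a tiny hidden layer, but the governance point is the same: the target context inherits whatever the source context learned.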