⚖️ Legal Challenges of AI Agents
AI Governance Professional Edition | Paid-Subscriber Only | #153
👋 Hi, Luiza Jarovsky here. Welcome to the 153rd edition of this newsletter on AI policy, compliance & regulation, read by 41,500+ subscribers in 155+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 This is an exclusive AI Governance Professional Edition featuring my in-depth analyses of AI compliance and regulation topics, which you won't find anywhere else. It's an excellent way to stay ahead in the fast-paced field of AI governance.
💼 Level up your career! This January, join me for the 16th cohort of our acclaimed AI Governance Training (8 live lessons; 12 hours total). Over 1,000 professionals have already benefited from our programs—don’t miss this opportunity. Students, NGO members, and professionals in career transition can request a discount.
AI agents—or agentic AI—are the latest buzzwords in the world of AI. It seems like every major tech company is now racing to develop the next disruptive AI agent:
Google launched a Google Cloud AI agent ecosystem program;
Microsoft announced new agentic capabilities for Copilot;
Amazon launched its Amazon Bedrock Agents;
Nvidia developed Generative AI-Powered Visual AI Agents;
Apple launched Apple Intelligence, which tech commentators have classified as a type of AI agent;
Meta built CICERO and announced it as “the first AI agent to achieve human-level performance in the complex natural language strategy game Diplomacy”;
and the list goes on.
For those new to the topic, let's start with a definition. According to this article by IBM, an AI agent:
“(…) refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools.
AI agents can encompass a wide range of functionalities beyond natural language processing including decision-making, problem-solving, interacting with external environments and executing actions.”
AWS’ website provides an interesting example of a potential agentic AI application:
“Consider a contact center AI agent that wants to resolve customer queries. The agent will automatically ask the customer different questions, look up information in internal documents, and respond with a solution. Based on the customer responses, it determines if it can resolve the query itself or pass it on to a human.”
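To make this concrete, below is a minimal, self-contained sketch of the loop the AWS example describes: an agent that repeatedly decides whether to gather more information, answer on its own, or hand the query off to a human. Everything here, including the stand-in "LLM," is an illustrative stub written for this newsletter, not a real vendor API:

```python
# Minimal sketch of a contact-center agent loop. The "LLM" is a stub
# returning canned decisions; in a real system it would be a model call.

def llm_decide(query: str, context: list[str]) -> tuple[str, str]:
    """Stub policy: search the docs once, then answer or escalate."""
    if not context:
        return ("SEARCH_DOCS", query)
    if "refund" in query.lower():
        return ("ANSWER", "Refunds are processed within 5 business days.")
    return ("ESCALATE", "No confident answer found.")

def search_internal_docs(topic: str) -> str:
    return f"[doc snippet about: {topic}]"  # stand-in for retrieval

def handle_query(query: str) -> str:
    context: list[str] = []
    for _ in range(5):  # hard bound on autonomous steps
        action, arg = llm_decide(query, context)
        if action == "SEARCH_DOCS":
            context.append(search_internal_docs(arg))  # look up internal docs
        elif action == "ANSWER":
            return arg  # the agent resolves the query itself
        else:
            return f"ESCALATED TO HUMAN: {arg}"  # hand-off to a person
    return "ESCALATED TO HUMAN: step budget exhausted"

print(handle_query("How long does a refund take?"))
```

The key point for the legal analysis below is the middle of the loop: the agent, not the user, decides which step to take next.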
While Generative AI applications can, for example, answer questions, summarize text, and create synthetic images and videos, agentic AI applications would be able to execute more complex, multi-step tasks: they would not only create content but also take additional steps, such as applying that content in a specific practical context.
We can also envision more sophisticated scenarios, such as Generative AI systems combined with AI agents, or multiple AI agents collaborating to execute even more complex tasks.
According to McKinsey, before foundation models (or general-purpose AI models), implementing AI agents was extremely difficult. Now, especially with the advent of Large Language Models (LLMs), AI agents:
“have the potential to adapt to different scenarios in the same way that LLMs can respond intelligibly to prompts on which they have not been explicitly trained. Furthermore, using natural language rather than programming code, a human user could direct a gen AI–enabled agent system to accomplish a complex workflow. A multiagent system could then interpret and organize this workflow into actionable tasks, assign work to specialized agents, execute these refined tasks using a digital ecosystem of tools, and collaborate with other agents and humans to iteratively improve the quality of its actions.”
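As a rough illustration of that pattern, here is a minimal sketch in which an orchestrator decomposes a natural-language goal into tasks and routes each one to a specialized agent. The planner and the agents are stubs, and every name, as well as the task split itself, is an assumption made for illustration:

```python
# Illustrative multi-agent orchestration: a planner decomposes a goal
# into tasks, and each task is routed to a specialized (stub) agent.

from dataclasses import dataclass

@dataclass
class Task:
    kind: str
    payload: str

def plan(goal: str) -> list[Task]:
    """Stub planner; in practice an LLM would decompose the goal."""
    return [Task("research", goal), Task("draft", goal), Task("review", goal)]

SPECIALISTS = {
    "research": lambda t: f"facts gathered for '{t.payload}'",
    "draft":    lambda t: f"draft produced for '{t.payload}'",
    "review":   lambda t: f"review notes on '{t.payload}'",
}

def run_workflow(goal: str) -> list[str]:
    results = []
    for task in plan(goal):             # interpret and organize the workflow
        agent = SPECIALISTS[task.kind]  # assign work to a specialized agent
        results.append(agent(task))    # execute the refined task
    return results

print(run_workflow("plan a week-long trip to Lisbon"))
```

Each hand-off in that chain is a place where data flows, errors propagate, and accountability can blur, which is exactly why the questions below get harder with multi-agent systems.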
Going back to the legal perspective, it's easy to see how AI agents, especially multi-agent systems, would not only exacerbate many of the existing ethical and legal issues behind Generative AI but also create new ones. Let's take a look at some of them.
1️⃣ Privacy
To train and program an AI agent to perform tasks aligned with the user's desired goals, more data—particularly personal data—will be required. This goes beyond the traditional data protection issues associated with training LLMs, such as massive scraping and disregard for data protection principles and data subjects’ rights.
Let's imagine an AI agent designed to plan and book trips for the user. The agent will likely have to be trained with large amounts of personal data from other users to understand how to execute each step of the planning and booking task in accordance with each user's preferences.
In addition, the agent will likely need access to additional personal and even sensitive information about the user requesting the task, such as financial details, the user's calendar and all existing events, contact list, personal preferences, family profile, previous trips, location history, booking details, loyalty/discount cards, and more.
In that context, how do we ensure that data protection principles such as data minimization and purpose limitation are respected? How can we avoid personal data being leaked within an agentic AI system? Will users of AI agents be able to exercise data subjects’ rights, such as the right to be forgotten, if they decide to stop using the AI agent?
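These questions remain open, but to illustrate what one technical mitigation could look like, here is a minimal sketch of a scoped data view for the hypothetical travel agent: it exposes only the fields a given task needs and logs every access, in the spirit of data minimization and accountability. The field names and the task-to-field mapping are invented for this example:

```python
# Sketch of data minimization for an agent: each task sees only the
# profile fields it needs, and every access is logged for audit.

ALLOWED_FIELDS = {
    "plan_itinerary": {"preferences", "previous_trips"},
    "book_flight":    {"full_name", "payment_token", "calendar"},
}

class ScopedProfile:
    def __init__(self, profile: dict, task: str):
        self._profile = profile
        self._allowed = ALLOWED_FIELDS.get(task, set())
        self.audit_log: list[str] = []

    def get(self, field: str):
        if field not in self._allowed:  # data minimization enforced here
            raise PermissionError(f"'{field}' is not needed for this task")
        self.audit_log.append(field)    # record access for accountability
        return self._profile[field]

profile = {"full_name": "A. User", "payment_token": "tok_123",
           "calendar": [], "preferences": ["aisle seat"], "previous_trips": []}

view = ScopedProfile(profile, task="plan_itinerary")
print(view.get("preferences"))   # allowed for this task
# view.get("payment_token")      # would raise PermissionError
```

A scope like this does not by itself answer the purpose-limitation or right-to-be-forgotten questions, but it shows how the architecture of an agent can make them easier or harder to address.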
2️⃣ Bias
In the last two years, we have seen numerous examples of biased Generative AI systems, such as when a user prompts an AI image generator for images of successful professionals and every result depicts a white male professional.
In the context of AI agents, any existing bias will be transmitted through the task-execution chain. Let's continue with the example of the AI agent that plans and books trips. Imagine this agent is biased in such a way that it never suggests areas with lower socio-economic status as part of a travel itinerary because it considers them “dangerous.” The agent will then make those areas' socio-economic problems worse, as fewer and fewer tourists will eat, sleep, or look for leisure activities there. The people living in those areas could have their livelihoods negatively affected by the AI system's biases without ever knowing it.
How can we prevent discrimination or enforce legal provisions ensuring fairness when the bias is baked into the AI agent? How do we make sure that AI agents will not exacerbate existing bias built into a given foundation model?
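Auditing is one place to start. As a hedged sketch, the snippet below shows a simple disparity check for the travel-agent example: measure how often recommended venues fall into each neighborhood group and flag systematic exclusion. The data, the grouping, and the threshold are all illustrative assumptions:

```python
# Toy disparity check: what share of recommended venues falls in each
# (hypothetical) neighborhood group across many generated itineraries?

from collections import Counter

def exposure_rates(itineraries: list[list[str]],
                   area_of: dict[str, str]) -> dict[str, float]:
    """Share of recommended venues falling in each area group."""
    counts = Counter(area_of[v] for trip in itineraries for v in trip)
    total = sum(counts.values())
    return {area: n / total for area, n in counts.items()}

area_of = {"museum_a": "high_income", "cafe_b": "high_income",
           "market_c": "lower_income", "gallery_d": "lower_income"}

itineraries = [["museum_a", "cafe_b"], ["museum_a"], ["cafe_b", "museum_a"]]

rates = exposure_rates(itineraries, area_of)
print(rates)  # lower-income venues never appear in these itineraries
if rates.get("lower_income", 0.0) < 0.05:
    print("WARNING: possible systematic exclusion of lower-income areas")
```

A check like this cannot prove discrimination in the legal sense, but it can surface the kind of silent exclusion described above before it causes real-world harm.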
3️⃣ Manipulation
In the context of AI companions and anthropomorphized chatbots like Replika and Character AI, we are seeing, sadly, how quickly people can become deeply attached to and dependent on AI systems, leading to manipulation and harm.
When we consider agentic AI systems, especially those coupled with anthropomorphic characteristics—including persuasive, emotionally charged, and “empathetic” language, as well as visual elements like avatars or a humanized appearance—their manipulative potential increases exponentially.