👋 Hi, Luiza Jarovsky here. Welcome to our 165th edition, read by 49,300+ subscribers in 160+ countries. Not a subscriber yet? Join us.
🌎 We are a leading AI governance publication helping to shape the future of AI policy, compliance, and regulation. This is a pivotal moment for advancing AI governance—it's great to have you here!
🥀 AI Agents: RIP Autonomy
Three days ago, OpenAI launched Operator, “an agent that can use its own browser to perform tasks for you.” It's currently available to ChatGPT Pro users in the U.S. ($200/month) but is expected to eventually become part of ChatGPT. The company published a 23-minute demo video, which you can watch here.
From an AI governance perspective, the launch of Operator is arguably the most significant AI-related announcement since the release of ChatGPT in November 2022, as we enter the “agentic wave” and start facing its legal and ethical challenges.
Today, I want to discuss the agentic wave and explore why I call it the beginning of the death of human autonomy as we know it, especially through the lens of AI governance.
1️⃣ ChatGPT's AI Governance Disruption
Given that most big tech companies have already implemented some form of “agentic AI” features, as I discussed in a recent edition, why is OpenAI's launch of Operator significant?
I explain this by referring back to ChatGPT: by November 2022, large language models, general-purpose AI chatbots, and generative AI applications were not novel in and of themselves. Numerous studies and real-world applications in this area had been released years earlier. However, in terms of the combined technological, social, cultural, economic, and legal impact it created, the launch of ChatGPT was unmatched.
Probably due to smart product, marketing, and PR strategies—which created massive hype worldwide—two months after its launch, ChatGPT reached 100 million monthly active users, becoming the fastest-growing consumer application in history. It dominated headlines, and from lawyers to engineers, everyone was talking about “the chat,” including its downsides, overpromises, privacy shortcomings, and “hallucinations.” The hype wave devoured everything, prompting companies, investors, and entrepreneurs to pivot and change priorities while people everywhere started questioning whether their job security was threatened.
ChatGPT also opened a significant cultural divide around AI and copyright, one that persists to this day and is evident in the AI copyright lawsuits that continue to pile up. Creators and copyright holders—book authors, visual artists, news media companies, and more—were furious when they discovered that their copyrighted works had been used to train ChatGPT without consent or compensation. Lawsuits against other AI companies followed, and many people came to see AI companies as essentially unethical and exploitative of human creators.
The launch of ChatGPT also changed the course of AI regulation. After EU lawmakers noticed the magnitude of ChatGPT's impact, they modified the existing draft of the EU AI Act to tackle some of the newly observed legal challenges. Following their European counterparts, many other countries woke up to the urgent need to regulate AI and kicked off official lawmaking efforts. In the US, there is still an ongoing regulatory boom at the state level, with hundreds of new AI bills being introduced.
The hype around ChatGPT had a profound impact on AI governance, serving as the initial trigger that led to industry-wide transformation. I'm usually optimistic, and I see this transformation as a wake-up call that has inspired us to take action. I'm encouraged by the growing number of excellent professionals getting directly involved in AI governance, which gives me hope for the future.
In my view, OpenAI's Operator will be the initial trigger for a second wave within the broader AI wave: the agentic wave. Given its characteristics, this wave will likely be even more challenging from ethical and legal perspectives, and it will pose a direct threat to human autonomy as we know it. I explain what I mean below.
2️⃣ The Agentic Wave
Before I discuss Operator more specifically, let's start with a recap of the definition of an AI agent. According to this article by IBM, an AI agent:
“(…) refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools.
AI agents can encompass a wide range of functionalities beyond natural language processing including decision-making, problem-solving, interacting with external environments and executing actions.”
Amazon Web Services’ website provides an interesting example of a potential agentic AI application:
“Consider a contact center AI agent that wants to resolve customer queries. The agent will automatically ask the customer different questions, look up information in internal documents, and respond with a solution. Based on the customer responses, it determines if it can resolve the query itself or pass it on to a human.”
While Generative AI applications can, for example, answer questions, summarize text, and create synthetic images and videos, agentic AI applications can execute more complex and multi-step tasks. They can not only create content but also take additional steps, such as implementing that content in a specific practical context.
At this point, we are at the very beginning of the agentic wave, and many of the major societal challenges we'll face in 10 years—resulting from the widespread deployment of AI agents—don't yet exist and are difficult to visualize.
We can compare the 2020s with the 1990s: the commercial internet became widely available in the 1990s, and I'm confident that almost no one using the internet at the time could have foreseen the massive impact of social media on teens’ mental health, political polarization, and the spread of misinformation during the 2010s and 2020s.
In the same way, it's difficult to imagine what the technological reality will be in 2035 or 2045. Still, we can start by observing these early agentic AI applications, such as OpenAI's Operator, some of their early legal and ethical challenges, and where their developers seem to be heading.
From a legal perspective, four areas are already visibly impacted by the “agentic AI” trend: privacy, bias and fairness, manipulation, and liability. On the topic, check out my recent deep dive into the Legal Challenges of AI Agents, where I explain each of them in more detail.
Beyond legal compliance, AI agents are essentially a threat to human autonomy, which is a particularly challenging aspect to regulate and govern broadly.
As I noted in a recent social media post, recent AI applications have been fostering disempowerment and dependency, reducing people's autonomy and making them more vulnerable to misinformation and manipulation.
For example, in recent weeks, Google has started prompting Gmail users to summarize their emails using AI. Additionally, every time I click “reply” on an email, the screen shows me a hyperlinked “Help me write,” which leads to an AI-powered functionality that drafts the reply for me. For business account users, when you open a Google Docs file, the “Help me write” feature now appears prominently at the top. These are subtle UX design changes made by Google, but they have a profound impact on how millions of people incorporate automation into their daily lives.
With these changes, Google and other companies are taking the Generative AI wave to the next level, laying the foundation for the agentic wave and the loss of our autonomy.
They are also driving a cultural shift around technology. Your email provider is no longer just the platform that intermediates your communications; it can also replace you, summarizing your messages and writing your replies. Google Docs is no longer just an online word processor that lets you create documents; it will write the documents for you.
OpenAI understands this and is openly intentional about this massive shift. If you have doubts, watch this 30-second clip where Sam Altman, OpenAI's CEO, states that given how powerful they expect AI to become, changes to the social contract might be needed—whatever he means by that.
OpenAI's Operator takes the “let AI do everything for you” approach to the next level. You can prompt it to buy groceries for you, book a restaurant table or a hotel room, conduct financial operations, reply to emails, and more. One of the key breakthroughs is that no APIs are needed. According to OpenAI:
“Operator can ‘see’ (through screenshots) and ‘interact’ (using all the actions a mouse and keyboard allow) with a browser, enabling it to take action on the web without requiring custom API integrations.”
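What OpenAI describes is, at its core, a loop: capture a screenshot, ask a model what to do next, execute that action with simulated mouse and keyboard events, and repeat. OpenAI has not published Operator's internals, so the sketch below is entirely hypothetical; the model call and the browser controls are stubbed, and every function name is an assumption used only to make the control loop concrete.

```python
# Hypothetical sketch of a screenshot-driven agent loop (no APIs needed):
# observe the screen, let a model choose the next action, execute it.
# All components are stubs; this is not how Operator actually works.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""

def take_screenshot() -> bytes:
    """Stub: a real agent would capture the browser viewport here."""
    return b"<pixels>"

def model_next_action(screenshot: bytes, goal: str, step: int) -> Action:
    """Stub: a real agent would send the screenshot and goal to a vision model."""
    script = [Action("click", 120, 80), Action("type", text=goal), Action("done")]
    return script[min(step, len(script) - 1)]

def run_agent(goal: str, max_steps: int = 10) -> list[Action]:
    """Loop: observe -> decide -> act, until the model signals completion."""
    trace = []
    for step in range(max_steps):
        action = model_next_action(take_screenshot(), goal, step)
        trace.append(action)  # a real agent would dispatch mouse/keyboard events here
        if action.kind == "done":
            break
    return trace

print([a.kind for a in run_agent("book a table for two")])
```

Notice what the loop implies for governance: because the agent operates through the same screen, mouse, and keyboard a human would use, there is no API boundary at which a website could reliably detect, constrain, or audit it.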
The agentic wave promotes the idea that you should relax, disengage, and let AI take over. Is there anything wrong with that?
When you let AI do your tasks for you, including the decision-making behind planning and executing them, it will soon be doing your thinking for you.
If strict AI governance measures aren't implemented now, by the time the agentic wave matures and AI companies’ plans are fully realized (in a few years), we’ll be living in a strange world where people have little individual autonomy left, merely following the flow of AI agents directing their lives.
3️⃣ The Beginning of the End of Autonomy
As I mentioned above, Operator marks the beginning of the agentic wave, especially from social, cultural, legal, and governance perspectives. As in the 1990s, it's difficult to get the full picture of what this wave might become in a few years or decades.
However, the “full implementation” of the essential tenet of the agentic wave (“for the sake of maximum productivity, humans should relax and let AI take control”) has alarming consequences, and we, as free individuals and societies, may never want to see it fully realized. So this is the time to understand what’s at stake and take immediate action.
Based on what OpenAI announced three days ago, I want to highlight some of the features we should pay attention to, as they may become important factors negatively impacting autonomy. My guess is that, given the massive competition, OpenAI will aim to “move fast and break things,” quickly integrating Operator into ChatGPT for all users and expanding its capabilities.
A) Self-assessment and self-correction
Regarding how Operator detects and corrects mistakes, OpenAI stated: