👋 Hi, Luiza Jarovsky here. Welcome to our 162nd edition, read by 47,600+ subscribers in 160+ countries. Not a subscriber yet? Join us.
🌎 We are a leading AI governance publication helping to shape the future of AI policy, compliance & regulation. It's great to have you here!
🎩 AI Guardrails: A Magic Solution?
Most people missed it, but Anthropic and Universal Music Group - the largest music company in the world - have quietly reached an agreement that might shape the future of AI copyright litigation. Anthropic's lawyers were strategic (and might win); here's why:
1️⃣ The 2023 Lawsuit
In October 2023, Universal Music and others filed an AI copyright lawsuit against Anthropic. They argued that the AI company trained its AI model using songs to which they hold the rights and that, when prompted, Claude, Anthropic's AI chatbot, would reproduce the lyrics.
2️⃣ The 2025 Agreement
On January 2, 2025, Anthropic and the music companies reached a deal in which Anthropic agreed to keep its existing guardrails to prevent copyright-infringing outputs. Here's what the agreement says:
"Anthropic will maintain its already implemented Guardrails in its current AI models and product offerings. With respect to new large language models and new product offerings that are introduced in the future, Anthropic will apply Guardrails on text input and output in a manner consistent with its already-implemented Guardrails. Nothing herein prevents Anthropic from expanding, improving, optimizing, or changing the implementation of such Guardrails, provided that such changes do not materially diminish the efficacy of the Guardrails."
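To make the mechanism concrete: an output-side guardrail of the kind the agreement describes can be pictured as a filter that checks a model's response against protected text before it reaches the user. The sketch below is purely illustrative - Anthropic's actual implementation is not public, and the lyric, threshold, and function names here are all my own assumptions:

```python
# Toy illustration of an output-side guardrail: refuse responses that
# reproduce a long verbatim span of protected lyrics. NOT Anthropic's
# actual system (which is not public) -- just a minimal sketch of the
# general technique the agreement refers to.

PROTECTED_LYRICS = [
    "imagine all the people living life in peace",  # hypothetical entry
]

def longest_common_run(a: str, b: str) -> int:
    """Length (in words) of the longest run of consecutive words shared by a and b."""
    aw, bw = a.lower().split(), b.lower().split()
    best = 0
    for i in range(len(aw)):
        for j in range(len(bw)):
            k = 0
            while i + k < len(aw) and j + k < len(bw) and aw[i + k] == bw[j + k]:
                k += 1
            best = max(best, k)
    return best

def guardrail(output: str, threshold: int = 5) -> str:
    """Block outputs that reproduce >= threshold consecutive words of protected text."""
    for lyric in PROTECTED_LYRICS:
        if longest_common_run(output, lyric) >= threshold:
            return "[refused: output matched protected lyrics]"
    return output

print(guardrail("Here are the words: imagine all the people living life in peace"))
print(guardrail("Here is an original poem about peace and people"))
```

Real guardrails are far more sophisticated (fuzzy matching, classifiers, input-side filtering), but the legal logic is the same: the infringement check happens at deployment, not at training.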
3️⃣ Why It Matters
While this lawsuit has not been fully concluded, this deal is extremely important and shows Anthropic's lawyers' strategy. Why?
The "fair use" debate - whether it's fair use to train AI with copyrighted works - is still open, and dozens of AI copyright lawsuits in courts worldwide are currently examining it from various angles.
With this agreement, Anthropic's lawyers took the judge's attention away from the training phase - and AI companies' questionable practice of obtaining content without consent or compensation - and focused on the deployment phase instead. They obtained a legal acknowledgment that their existing technical guardrails might be effective tools to prevent infringing outputs.
By focusing on effective technical guardrails during the deployment phase, they minimize issues related to the lack of consent and compensation. How? If the guardrails are effective enough to ensure that the AI system's outputs don't infringe on copyrights, training AI with copyrighted material might be fair use. The argument would be that it's a backstage process and doesn't harm anyone's livelihood.
If other courts follow suit, we might end up with the official interpretation that, in the US, it's fair use to train AI with copyrighted works as long as the AI company proves that its guardrails effectively prevent infringing outputs.
Will AI guardrails be the ‘magic solution’ AI companies have been waiting for to resolve the growing number of AI copyright lawsuits? It's too early to tell, especially since this lawsuit is still ongoing and several claims remain unanswered.
*I personally don't think AI guardrails will solve the main AI copyright issues; licensing agreements or other consent-based deals that compensate creators will still be needed.
I'll keep you posted.
🎓 AI Governance Careers Are Surging: Get Ahead
Are you aiming to become a leader in AI governance or responsible AI? Are you a lawyer interested in tackling AI-related issues? Or perhaps you’re an AI developer keen to explore the ethical and legal challenges of AI?
I invite you to join the 17th cohort of our AI Governance Training and gain the skills to meet the emerging challenges in this dynamic field. This 12-hour live online program goes beyond standard certifications and offers:
8 live sessions with me, curated self-study materials, and quizzes.
A training certificate and 13 CPE credits pre-approved by the IAPP.
A meet-and-greet session with peers and an optional office hours meeting to discuss your questions or career goals.
The 17th cohort begins on February 4, and only a few spots remain. Learn more about the program and hear from recent participants here.
Don't miss it: join 1,000+ professionals from 50+ countries who have advanced their careers through our programs. Save your spot:
*If cost is a concern, we offer discounts for students, NGO members, and individuals in career transition. To apply, please fill out this form.
🏛️ Regulating Multifunctionality
The paper "Regulating Multifunctionality" by Cary Coglianese and Colton Crum is an excellent read for everyone in AI governance and a great way to kick off 2025. Key quotes to understand the paper:
1️⃣ Use Heterogeneity in AI
"The uses for which AI can be deployed are virtually limitless. It is this use heterogeneity—especially with the multifunctional nature of foundation models and generative AI tools—that presents the most substantial challenge when it comes to regulating AI. Use heterogeneity arises from a) the existence of a vast array of different AI tools being designed for specific but varied purposes in mind, and b) AI tools that are themselves designed for multiple purposes, perhaps only some of which can be fully anticipated."
2️⃣ Heterogeneity and AI Governance
"The heterogeneity of problems posed by AI creates a core challenge for AI governance. This is true for both varied single-function AI tools as well as for multifunctional AI tools. The problems presented by a specialized AI tool that misreads MRIs used to identify cancer, for example, will obviously be quite different from those presented by a specialized AI tool assisting in the automatic braking of an autonomous vehicle. Likewise, the problems created by an LLM tool hallucinating about legal cases will be different from these same foundational LLM tools put to use in social media that contribute to teenagers engaging in self-harm. The question thus becomes how society should approach regulation when both uses and problems can be so varied. In other words, how does society regulate the AI equivalent of the Swiss army knife?"
3️⃣ Flexible Regulatory Strategies
"Instead of rigid, prescriptive rules, the future of AI regulation will likely depend on more flexible regulatory strategies. At least 4 more feasible regulatory strategies exist that could be considered for the governance of multifunctional AI: performance standards, disclosure regulation, ex post liability, and management-based regulation. (...) Moreover, regulators can rely on a combination of these strategies depending on their overarching goals and context. Importantly, none of these strategies necessarily demand a 'one size fits all' approach."
4️⃣ Vigilance and Agility
"(...) But selecting a particular regulatory strategy—or a combination of strategies—will be only the beginning. Perhaps the most difficult work in regulating multifunctionality will lie in providing ongoing vigilance and agility. Effective AI governance will require constantly adapting, issuing alerts, and prodding action. Regulators need to see themselves as overseers of dynamic AI ecosystems, staying flexible, remaining vigilant, and always seeking ways to improve."
🚀 Daily AI Governance Resources
Thousands of people receive our daily emails with educational and professional resources on AI governance, along with updates about our live sessions and programs. Join our learning center for free here:
🎙️ Today: AI and Copyright Infringement
AI copyright lawsuits are piling up, and companies are racing to adapt. Is there a way to protect human creativity in the age of AI? You can't miss my first live talk of 2025 happening today - here’s why:
I invited Andres Guadamuz, associate professor of Intellectual Property Law at the University of Sussex, editor-in-chief of the Journal of World Intellectual Property, and an internationally acclaimed IP expert to discuss with me:
- Copyright infringement in the context of AI development and deployment;
- The legal debate unfolding in the EU and U.S.;
- Potential solutions;
- Protecting human creativity in the age of AI.
👉 Hundreds of people have already confirmed; register here and join us live today.
🎬 Join 24,600+ people who subscribe to my YouTube Channel, watch all my previous live talks, and get notified when I publish new videos.
💬 Unpopular Opinion
Technical advances will take a back seat, and AI Governance will dominate the headlines in 2025. Why? Read my newsletter article with this year's top 5 AI governance trends.
👉 Share your thoughts on LinkedIn, X/Twitter, Bluesky, or Substack Notes.
📚 AI Book Club: What Are You Reading?
📖 More than 2,000 people have already joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The 16th recommended book was “The Tech Coup: How to Save Democracy from Silicon Valley” by Marietje Schaake.
📖 Ready to discover your next favorite read? See our previous reads and join the AI book club here:
⚙️ A Primer: Evolution & Impact of AI Agents
The paper "Navigating the AI Frontier: A Primer on the Evolution and Impact of AI Agents" by the World Economic Forum is a must-read for everyone who wants to learn more about AI agents.
╰┈➤ The paper begins with a definition (based on the one proposed by the International Organization for Standardization):
"An AI agent can be broadly defined as an entity that senses percepts (sound, text, image, pressure, etc.) using sensors and responds (using effectors) to its environment. AI agents generally have the autonomy (defined as the ability to operate independently and make decisions without constant human intervention) and authority (defined as the granted permissions and access rights to perform specific actions within defined boundaries) to take actions to achieve a set of specified goals, thereby modifying their environment."
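The sense-act loop in that definition can be sketched in a few lines of code. The toy thermostat below is my own illustration, not from the WEF paper: the agent senses a percept (temperature), decides autonomously, and acts through an effector that modifies its environment in pursuit of a specified goal:

```python
# Minimal sense-act agent loop illustrating the ISO-style definition:
# an agent senses percepts via sensors, decides autonomously, and acts
# via effectors to modify its environment toward a goal. All names here
# are illustrative, not from the WEF paper.

class Environment:
    def __init__(self, temperature: float):
        self.temperature = temperature

    def sense(self) -> float:
        # The agent's "sensor": returns the current percept.
        return self.temperature

    def act(self, heating: bool) -> None:
        # The agent's "effector": acting modifies the environment.
        self.temperature += 1.0 if heating else -0.5

class ThermostatAgent:
    def __init__(self, goal: float):
        self.goal = goal  # the specified goal

    def decide(self, percept: float) -> bool:
        # Autonomous decision: no human intervention in the loop.
        return percept < self.goal

def run(env: Environment, agent: ThermostatAgent, steps: int) -> float:
    for _ in range(steps):
        env.act(agent.decide(env.sense()))
    return env.temperature

final_temp = run(Environment(15.0), ThermostatAgent(goal=20.0), steps=10)
print(final_temp)
```

Even this trivial loop exhibits the two properties the definition highlights: autonomy (the decision rule runs without human input) and authority (the agent is permitted to change the environment's state directly).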
╰┈➤ One of the most interesting parts is the section about risks and challenges. Some highlights:
1️⃣ Over-reliance
"Increasing autonomy of AI agents could reduce human oversight and increase the reliance on AI agents to carry out complex tasks, even in high-stakes situations. Malfunctions of the AI agents due to design flaws or adversarial attacks may not be immediately apparent if humans are not in the loop. Additionally, disabling an agent could be difficult if a user lacks the required expertise or domain knowledge."
2️⃣ Disempowerment
"Pervasive interaction with intelligent AI agents could also have long-term impacts on individual and collective cognitive capabilities. For example, increased reliance on AI agents for social interactions, such as virtual assistants, AI agent companions, therapists, and so on, could contribute to social isolation and possibly affect mental well-being over time."
3️⃣ Societal resistance
"Resistance to the employment of AI agents could hamper their adoption in some sectors or use cases."
4️⃣ Employment implications
"The use of AI agents is likely to transform a variety of jobs by automating many tasks, increasing productivity and altering the skills required in the workforce, thus causing partial job displacement. Such displacement could primarily affect sectors reliant on routine and repetitive tasks, in industries such as manufacturing or administrative services."
5️⃣ Challenges in ensuring AI transparency and explainability
"Many AI models operate as “black boxes,” making decisions based on complex and opaque processes, thereby making it difficult for users to understand or interpret how decisions are made. A lack of transparency could lead to concerns about potential errors or biases in the AI agent’s decision-making capabilities, which would hinder trust and raise issues of moral responsibility and legal accountability for decisions made by the AI agent."
📢 Spread AI Governance Awareness
Enjoying this edition? Share it with friends and colleagues:
🤖 Paid Edition: AGI's Potential Legal Risks
If you're curious about Artificial General Intelligence (AGI) and the legal challenges it might pose, you can't miss this newsletter's latest paid edition; here's why:
For those unfamiliar with the issue, what exactly AGI is and when it will arrive are disputed topics, with opinions ranging from “we already have AGI” to “AGI will never be possible.”
Regardless of the timeline, it's extremely important to discuss AGI's potential legal risks: some of them will be amplifications of legal issues we are already observing; others will be new and might have individual and collective consequences.
In last week's paid edition, I explored existing definitions of AGI and the legal challenges it might pose if/when it is achieved. For those interested in emerging AI governance challenges, it's an important exercise - even if we don't have AGI yet.
Paid subscribers can access all past and upcoming paid editions of this newsletter, where I cover emerging AI governance challenges. If you're not a paid subscriber yet, you can upgrade here.
🔥 Job Opportunities in AI Governance
Below are 10 new AI Governance positions posted in the last few days. This is a competitive field: if it's a relevant opportunity, apply today:
🇺🇸 SweetRush: AI Governance Freelance Consultant - apply
🇺🇸 Contentful: Senior Legal Counsel, AI Governance & IP - apply
🇳🇱 SkillLab: Data Science Internship, Responsible AI - apply
🇺🇸 Capco: Data Protection & AI Governance Advisor - apply
🇮🇹 Accenture: Responsible AI, Advisor Manager - apply
🇨🇭 Xebia: Data & AI Governance Consultant - apply
🇺🇸 Mastercard: Program Manager, AI Governance Program - apply
🇷🇴 GlobalLogic: AI Governance Specialist - apply
🇸🇬 BCG X: Senior Applied Scientist, Responsible AI - apply
🇨🇦 Clarivate: Senior AI Counsel - apply
🔔 More job openings: Join thousands of professionals who subscribe to our AI governance & privacy job board and receive weekly job opportunities.
Thank you for being part of this journey!
If you enjoyed this edition, here's how you can keep the conversation going:
1️⃣ Help spread AI governance awareness
→ Start a discussion on social media about this edition's topic;
→ Share this edition with friends, adding your critical perspective;
→ Share your thoughts in the comment section below 👇
2️⃣ Go beyond
→ If you're not a paid subscriber yet, upgrade your subscription here and start receiving my exclusive weekly analyses on emerging AI governance challenges;
→ Looking for an authentic gift? Surprise them with a paid subscription.
3️⃣ For organizations
→ Organizations promoting AI literacy can purchase 3+ subscriptions here and 3+ seats in our AI Governance Training here at a discount;
→ Organizations promoting AI governance or privacy solutions can sponsor this newsletter and expand their reach. Fill out this form to get started.
See you soon!
Luiza
The issue of governance is not new by any stretch. When I was teaching knowledge-based (or expert) systems development in the late '80s and early '90s, AI was in its infancy. We called it AI in general, but more specifically by whatever problem we were solving or mitigating through the use of KBSs and early neural networks, e.g., a Contract Development Expert System. AI was just a field of computer science, with KBSs and neural networks being named for their purpose under that same umbrella. We taught ethical software development as part of our core and advanced courses. Integrating an ethical set of rules into the design, from beginning to end, was part and parcel of our courseware. In fact, if no boundaries were written in, the student failed. So it's not a new issue. It must be ingrained from the beginning of development through implementation. I don't know how all of these current systems ended up being generic AI when they were developed to solve particular problems. I guess it's because the Wild West prevails and the bandits have overtaken the towns. Just my 10 cents' worth.