☀️ Hi, Luiza Jarovsky here. Welcome to this newsletter's 160th edition, read by 46,600+ subscribers in 160+ countries.
🌍 We are a leading AI governance publication helping to shape the future of AI policy, compliance & regulation. Not a subscriber yet? Join us here.
🔥 You can now like, comment & share directly in the newsletter. Give it a try!
🔮 Top 5 AI Governance Trends for 2025
This year, AI governance is set to expand, consolidate, and become one of tech companies' main concerns. Below, I highlight the top 5 AI governance trends that will shape the future of AI in 2025.
1️⃣ Data Protection Authorities Will Start Catching Up
After the European Data Protection Board (EDPB) published Opinion 28/2024 in December, the enforcement gates of the GDPR were thrown open. This trend will continue strong in 2025, with more data protection authorities expected to issue fines against AI companies.
With Opinion 28/2024, the EDPB clarified its stance on core data protection issues, such as compliance with Article 6 of the GDPR (the lawful bases for processing). To understand what it means in practice, read my in-depth analysis here.
Only two days after the EDPB opinion was published, the Italian Data Protection Authority issued a €15 million fine against OpenAI (read my post about the topic). More fines will come this year; stay tuned.
2️⃣ AI Copyright Lawsuits Will Slow Down; We'll See More Licensing Deals
If you have been reading this newsletter in the last two years, you know there has been a massive wave of copyright lawsuits against AI companies. (*on the topic of AI and copyright, don't miss my live talk with Andres Guadamuz next week: register here).
Writers, artists, record labels, media companies, content platforms, and others have sued AI companies for using their copyrighted work to train AI without authorization or compensation. To understand the main issues at stake, read my in-depth analysis here.
There are still many open legal issues (e.g., when can fair use be claimed?), and it will take time for courts to decide all the pending lawsuits. My view is that, as we started to observe at the end of 2024, there will be more licensing deals between creators/platforms and AI companies.
There will also be new and creative attempts to find fair, legal, and financially sustainable ways to train AI while ensuring creators can earn a living.
3️⃣ The Brussels Effect Will Spread Further
Last week, I wrote about the new comprehensive AI law enacted in South Korea and some of its similarities with the EU AI Act, such as the risk-based approach, the protection of fundamental rights, transparency obligations in the context of deepfakes, provisions on standardization, and more.
Brazil is also in the process of approving its own AI law, which has many similarities with the EU AI Act - some of its provisions are direct translations of the EU law - but follows a principle-based approach (you can read my analysis here).
In 2025, more countries will enact their AI laws. I suspect many will try to save time, money, and effort by taking a copy-paste approach, borrowing heavily from the EU AI Act's provisions, spreading the Brussels effect further.
4️⃣ The AI Governance Career Will Explode
In 2024, with the IAPP's launch of its AIGP certification, we saw the first significant wave of individuals interested in becoming AI governance professionals.
I see this firsthand with the participants of our AI Governance Training, where I meet outstanding professionals from fields such as law, privacy, IT, infosec, product management, engineering, and ethics, all interested in shaping the future of AI. We've had over 1,000 participants so far, and demand for our future cohorts continues to grow.
In 2025, the AI governance career will mature further and expand at a much faster pace. Why? On February 2, 2025, the first provisions of the EU AI Act will take effect. Beyond the EU AI Act, more countries (and states, in the case of the U.S.) will enact their AI laws, as mentioned earlier. With the risk of hefty fines becoming a reality, more companies will establish AI governance teams and begin hiring.
5️⃣ We'll See More Lawsuits Against AI Characters and Companions
In 2023, we saw the first suicide linked to interaction with an AI companion. In 2024, we saw the second, and the victim's mother is now suing Character.AI. Among her claims is that AI characters are not safe.
For more than two years, I've argued that these types of AI systems - AI companions, characters, and equivalent AI-powered applications with a high level of anthropomorphism - are not safe and should be heavily regulated.
They are the equivalent of digital cigarettes, and in a few years, we will be in disbelief as to how we allowed them to be easily available to children and other vulnerable groups.
AI companions and characters remain highly popular and profitable, and companies won't give up on them voluntarily. I foresee that more victims, family members, NGOs, and advocates will sue the companies behind these applications, demanding more safety tools, guardrails, and protective mechanisms. We may start seeing laws outlawing them, too.
-
👉 If you want to learn more about these and other AI governance topics, join my intensive AI Governance Training program starting next week (use the limited-time coupon below).
🥂 New Year Discount – This Week Only!
As we kick off 2025, we’re excited to offer you an exclusive opportunity to upskill and advance your career with a special discount.
Register this week for our AI Governance Training program and receive a 15% discount:
The Europe & Asia-Pacific cohort begins next week, on January 13.
The Americas & Europe cohort starts on February 4.
This comprehensive and up-to-date training explores the legal, ethical, and regulatory aspects of AI governance. Our carefully developed curriculum provides actionable insights to help you stay ahead in this rapidly evolving field, and many organizations offer reimbursement for professional training like this.
This 15% discount is valid until Friday night. Use the coupon code 2025 to register today and invest in your career.
🪦 Has Legal Work Died?
I disagree with claims like the one below. Lawyers will protect their work from AI by introducing "signed by a licensed lawyer" requirements. Other professions will do the same, delaying AI replacement. You can ask Claude's opinion, but you'll still need a lawyer.
👉 Has legal work died? Join the discussion on LinkedIn, X/Twitter, Bluesky, Substack Notes, or in the comment section below.
🎙️ Next Week! AI and Copyright Infringement
AI copyright lawsuits are piling up, and companies are racing to adapt. Is there a way to protect human creativity in the age of AI? You can't miss my first live talk of 2025; here’s why:
I invited Andres Guadamuz, associate professor of Intellectual Property Law at the University of Sussex, editor-in-chief of the Journal of World Intellectual Property, and an internationally acclaimed IP expert, to discuss with me:
- Copyright infringement in the context of AI development and deployment;
- The legal debate unfolding in the EU and U.S.;
- Potential solutions;
- Protecting human creativity in the age of AI.
👉 Hundreds of people have already confirmed; register here.
🎬 Join 24,600+ people who subscribe to my YouTube Channel, watch all my previous live talks, and get notified when I publish new videos.
🦎 The Intersection of Biology & AI Governance
Most people haven't thought about the intersection of biology and AI governance, but this link will be increasingly important.
For example, to protect humans, we must also account for human processing speed, among many other biological factors, and how it differs from machines' speed. In this context, the paper “The Unbearable Slowness of Being: Why do we live at 10 bits/s?,” by Jieyu Zheng and Markus Meister, offers interesting insights:
"How can humans get away with just 10 bits/s? The tautological answer here is that cognition at such a low rate is sufficient for survival. More precisely, our ancestors have chosen an ecological niche where the world is slow enough to make survival possible. In fact, the 10 bits/s are needed only in worst-case situations, and most of the time our environment changes at a much more leisurely pace. This contributes to the common perception among teenagers that “reality is broken," leading them to seek solace in fast-paced video games (see Appendix A). Previous generations are no exceptions – they instead sought out the thrills of high-speed sports like skiing or mountain biking. It appears that the everyday tasks feel unbearably slow for these thrill-seekers, so pushing themselves to the cognitive throughput limit is a rewarding experience all by itself."
"One species that operates at much higher rates is machines. Robots are allowed to play against humans in StarCraft tournaments, using the same sensory and motor interfaces, but only if artificially throttled back to a rate of actions that humans can sustain. It is clear that machines will excel at any task currently performed by humans, simply because their computing power doubles every two years. So the discussion of whether autonomous cars will achieve human level performance in traffic already seems quaint: roads, bridges, and intersections are all designed for creatures that process at 10 bits/s. When the last human driver finally retires, we can update the infrastructure for machines with cognition at kilobits/s. By that point, humans will be advised to stay out of those ecological niches, just as snails should avoid the highways."
➡️ AI governance efforts should consider what it means to be human, including from a biological perspective, and ensure humans will thrive, even when surrounded by AI systems.
➡️ The paper highlights how a seemingly unrelated factor - human speed - might be essential when drafting AI policies and laws. If we want humans to thrive while being slower than machines, these differences should be measured, assessed, and reflected in AI governance tools.
➡️ The flow of human existence is naturally slow, and I hope that AI advancements will encourage us to focus on human connections, deeper meaning, and what truly matters to each of us.
🔥 Job Opportunities in AI Governance
Below are 10 new AI Governance positions posted in the last few days. This is a competitive field: if it's a relevant opportunity, apply today:
🇨🇦 Dropbox: AI Governance Program Manager - apply
🇪🇸 BASF: Data & AI Governance Facilitation - apply
🇺🇸 Sony: Senior Director, AI Governance - apply
🇬🇧 Abcam: Data and AI Governance Lead - apply
🇸🇰 PwC Slovakia: AI Governance Manager - apply
🇬🇧 Billigence: AI Governance Lead - apply
🇨🇦 Autodesk: AI Governance Specialist - apply
🇺🇸 Capco: AI Governance SME - apply
🇨🇦 Mission Lane: Head of AI Governance - apply
🇸🇬 Resaro: Responsible AI Scientist - apply
🔔 More job openings: Join thousands of professionals who subscribe to our AI governance & privacy job board and receive weekly job opportunities.
Thank you for being part of this journey!
If you enjoyed this edition, here's how you can keep the conversation going:
1️⃣ Help spread AI governance awareness
→ Start a discussion on social media about this edition's topic;
→ Share this edition with friends, adding your critical perspective;
→ Share your thoughts in the comment section below 👇
2️⃣ Go beyond
→ If you're not a paid subscriber yet, upgrade your subscription here and start receiving my exclusive weekly analyses on emerging AI governance challenges;
→ Looking for an authentic gift? Surprise them with a paid subscription.
3️⃣ For organizations
→ Organizations promoting AI literacy can purchase 3+ subscriptions here and 3+ seats in our AI Governance Training here at a discount;
→ Organizations promoting AI governance or privacy solutions can sponsor this newsletter and expand their reach. Fill out this form to get started.
See you soon!
Luiza