👋 Hi, Luiza Jarovsky here. Welcome to the 132nd edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 35,200+ subscribers in 150+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI Governance Professional Edition, I explore the EU AI Act's market surveillance system and its enforcement mechanisms. Paid subscribers will receive it tomorrow. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly newsletter editions (this free newsletter + the AI Governance Professional Edition) and stay ahead in the fast-paced field of AI governance.
⏰ The October cohorts start in 2 weeks! Are you transitioning to AI Governance? Join our 4-week AI Governance Bootcamps. 900+ professionals from 50+ countries have already benefited from our programs. 👉 Reserve your spot now and get 20% off with our AI Governance Package.
🌧️ AI Regulation Is Not Uncertain
Last week, Meta, Spotify, and others signed the open letter "Europe needs regulatory certainty on AI."
Below is my open answer, with information the letter doesn't mention. If you are interested in AI regulation and the intersection of privacy & AI, don't miss it.
The open letter's title invokes “regulatory certainty,” but if you pay attention to its content, this is the paragraph where you can read between the lines:
“But in recent times, regulatory decision making has become fragmented and unpredictable, while interventions by the European Data Protection Authorities have created huge uncertainty about what kinds of data can be used to train AI models.”
EU regulations, such as the GDPR and the AI Act, are neither fragmented nor unpredictable. Companies are given time to comply, and, as EU regulations, both are directly applicable in all Member States.
What's bothering them (especially Meta) is the lack of a firm position from EU data protection authorities on how to apply the GDPR in the context of AI training.
Let's use the example of the lawful grounds to process data to train AI. To process personal data in the EU (such as to train AI), you must rely on one of the grounds established in Article 6 of the GDPR. Most companies currently rely on legitimate interest. The other feasible alternatives in this commercial context would be contract or consent.
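For reference, the six lawful bases of Article 6(1) GDPR can be sketched as a small enumeration. The article-letter mapping is factual; the inline notes on feasibility simply restate the argument above (they are not legal advice), and the LawfulBasis name is invented for this illustration:

```python
from enum import Enum

class LawfulBasis(Enum):
    CONSENT = "Art. 6(1)(a)"               # feasible: an opt-in from users
    CONTRACT = "Art. 6(1)(b)"              # feasible in some commercial contexts
    LEGAL_OBLIGATION = "Art. 6(1)(c)"      # not a fit for commercial AI training
    VITAL_INTERESTS = "Art. 6(1)(d)"       # not a fit
    PUBLIC_TASK = "Art. 6(1)(e)"           # for public authorities, not companies
    LEGITIMATE_INTERESTS = "Art. 6(1)(f)"  # what most companies currently rely on
```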
However, data protection authorities in the EU disagree on whether it's possible to rely on legitimate interest to train AI. The Dutch Data Protection Authority, for example, said that "scraping is almost always illegal." The European Data Protection Board's (EDPB) ChatGPT Task Force Report offered suggestions on how to comply with the GDPR in this regard. However, it's a preliminary report, not a final stance, and the suggestions are mostly impractical or unfeasible at this point.
It's unclear which additional technical and legal assurances would be necessary to ensure that reliance on legitimate interest is lawful and will not lead to a GDPR fine.
As I discussed live with Max Schrems a few weeks ago, Meta and other companies that want to use personal data to train AI could always ask for people's consent. An opt-in mechanism on Facebook and Instagram would ensure that people are informed about the practice and can choose whether or not to consent (see the sketch below). This would be lawful under the GDPR, and people would appreciate having the opportunity to choose rather than being pulled into the AI system without their consent.
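To make the opt-in idea concrete, here is a minimal, purely illustrative sketch of consent-gated training data selection. Every name in it (UserRecord, build_training_set, the ai_training_consent flag) is hypothetical and invented for this example; the point is only that under opt-in, the default is exclusion until the user affirmatively agrees:

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str
    content: str
    ai_training_consent: bool = False  # opt-in default: excluded until the user says yes

def build_training_set(records: list[UserRecord]) -> list[str]:
    """Return content only from users who explicitly opted in to AI training."""
    return [r.content for r in records if r.ai_training_consent]

users = [
    UserRecord("alice", "post A", ai_training_consent=True),  # explicitly opted in
    UserRecord("bob", "post B"),  # never asked, or declined: stays out by default
]
print(build_training_set(users))  # ['post A']
```

Under the opt-out regime companies prefer, the default flag would flip to True and users would have to act to be excluded; that asymmetry of defaults is precisely what the letter avoids mentioning.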
Additional issues that bother Meta (and the other companies they brought in to sign the letter) are data subjects' rights, data protection principles, and privacy by design. How can companies comply with those in the context of AI training and fine-tuning? There is a lack of a uniform, firm stance from Data Protection Authorities on how to interpret and apply the GDPR in the context of AI training (what is feasible, what is not, and what concrete, workable assurances could look like), but there is also a lack of interest from companies in being more transparent.
Most companies don't want to be fully transparent about the data they use to train AI (remember Mira Murati's interview about the data used to train OpenAI's Sora, where she clearly avoided stating it on record?). Companies are also extremely wary of letting people choose whether they want their data used to train AI (opt-in instead of a cumbersome opt-out mechanism).
The letter is extremely dramatic here:
"This means the next generation of open source AI models, and products, services we build on them, won’t understand or reflect European knowledge, culture or languages. The EU will also miss out on other innovations, like Meta’s AI assistant, which is on track to be the most used AI assistant in the world by the end of this year."
It refers to the recent case between Meta and the Irish Data Protection Commission, in which Meta decided not to use data from EU users to train AI after noyb's complaints. Well, they could have asked for people's consent, but the "yes" rate would probably have been much lower than they wanted, so they didn't even try.
So yes, the EU needs to be more decisive and clear on how exactly companies can comply with the GDPR to train AI. A final document with practical, acceptable measures and guardrails would be a good idea.
However, companies must be much more transparent and ethical. Even this letter is not totally transparent: the specific issue it raises (Meta not using EU data) could have been solved immediately if Meta had asked for people's consent, but they didn't want this option and didn't mention it here.
People feel they are being treated like feedstock to train AI: unconsented, uninformed, and uncompensated. Companies must remember that there are people behind the data. If they are open to that, I'm sure they will find a way to comply with EU laws.
📑 AI Policy: OECD AI Papers
In recent months, the OECD published excellent papers on AI-related topics, available for free online. It's a great opportunity to dive deeper into AI governance. Download, read, and share:
➡️ Measuring the demand for AI skills in the United Kingdom - link
➡️ Regulatory approaches to Artificial Intelligence in finance - link
➡️ The potential impact of AI on equity and inclusion in education - link
➡️ AI, data governance and privacy - link
➡️ Using AI to manage minimum income benefits & unemployment assistance - link
➡️ Governing with Artificial Intelligence - link
➡️ A new dawn for public employment services - link
➡️ Artificial intelligence and the changing demand for skills in Canada - link
➡️ Artificial intelligence, data and competition - link
➡️ Defining AI incidents and related terms - link
➡️ The impact of AI on productivity, distribution & growth - link
🎓 Go beyond: if you are transitioning to AI governance, don't miss our 4-week AI Governance Bootcamps starting in 2 weeks (900+ people have joined our training programs). Learn more here and save your spot.
📄 AI Research: Algorithmic Disgorgement
The paper "The Deletion Remedy," by Daniel Wilf-Townsend, is an excellent read for those interested in learning about AI & algorithmic disgorgement. Make sure to download and read. Quotes:
"The most specific legal doctrine that has been suggested as a justification for the remedy is the doctrine of disgorgement. FTC officials have referred to the tool as “algorithmic disgorgement,” reflecting their framing of the remedy as serving a disgorgement-like function of depriving wrongdoers of the benefit of their misconduct. (...) Disgorgement is a proportionate remedy, requiring a demonstration that whatever is disgorged is causally attributable to the wrongful conduct at issue. But this requirement will not be satisfied by model deletion in easy-to-imagine scenarios—such as where a defendant has trained a model on a large dataset, and the unlawful data at issue is neither a significant portion of the broader dataset nor a distinctly valuable subset of it. (...)."
"But there are reasons that the law often tries to achieve proportionality in its remedies, and they apply in the context of model deletion just as in other contexts. Most straightforwardly, when a remedy is too harsh, it may deter too much, chilling productive activity due to fears of a loss that is not proportioned to the harm caused. OpenAI, for instance, has generated billions of dollars of economic activity. But the current logic of model deletion would allow the destruction of its main assets—large language models—if it turns out those models were trained on one unlawfully used blog post of 500 words. In an economy governed by such a regime, it would not make sense to invest in the creation of these models in the first place, even if their existence would be a large net benefit to society."
"The costs and difficulty of administering a remedial regime are a legitimate consideration when deciding which remedy to impose. As a result, there might be some situations where model deletion is the preferable remedy simply because it is easier to administer than other alternatives. But these alternatives may become easier to administer over time, as experience with this technology becomes deeper and more widespread, and as courts see more of these cases. And as the value of machine learning tools continues to grow, it will be more necessary for courts to consider alternatives to deletion if they desire to implement fair and proportionate remedies."
📋 UN Report: "Governing AI for Humanity"
The United Nations published the report "Governing AI for Humanity," and it's a must-read for everyone in AI. Quotes:
"Left ungoverned, however, AI’s opportunities may not manifest or be distributed equitably. Widening digital divides could limit the benefts of AI to a handful of States, companies and individuals. Missed uses – failing to take advantage of and share AI-related benefts because of lack of trust or missing enablers such as capacity gaps and ineffective governance – could limit the opportunity envelope."
"There is no shortage of documents and dialogues focused on AI governance. Hundreds of guides, frameworks and principles have been adopted by governments, companies and consortiums, and regional and international organizations. Yet, none of them can be truly global in reach and comprehensive in coverage. This leads to problems of representation, coordination and implementation."
"Our recommendations advance a holistic vision for a globally networked, agile and fexible approach to governing AI for humanity, encompassing common understanding, common ground and common benefts. Only such an inclusive and comprehensive approach to AI governance can address the multifaceted and evolving challenges and opportunities AI presents on a global scale, promoting international stability and equitable development."
"We remain optimistic about the future with AI and its positive potential. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place. The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action."
📋 Dutch DPA Report: "AI & Algorithmic Risks"
The Dutch Data Protection Authority (DPA) published the "AI & Algorithmic Risks Report," and it's a great read for everyone in AI governance. Below are its 8 key messages:
1️⃣ "The AI risk profile continues to call for vigilance from everyone – from Ministers to citizens and from CEOs to consumers – because (i) it is difficult to assess whether AI applications are sufficiently controlled and (ii) AI incidents can occur more and more frequently, especially as AI is increasingly becoming intertwined into society"
2️⃣ "Many new AI systems and risks (or possible risks) stand out. From experimentation by big tech companies to the widespread use of AI in situations where people are vulnerable"
3️⃣ "Information provision is essential for the functioning of democracy, but is under pressure from the deployment of AI systems. This applies to both moderation and distribution of content and, more recently, to content creation with generative AI"
4️⃣ "Conditions for adequate democratic control of AI systems are currently insufficiently met"
5️⃣ "Random sampling is a valuable tool to reduce risks in profiling and selecting AI systems"
6️⃣ "The entry into force of the AI Act (early August 2024) is a milestone, with concerns about (i) the long transition period (up to 2030) for existing high-risk AI systems within the government and (ii) whether robust and workable product standards will be in place in a timely manner"
7️⃣ "With regard to the further elaboration of the coalition agreement, the AP advises to continue to give priority to algorithm registration by government organisations and to discuss registration by semipublic organisations"
8️⃣ "The AP is committed to increasing the control of AI systems, in which (i) a proliferation of frameworks should be avoided and (ii) a recalibration of the national AI strategy can contribute to the further ecosystem for development and control of AI systems"
🇱🇰 AI Policy: Sri Lanka's National Strategy on AI
Sri Lanka published its National Strategy on AI, and it's an excellent read for those interested in global AI governance efforts. Quotes:
"Our AI strategy is guided by seven core principles: inclusivity and responsibility, trustworthiness and transparency, human-centricity, adoption-focus and impact-orientation, agile and adaptive governance, collaboration and global engagement, and sustainability and future-readiness. These guiding tenets will ensure that AI development aligns with national goals and values while safeguarding citizen rights and welfare."
"To fully realize the transformative potential of AI and its role in achieving the SDGs, it is critical that all Sri Lankans engage with this technology and understand how it will shape their lives in the coming years. To achieve this, we must foster a culture of AI literacy and empowerment, ensuring that our citizens are aware of the benefits, challenges, risks, and implications of this technology. By demystifying AI and promoting public understanding, we can create a platform of trust, where citizens feel confident and empowered to make informed choices on incorporating AI-driven solutions into their daily lives. This will also unlock AI’s potential to drive inclusive growth, improve quality of life, and build a more equitable and prosperous society, in line with the SDGs’ overarching goal of leaving no one behind."
"We recognize that while effective AI governance is essential for mitigating risks and promoting trust in AI systems, governance alone is not sufficient. It must be complemented by a proactive approach to fostering responsible AI development and adoption practices across the AI ecosystem. By providing organizations with practical tools and guidance, facilitating safe experimentation, building capacity through training and education, and recognizing responsible AI leadership, we aim to create a culture of ethical AI development that aligns with our governance framework. This multi-pronged approach will help ensure that AI technologies are not only governed appropriately, but also developed and deployed in a manner that prioritizes transparency, fairness, accountability, and user well-being."
📄 AI Research: Regulatory Capture in AI
The paper "How Do AI Companies 'Fine-Tune' Policy? Examining Regulatory Capture in AI Governance," by Kevin Wei, Carson Ezell, Nick Gabrieli, and Chinmay Deshpande, is a must-read for everyone in AI governance, here's why:
➡️ The authors examine how industry influence in AI policy can result in outcomes that negatively impact the public interest, a phenomenon known as “regulatory capture.” They describe 15 mechanisms that everyone in AI governance should be aware of:
1. Advocacy
2. Procedural obstruction
3. Donations, gifts, and bribes
4. Private threats
5. Revolving door
6. Agenda-setting
7. Information management
8. Information overload
9. Group identity
10. Relationship networks
11. Status
12. Academic capture
13. Private regulator capture
14. Public relations
15. Media capture
➡️ According to the paper:
"These channels of influence operate in heterogeneous ways and can lead to a variety of undesirable outcomes. Researchers should understand the various models of capture and the goals of different actors, and additional work is needed to identify workable solutions to these different models. Although not all industry participation in AI policy is problematic, policymakers must be on guard against both more conspicuous and subtler forms of corporate influence in order to prevent capture."
➡️ This is an extremely important topic, which helps us critically analyze the regulatory process and its outcomes.
➡️ We observed many of these mechanisms at play during the regulatory process that led to the EU AI Act, and we are still observing them now (see my 'open answer' above to Meta's open letter "Europe needs regulatory certainty on AI").
🎙️ Global AI Regulation, with Raymond Sun
If you are interested in the current state of AI regulation—beyond just the EU and U.S.—you can't miss my conversation with Raymond Sun this October: register here. Here's what we'll talk about:
➵ Among other topics, we'll discuss the latest AI regulation developments in:
🇦🇺 Australia
🇨🇳 China
🇪🇬 Egypt
🇮🇳 India
🇯🇵 Japan
🇲🇽 Mexico
🇳🇬 Nigeria
🇸🇬 Singapore
🇹🇷 Turkey
🇦🇪 United Arab Emirates
and more.
➵ There can be no better person to discuss this topic than Raymond Sun: a lawyer, developer, and the creator of the Global AI Regulation Tracker. The Tracker features an interactive world map of AI regulation and policy developments around the world: check it out.
➵ This will be the 19th edition of my Live Talks with global privacy & AI experts, and I hope you can join this edition live, participate in the chat, and stay up to date with AI regulatory approaches worldwide.
👉 To participate, register here.
🎬 Find all my previous Live Talks on my YouTube Channel.
🏛️ Transitioning to AI Governance? Register today
Our 4-week AI Governance Bootcamps are live online training programs, led by me, designed for professionals who want to transition to the AI governance field. 900+ people from 50+ countries have already participated, and the cohorts usually sell out.
🗓️ The October cohorts start in 2 weeks.
🎓 Check out the programs, what's included, and testimonials here.
👉 Save your spot now and enjoy 20% off with our AI Governance Package.
We hope to see you there!
📚 AI Book Club: What Are You Reading?
📖 More than 1,400 people have joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The last book we recommended was "Code Dependent: Living in the Shadow of AI," by Madhumita Murgia.
📖 Ready to discover your next favorite read? See the book list & join the book club here.
🔥 Job Opportunities: AI Governance is HIRING
Below are 10 new AI Governance positions posted in the last few days. Bookmark, share & be an early applicant:
1. RemoteWorker UK 🇬🇧 - AI Governance Lead: apply
2. Virgin Atlantic 🇬🇧 - Manager, Data & AI Governance: apply
3. Slalom 🇺🇸 - AI Governance, Risk & Compliance Leader: apply
4. Sky 🇬🇧 - AI Governance Lead: apply
5. Barclays 🇺🇸 - AI Governance & Oversight VP: apply
6. Swiss Re 🇸🇰 - AI Governance Specialist: apply
7. nbn® Australia 🇦🇺 - AI Governance Expert: apply
8. Boston Quantara 🇺🇸 - Head of AI Governance: apply
9. Dow Jones 🇪🇸 - Analyst, AI Governance: apply
10. Bee Engineering ICT 🇵🇹 - AI Governance Consultant: apply
👉 For more AI governance and privacy job opportunities, subscribe to our weekly job alerts. Good luck!
🚀 Partnerships: Let's Work Together
Love this newsletter? Here’s how we can collaborate:
Become a Sponsor: Does your company offer privacy or AI governance solutions? Sponsor this newsletter (3, 6, or 12-month packages), reach 35,200+ email subscribers, and grow your audience: get in touch.
Upskill Your Privacy Team: Enroll your team in our 4-week AI Governance Bootcamps—three or more participants qualify for a group discount: get in touch.
🙏 Thank you for reading!
If you have comments on this edition, write to me, and I'll get back to you soon.
If you found this edition valuable, consider sharing it with friends & colleagues to help spread awareness about AI policy, compliance & regulation. Thank you!
See you next week.
All the best, Luiza