👋 Hi, Luiza Jarovsky here. Welcome to the 129th edition of this newsletter with the latest developments in AI policy, compliance & regulation, read by 34,400+ subscribers in 145+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI Governance Professional Edition, I discuss AI regulatory sandboxes in the EU AI Act, their potential impact on innovation, and how AI providers might benefit from them. Paid subscribers received it yesterday and can access it here: 📋 AI Act: AI Regulatory Sandboxes. If you're not a paid subscriber yet, choose a paid subscription, receive two newsletter editions per week, and gain access to my exclusive analyses on AI compliance and regulation.
🍂 Don't miss our October cohorts: upskill & advance your AI governance career with our 4-week Bootcamps. 900+ people from 50+ countries have already joined our training programs. 🌏🌍 New: APAC-EMEA schedule! Get 20% off with our AI Governance Package. 👉 Secure your spot here.
👉 A special thanks to Usercentrics for sponsoring this week's free edition of the newsletter. Check out their course:
Should you be aware of Google's Consent Mode V2? Is it really that important? If your marketing team plans to continue using all the capabilities of Google Analytics and Google Ads, then the answer is yes. Watch this 30-minute course from Usercentrics Cookiebot to understand how Consent Mode V2 impacts your company's campaigns and learn the quick steps to set it up correctly. Enroll for free here.
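To give a concrete sense of what the setup involves, here is a minimal sketch of the gtag consent calls that underpin Consent Mode V2. The `onUserAcceptedAll` callback is a hypothetical placeholder; in practice, a consent management platform such as Usercentrics Cookiebot typically issues these calls for you.

```typescript
// Minimal sketch, assuming the standard gtag.js consent API.
declare function gtag(...args: unknown[]): void;

// Set a denied-by-default baseline before any Google tags fire.
// Consent Mode V2 adds ad_user_data and ad_personalization to the
// pre-existing ad_storage / analytics_storage signals.
gtag('consent', 'default', {
  ad_storage: 'denied',
  analytics_storage: 'denied',
  ad_user_data: 'denied',
  ad_personalization: 'denied',
});

// Hypothetical callback: once the user accepts via the consent banner,
// update the signals so Google Analytics and Google Ads regain their
// full capabilities.
function onUserAcceptedAll(): void {
  gtag('consent', 'update', {
    ad_storage: 'granted',
    analytics_storage: 'granted',
    ad_user_data: 'granted',
    ad_personalization: 'granted',
  });
}
```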
💾 The AI Copyright Saga: Top 10 Papers
As AI copyright lawsuits continue to pile up, I have compiled a selection of the top 10 AI copyright papers published in recent months, written by some of the world's most renowned copyright experts.
Below are some of the topics covered, central to the ongoing AI copyright saga:
consent
compensation
the limits of fair use
how the existing copyright legal framework might help
the AI Act's impact
AI liability
legal gaps
scraping copyrighted works to train AI
infringing outcomes
guardrails (against copyright infringement)
exceptions
potential remedies
and more
Opinions vary, and you'll notice the authors below don't agree on the best ethical and lawful way forward. Unsurprisingly, creators and AI developers don't agree either, as the ongoing litigation in the field shows.
Given the current uncertainty and lack of legal clarity, the papers below are an excellent way to expand your knowledge and stay informed about the latest discussions in the field. Download, read, and share:
📄 Title: The Law of AI is the Law of Risky Agents without Intentions
✏️ Authors: Ian Ayres & Jack M. Balkin
🔍 Read it here.
📄 Title: Consent and Compensation: Resolving Generative AI's Copyright Crisis
✏️ Authors: Frank Pasquale & Haochen Sun
🔍 Read it here.
📄 Title: Generative AI, Copyright, and the AI Act
✏️ Author: João Pedro Quintais
🔍 Read it here.
📄 Title: Copyright and the Training of Human Authors and Generative Machines
✏️ Author: Robert Brauneis
🔍 Read it here.
📄 Title: The Files are in the Computer: On Copyright, Memorization, and Generative AI
✏️ Authors: A. Feder Cooper & James Grimmelmann
🔍 Read it here.
📄 Title: A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs
✏️ Author: Andres Guadamuz
🔍 Read it here.
📄 Title: Copyright Safety for Generative AI
✏️ Author: Matthew Sag
🔍 Read it here.
📄 Title: How Generative AI Turns Copyright Upside Down
✏️ Author: Mark Lemley
🔍 Read it here.
📄 Title: Thinking About Possible Remedies in the Generative AI Copyright Cases
✏️ Author: Pamela Samuelson
🔍 Read it here.
📄 Title: The AI-Copyright Trap
✏️ Author: Carys Craig
🔍 Read it here.
🇦🇺 [AI Policy] Safe & Responsible AI
The Australian Government published two documents on safe & responsible AI, both excellent reads for everyone in AI governance. Here's what you need to know about each:
1️⃣ Safe and responsible AI in Australia - Proposals paper for introducing mandatory guardrails for AI in high-risk settings
➡️ In this first document, the Australian Government outlines the options it is considering to mandate guardrails for those developing & using AI in high-risk contexts. The paper has 4 parts:
➵ "The case for regulating guardrails: why we need guardrails that focus on the development and deployment of AI to mitigate risks for the use of AI in high-risk settings.
➵ Defining high-risk AI: a principles-based approach to defining high-risk AI with known or foreseeable uses, and a definition to capture general-purpose AI (GPAI) models.
➵ Guardrails ensuring testing, transparency and accountability for AI: proposed mandatory guardrails, their aims, and how they could apply across the AI supply chain and throughout the AI lifecycle by different actors.
➵ Regulatory mechanisms to mandate guardrails: options to mandate guardrails from the adaptation of Australia’s existing legal frameworks through to enacting new AI-specific legislation or framework legislation."
➡️ Another interesting part of this paper is the section “Why is AI different?,” offering an excellent summary which strengthens the case for regulating AI now. Check it out:
➵ "Autonomy: Services and products embedded in AI technology or stand-alone AI applications are becomingly increasingly autonomous. AI systems can make decisions autonomously and pervasively, without human intervention at any stage of the decision-making process, if designed by organisations to function that way.
➵ General cognitive capabilities: General purpose systems like large language models can exhibit behaviour that, in humans, would require general cognitive capabilities, as opposed to sophisticated but specific capabilities to solve a task. This includes the capacity to transfer learning across domains and apply it to unseen or new tasks. (...)
➵ Adaptability and learning: AI systems can improve their performance over time and adapt by learning from data. As noted above, this differs from simpler software programs, which often follow pre-defined rules and need explicit programming. (...) As AI has become capable of generating data – and even programming code – it has also become a creator of information and technology.
➵ Speed and scale: AI has an unparalleled capacity to analyse massive amounts of data in a highly efficient and scalable way. It also allows for real-time decision-making and distribution of outputs at a scale that surpasses the capabilities of, and diversity of those tasks as previously undertaken by humans.
➵ Opacity or lack of explainability: (...) The most advanced AI models are trained on data that is often too vast and too complex for humans to efficiently process, and which may not have been curated or documented prior to ingestion for training. Techniques used to reason from data are multi-layered and understudied, contributing to a limited understanding of their outputs. Decisions that AI systems make are not always traceable.
➵ High realism: AI’s advancement – and particularly, generative AI – has reached a point where AI can emulate human-like behaviours. This includes creating realistic outputs that make it challenging for end-users to identify when they are interacting with AI or a human (the measure used in the Turing Test), or distinguish between artefacts that are AI-generated rather than human-generated.
➵ Versatility: AI models are a multipurpose technology that can perform tasks beyond those intended by their developers. (...)
➵ Ubiquity: AI, particularly generative AI, has become an increasing part of our everyday lives and continues to be developed and adopted at a significant rate. (...)"
👉 Read the full document here.
2️⃣ Voluntary AI Safety Standard
➡️ The second document published by the Australian Government establishes a standard that complements the broader responsible AI agenda. It consists of 10 guardrails, which are detailed in the document:
➵ "Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
➵ Establish and implement a risk management process to identify and mitigate risks.
➵ Protect AI systems, and implement data governance measures to manage data quality and provenance.
➵ Test AI models and systems to evaluate model performance and monitor the system once deployed.
➵ Enable human control or intervention in an AI system to achieve meaningful human oversight.
➵ Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
➵ Establish processes for people impacted by AI systems to challenge use or outcomes.
➵ Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
➵ Keep and maintain records to allow third parties to assess compliance with guardrails.
➵ Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness."
👉 Read the full document here.
🇧🇷 [AI Compliance] Meta vs. Brazilian DPA
The Brazilian Data Protection Authority decided that Meta can use personal data from Brazilian users to train AI, with restrictions. Here's what you need to know about the decision, as well as my criticism:
➡️ Below is part of Meta's up-to-date "Compliance Plan," as approved by the Brazilian Data Protection Authority, which aims to ensure more transparency and accountability in the context of Meta's processing operations to train AI:
"➵ Not to include, at this time, publicly available personal data from accounts belonging to its users in Brazil, under the age of 18, in the training of its generative AI products;
➵ Sending a notification to all users on the Facebook and Instagram applications in Brazil;
➵ Sending a notification to the email address registered by the user with Facebook and Instagram in Brazil;
➵ Inclusion of a banner in the article “How Meta uses information for generative AI features and models”;
➵ Inclusion of a banner on the homepage of the Privacy Center in Brazil to ensure prominent information on the processing of personal data;
➵ Inclusion of an easy link to the objection form;
➵ Updating the Privacy Notice for Brazil;
➵ Updating the Privacy Policy banner to inform about the update of its content and include an easy link to the objection form;
➵ Simplification of the completion of the objection form to ensure the facilitated exercise of rights;
➵ Improving transparency in the objection form;
➵ Publication in Meta's Press Room to provide even more information on the improvements in transparency and facilitating access to the objection form;
➵ Reduction in the minimum number of characters for the request for the objection form available to any individual, including non-users;
➵ For data subjects who are not users of Meta products: offer a simplified form for exercising opposition;
➵ Meta undertakes to file a petition with ANPD to confirm each commitment made."
➡️ It saddens me that, in 2024, the conditions above were only implemented after enforcement action by the Brazilian Data Protection Authority. They represent basic data protection measures founded on transparency, fairness, user-centered design, and privacy-enhancing design (you can read my previous articles on these topics in the archive).
➡️ These measures should be the minimum assurances made available to all of Meta's users, and they shouldn't require legal intervention to happen. Meta should offer them by default worldwide.
👉 Read the decision from the Brazilian Data Protection Authority here (in Portuguese).
🌱 [AI Research] The Environmental Impacts of AI
The paper "The Environmental Impacts of AI - Primer," by Sasha Luccioni, Bruna Sellin Trevelin & Margaret Mitchell, is a great read for everyone in AI. Quotes:
"It can be hard to understand the extent of AI’s impacts on the environment given the separation between where you interact with an AI system, and how that interaction has come to be – most AI models run on data centers that are physically located far away from their users, who only interact with their outputs. But the reality is that AI’s impressive capabilities come with a substantial cost in terms of natural resources, including energy, water and minerals, and non-negligible quantities of greenhouse gas emissions."
"For a full picture of AI’s environmental impact, we need both consensus on what to consider as part of “AI”, and much more transparency and disclosures from the companies involved in creating it. AI refers to a broad set of techniques, including machine learning, but also rule-based systems. A common point of contention is the scoping of what constitutes AI and what to include when estimating its environmental impacts. Core to this challenge is the fact that AI is often a part of, as opposed to the entirety of, any given system – e.g. smart devices, autonomous vehicles, recommender systems, Web search, etc. How to delineate and quantify the environmental impacts of AI as a field is therefore a topic of much debate, and there is currently no agreed-upon definition of the scope of AI."
"Environmental protection is also stated as being one of the core values put forward by the EU AI Act, and appears several times in its text. As provided in the AI Act, the energy consumption of AI models is at the core of this topic, and is stated as one of the criteria that must be taken into consideration when training and deploying them. The AI Act stipulates that the providers of general-purpose AI models (GPAIs) specifically should share the known or estimated energy consumption of their models. It also provides that high-risk AI systems should report on resource performance, such as consumption of energy and of 'other resources' during the AI systems’ life cycle, which could include water and minerals depending on the level of detail of the standards that will guide compliance to this reporting obligation."
👉 Read the full paper here.
📄 [AI Research] The Limits of Explainability in AI
The paper "Lost in Translation: The Limits of Explainability in AI," by Hofit Wasserman-Rozen, Ran Gilad-Bachrach & Niva Elkin-Koren, is an excellent read for everyone in AI governance. Quotes:
"Yet, it is unclear whether XAI techniques can fill the gap in accountability caused by the shift from human to AI-driven decision-making processes. In particular, would a right to explanation by Al be equivalent to a right to explanation by a human? Could XAI satisfy the right to an explanation as provided by law? This article argues that the right to explanation is, at its core, a mechanism designed to fit a human decision-maker and a tool that assumes human-to-human interaction, making it ill-equipped to offer an adequate solution to the potential harms involved in Al decisions. While regulators hope to rip several benefits from explanations generated by XAI techniques, our analysis shows how the significant gaps between Al decision-making processes and human decisions effectively deteriorate the functionalities of XAI." (394)
"In this context, research has shown XAI's potential to cause human over-reliance on the system as well as the opportunity for wrongdoing and manipulation by promoting misguided trust. The phenomenon of nudging users to act according to others' interests is known as a "dark pattern," which benefits from humans' "automation bias" towards trusting machines. Further research has suggested that user manipulation can even occur unintentionally, causing "explainability pitfalls" merely by choosing to present people with one explanation over another. In that sense, promoting XAI's generation of human-understandable explanations may sometimes do more harm than good, opening the door for manipulation by malicious actors." (page 433)
"As this paper demonstrates, the two correlative processes driving XAI - the regulatory push to produce explanations under a right to explanation on the one hand and the ML community's interest in promoting trust in technology on the other hand - culminated in an inadequate solution. XAI currently fails to fulfill the fundamental objectives of reason-giving in law. It does not contribute to higher-quality decisions, facilitate due process, or acknowledge human autonomy. More disconcertingly, XAI appears to excel in reason-giving's final function, promoting the decision-making systems' authority, thus enhancing the risk of promoting unwarranted trust in automatic decision-making systems." (page 437)
👉 Read the full paper here.
🎓 [Corporate] AI Governance Training
To support your AI training efforts, I would welcome the opportunity to lead a live, online AI Governance Bootcamp for your team. 900+ professionals have already participated in our training programs. 👉 Get in touch here.
📚 [AI Book Club] What Are You Reading?
More than 1,300 people have joined our AI Book Club and receive our bi-weekly book recommendations. Ready to discover your next favorite read?
👉 See the book list and sign up here.
🎧 [AI Governance Podcast] Learn Something New
Did you miss my 1-hour conversation with Barry Scannell on AI governance, compliance & regulation? Watch or listen to the recording here. 👉 Find all my conversations with global AI experts on my YouTube channel.
🔥 [Job Opportunities] AI Governance is HIRING
Below are 12 new AI Governance positions posted in the last few days. Bookmark, share & be an early applicant:
1. 🇳🇱 Nebius AI: Privacy and AI Governance Manager - apply
2. 🇳🇱 Booking.com: Project Manager, Data & AI Governance - apply
3. 🇬🇧 The Weir Group: Head of Data & AI Governance - apply
4. 🇮🇪 Analog Devices: Senior Manager, AI Governance - apply
5. 🇺🇸 Horizontal Talent: AI & Data Governance - apply
6. 🇬🇧 ByteDance: Senior Counsel, AI Governance & Tech Policy - apply
7. 🇺🇸 Lowe's: Director, AI Governance - apply
8. 🇺🇸 SAS: Solution Consultant, AI Governance Advisory - apply
9. 🇺🇸 Blue Cross Blue Shield Association: AI Governance - apply
10. 🇺🇸 M&T Bank: AI Governance Consultant - apply
11. 🇮🇳 EY: Manager, AI Governance (Risk Consulting) - apply
12. 🇨🇿 Collibra: Senior SDET I, AI Governance - apply
👉 For more AI governance and privacy job opportunities, subscribe to our weekly job alert. Good luck!
🍂 [Fall Bootcamps] Secure your spot in October
Our 4-week AI Governance Bootcamps are live online training programs designed for professionals who want to upskill and advance their AI governance careers. 900+ professionals from 50+ countries have already joined them. In October, we're offering two cohorts:
🌎🌍 Americas-EMEA: 10am PT / 1pm ET / 6pm UK time
🌏🌍 [NEW] APAC-EMEA: 9am UK / 4pm Singapore / 7pm Sydney
👉 With the AI Governance Package you get 20% off. Register here.
🙏 Thank you for reading
➵ If you have comments on this edition, write to me, and I'll get back to you soon.
➵ If you enjoyed this edition, consider sharing it with friends & colleagues and help me spread awareness about AI policy, compliance & regulation.
See you next week!
All the best, Luiza