👋 Hi, Luiza Jarovsky here. Welcome to the 138th edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 36,500+ subscribers in 150+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI Governance Professional Edition, I’ll discuss some of the EU AI Act's provisions dealing with AI bias and the potential weaknesses of its approach. Paid subscribers will receive it tomorrow. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly newsletter editions (this free newsletter + the AI Governance Professional Edition) and stay ahead in the fast-paced field of AI governance.
🏛️ Registration for the November cohorts is open! If you are transitioning to AI governance and are ready to invest 5 hours a week in learning & upskilling, our 4-week live online AI Governance Training is for you. Join 1,000+ professionals from 50+ countries who have accelerated their careers through our programs. Save your spot!
👉 A special thanks to Usercentrics for sponsoring this week's free edition of the newsletter. Check out their tool:
That neighbor who knows everything that happens on the block? The one who peeks where they aren't invited? You don't want your website to act like that, especially with privacy regulations like GDPR and CCPA in place. Audit your website for privacy compliance risk today to make sure you've obtained your customers' consent.
👁️ AI & Biometrics
According to the U.S. National Institute of Standards and Technology (NIST) publication “An Introduction to Information Security,” biometrics can be defined as:
“A measurable physical characteristic or personal behavioral trait used to recognize the identity, or verify the claimed identity, of an applicant. Facial images, fingerprints, and iris scan samples are all examples of biometrics.”
Various AI-powered technologies rely on biometrics, including facial recognition systems, fingerprint scanning, neurotechnology devices that map brain waves, and many others. Biometric AI systems are often used in the context of access control and identity verification, for example, when you scan your fingerprint to enter a secured area or use facial recognition to unlock your phone. Additionally, as noted in the Access Now report below (item 10), some systems are designed to "register and purportedly identify affect or emotions and moods."
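To make the verification mechanism concrete, here is a minimal sketch of the comparison step most biometric verification systems share: an enrolled template and a live sample are converted into feature vectors (embeddings) and matched against a similarity threshold. The embedding dimension, the threshold value, and the random vectors standing in for real embeddings are illustrative assumptions, not taken from any specific system or from the resources listed below.

```python
import numpy as np

# Illustrative value; real systems tune this against false-accept/false-reject rates.
SIMILARITY_THRESHOLD = 0.85

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two biometric embeddings (e.g., face or fingerprint feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_identity(enrolled_template: np.ndarray, live_sample: np.ndarray) -> bool:
    """1:1 verification: does the live sample match the claimed identity's template?

    Both inputs are embeddings produced by some hypothetical feature
    extractor (not shown); the threshold is an assumption for illustration.
    """
    return cosine_similarity(enrolled_template, live_sample) >= SIMILARITY_THRESHOLD

# Example: a stored template vs. a new scan (random vectors stand in for real embeddings).
rng = np.random.default_rng(0)
template = rng.normal(size=128)
scan = template + rng.normal(scale=0.1, size=128)  # a noisy re-capture of the same person
print(verify_identity(template, scan))  # True: similarity stays high despite sensor noise
```

This is the verification ("am I who I claim to be?") half of the NIST definition; identification ("who is this?") instead compares one sample against many enrolled templates, which raises distinct risks.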
Given the sensitivity of biometric data and the often intrusive way it is collected, biometric AI systems can pose an increased risk to fundamental rights from both data protection and constitutional perspectives, and they deserve special attention.
Here are 10 great resources published in recent months to help you learn more about the topic. Download, read, and share:
1️⃣ "Biometric data: Misuse, use, and collation" (2024), by the UK Parliament.
🔎 Read it here.
2️⃣ "Regulating Algorithmic Harms" (2024), by Sylvia Lu.
🔎 Read it here.
3️⃣ "Biomanipulation" (2024), by Laura Donohue.
🔎 Read it here.
4️⃣ "Automated Decision-making and Artificial Intelligence at European Borders and Their Risks for Human Rights" (2024), by Yiran Yang, Frederik Borgesius, Pascal Beckers & Evelien Brouwer.
🔎 Read it here.
5️⃣ "Safeguarding Brain Data: Assessing the Privacy Practices of Consumer Neurotechnology Companies" (2024), by Jared Genser, Stephen Damianos & Rafael Yuste.
🔎 Read it here.
6️⃣ "Working Paper on Facial Recognition Technology" (2024), by the International Working Group on Data Protection in Technology.
🔎 Read it here.
7️⃣ "Facial recognition and the end of human rights as we know them?" (2024), by Daragh Murray.
🔎 Read it here.
8️⃣ "The protection of mental privacy in the area of neuroscience" (2024), by the European Parliament.
🔎 Read it here.
9️⃣ "From Global Standards to Local Safeguards: The AI Act, Biometrics, and Fundamental Rights" (2024), by Federica Paolucci.
🔎 Read it here.
🔟 "Bodily Harms: Mapping the Risks of Emerging Biometric Tech" (2023), by Access Now.
🔎 Read it here.
📑 AI Research: Liability for AI-generated outputs
The paper "Infringing AI: Liability for AI-generated outputs under international, EU, and UK copyright law" by Eleonora Rosati is an excellent read on the intersection of AI liability and AI copyright. Quotes:
"It is clear by now that AI-generated outputs may raise questions of actionable reproduction under copyright and related rights. The user inputting the prompt resulting in prima facie infringing output (and subsequently using that output) might be regarded as the one directly undertaking restricted acts. Nevertheless, any resulting liability could also encompass parties other than users. Insofar as AI model developers are concerned, as seen above in Part III, any potentially applicable TDM exception under EU and UK law would only cover the extraction and reproduction during the input/training phase, not other acts. As for providers of AI models, their liability could potentially be established not only on a secondary/indirect/accessory basis, but also on a primary/direct basis, consistent with case law in the UK and EU." (page 15)
"Having considered the above, another issue that arises is the enforceability (and effectiveness) of terms of service used by AI model providers to exclude their liability for infringements committed by users. While the answer will obviously depend on the specific circumstances at hand, in YouTube, C-682/18 and C-683/18, the CJEU referred to the terms of service of a platform operator, and found that – even if the terms require (i) users to respect third-party rights and (ii) that the user holds all the necessary rights, agreements, consents and licences for the content that they upload – the existence of such terms alone could not be enough to exclude the operator’s own liability.” (page 16)
"In conclusion: while it is clear that each case will need to be decided on its own merits and that, therefore one should refrain from making sweeping conclusions, the discussion undertaken here shows how the generative AI output phase raises several questions of liability under copyright law. If the goal of policymakers and relevant stakeholders is to ensure the balanced and sustainable development of AI, including in the context of the seemingly revived proposal for an AI Liability Directive, then the issues related to the generation and dissemination of AI outputs need to be given ample attention and a greater role in the debate than what has been the case so far, whether it is in the context of risk assessment and compliance, licensing initiatives, or in contentious scenarios." (page 21)
🏛️ Advance Your AI Governance Career
➵ Join our 4-week AI Governance Training: a live, online, and interactive program led & designed by me for professionals who want to accelerate their AI governance careers and are ready to dedicate at least 5 hours a week to the program (live sessions + self-learning). Here's what to expect:
➵ The training includes 8 live online lessons with me (90 minutes each) over the course of 4 weeks, totaling 12 hours of live sessions. You'll also receive additional learning material, quizzes, 16 CPE credits pre-approved by the IAPP, and a training certificate. You can always send me your questions or book an office-hours appointment with me. Groups are small, so it's also an excellent opportunity to learn with peers and network.
➵ This is a comprehensive and up-to-date AI governance training focused on AI ethics, compliance, and regulation and covering the latest developments in the field. The program consists of two modules:
Module 1: Legal and ethical implications of AI, risks & harms, recent AI lawsuits, the intersection of AI and privacy, deepfakes, intellectual property, liability, competition, regulation, and more.
Module 2: Learn the EU AI Act in-depth, understand its strengths and weaknesses, and get ready for policy, compliance, and regulatory challenges in AI.
➡️ 1,000+ professionals from 50+ countries have already benefited from our programs, and registration for the November cohorts is open. Are you ready?
🏛️ Check out training details, read testimonials, and save your spot here.
We hope to see you there!
🔎 AI Research: Copyright Exceptions for AI Training
As AI copyright lawsuits continue piling up, the paper "The Globalization of Copyright Exceptions for AI Training" by Matthew Sag & Peter K. Yu is a must-read to understand the ongoing debate. Quotes:
"Although the copyright implications of machine learning seemed indistinguishable from TDM prior to generative AI, there are differences between machine learning and generative AI worth noting. First, generative AI does not simply analyze training data to derive useful information; it can produce digital artifacts in the same form as its training data. Even if the outputs do not approach substantial similarity—the threshold for copyright infringement—they may nonetheless compete directly with the works used to train AI models or with the copyright holders of those works. In the United States, this prospect of indirect substitution could complicate the fair use analysis with respect to the fourth factor, “the effect of the use upon the potential market for or value of the copyrighted work. (...)" (page 10)
"The generative AI plaintiffs’ most promising paths are fact specific. For instance, the consolidated cases in Tremblay v. OpenAI present a plausible argument under the fourth fair use factor that commercial AI developers undermine the basic incentive structure of copyright by training on sites of known infringement and thus bypassing the market for access without a compelling justification. The Tremblay plaintiffs alleged that OpenAI and other developers had obtained access to over 100,000 books and other works through shadow libraries, such as Library Genesis, Z-Library, Sci-Hub, and Bibliotik. This relatively novel argument is bolstered by its resonance with copyright laws in other jurisdictions. In the European Union, for example, “lawful access” to the relevant copyrighted works is an essential condition under the TDM exceptions in the DSM Directive.(...)" (page 37)
"Because AI technology will continue to evolve in the near future, sparking further legal, regulatory, technological, and business developments, it remains to be seen whether and how this equilibrium will be maintained. Regardless of the outcome, scrutinizing international copyright law developments in the area of AI training will deepen our understanding of how to better harness the copyright system to advance AI, including generative AI technology. Because some copyright legislation will provide greater affordances for machine learning and AI training than others, policymakers and commentators should pay greater attention to these relative strengths and weaknesses. Like the design of AI models and training processes, the design of the copyright system can play a very important role in the age of generative AI." (page 47)
📋 AI Report: Governance in the Age of Generative AI
The report "Governance in the Age of Generative AI" by the World Economic Forum is an excellent read for everyone in AI governance looking for actionable insights. Important information:
"Successful implementation of national strategies for responsible and trustworthy governance of generative AI requires a timely assessment of existing regulatory capacity – among other governance tools – to tackle the unique opportunities and risks posed by the technology. This includes examination of the adequacy of existing legal instruments, laws and regulations, resolution of regulatory tensions and gaps, clarification of responsibility allocation among generative AI supply chain actors and evaluation of competent regulatory authorities’ effectiveness and capacities. Such assessments must respect the fundamental rights and freedoms already codified in international human rights law, such as the protection of particular groups (e.g. minority rights and children’s rights) as well as legal instruments that are domainspecific (e.g. to cybercrime and climate change)." (page 6)
"Governments are carefully considering how to avoid over- and under-regulation to cultivate a thriving and responsible AI network, where AI developed for economic purposes includes robust risk management, and AI research and development (R&D) is harnessed to address critical social and environmental challenges. Since market-driven objectives may not always align with public interest outcomes, governments can encourage robust and sustained responsible AI practices through a combination of financial mechanisms and resources, clarified policies and regulations, and interventions tailored to industry complexity." (page 12)
"Governmental structures can adopt the dynamics of tech companies to become more agile through: 1) a risk-based approach, 2) regular review of technology and marketplace challenges, 3) agile response to challenges, and 4) review of response effects and adaptation. Still, agile governance should not come at the expense of oversight or separation of powers, nor without regard to human rights and rights-based frameworks that ensure that generative AI development and deployment align with societal values and norms. Governments should avoid adopting a “move fast and break things” form of hyper-agility that has been criticized for prioritizing go-to-market testing over mitigation of harmful consequences." (page 25)
🚩 AI Safety: Guide to Red Teaming Methodology
The Japan AI Safety Institute published its "Guide to Red Teaming Methodology on AI Safety," and it's an excellent read. Important information before downloading:
"(...) the "Guide to Red Teaming Methodology on AI Safety" is intended to help developers and providers of AI systems to evaluate the basic considerations of red teaming methodologies for AI systems from the viewpoint of attackers (those who intend to abuse or destroy AI systems). This document was prepared based on domestic and international studies and precedents, takes into account international alignment taken into consideration, summarizes the issues that are considered important when conducting red teaming." (page 5)
"AI systems, particularly LLM systems, are rapidly scaling up, with their functionalities becoming increasingly advanced and diverse at an accelerated pace. Consequently, attack methods are also becoming more sophisticated and diversified. To provide and operate AI systems safely and securely, it is important to keep abreast of the latest attack methods and technological trends. In addition, it is difficult to sufficiently confirm the adequacy of countermeasures for AI systems only by standard evaluation tools. Therefore, red teaming should be conducted based on the actual system configuration and risks associated with the usage in the environment, when the risks are assumed to be high." (page 12)
"After completion of red teaming, it is desirable to confirm the progress of improvement measures implemented based on the improvement plan should be checked at management meetings as appropriate. After implementing improvements measures, it is advisable to check the configuration status of measures, review documents, or conduct red teaming again if necessary, to confirm that the vulnerability has been properly addressed and the risk has been mitigated. As mentioned above (Section 5.2), red teaming should not be conducted once before release/beginning of operations and then completed. It is desirable to conduct it periodically or as needed after the start of operations as an effective means of ongoing validation. The report on red teaming should be handled with great care to avoid inviting attackers to launch new attacks." (page 67)
🔎 AI Research: AI Nationalism
If you want to understand some of the emerging macro-trends in AI, you can't miss the paper "The Age of AI Nationalism and Its Effects" by Susan Aaronson. Important information:
➡️ The main topic of the paper is AI nationalism, and the author describes some of its unintended consequences. For example, regarding competition-related implications:
"AI nationalism may further encourage monopolistic markets. According to the US Federal Trade Commission, which, along with the DoJ, regulates competition, only some 20 firms possess the cloud infrastructure, computing power, access to capital and vast troves of data to develop and deploy tools to create LLMs. These firms are also concentrated in a few advanced developed countries — in Asia, Europe and North America. As a result, a few companies with expertise in AI could hold outsized influence over a significant swath of economic activity. Perhaps most importantly, these firms hold considerable political as well as economic clout globally and they often lobby against regulation. At times, they act as de facto private regulators, particularly in technologies such as AI, whereas policy makers are just learning how to govern in these emerging fields. Prowess begets economies of scale and scope, which, in turn, begets ever more digital prowess."
"According to the UK Competition and Markets Authority, these monopolistic markets cause three problems:
➵ firms controlling critical inputs for developing various AI models may restrict access to these models to shield themselves from competition;
➵ powerful incumbents could exploit their positions in consumer- or business-facing markets to distort choices and restrict competition in deployment;
➵ partnerships among key players could exacerbate existing positions of market power through the value chain."
➡️ In the conclusion, the author states:
"Around the world, policy makers see AI as essential to economic growth and progress. AI is, at bottom, a global product — built over time on large troves of the world’s data and knowledge. Yet some officials in some countries are limiting access to the building blocks of AI — whether funds, data or high-speed computing power — to slow down or limit the AI prowess of their competitors in country Y and/or Z. Meanwhile, some officials are
also shaping regulations in ways that benefit local AI competitors and, in so doing, they may also impede the competitiveness of other nations’ AI developers. These steps, over time, could reduce the potential of AI and data. Moreover, as the author has shown, sovereign AI policies could backfire, alienating potential allies and further dividing the world into AI haves and have not." (page 23)
🎙️ Tomorrow: AI Regulation Around the World
If you are interested in the current state of AI regulation—beyond just the EU and the U.S.—you can't miss my conversation with Raymond Sun tomorrow: register here.
➵ Among other topics, we'll discuss the latest AI regulation developments in:
🇦🇺 Australia
🇨🇳 China
🇪🇬 Egypt
🇮🇳 India
🇯🇵 Japan
🇲🇽 Mexico
🇳🇬 Nigeria
🇸🇬 Singapore
🇹🇷 Turkey
🇦🇪 United Arab Emirates
and more.
➵ There can be no better person to discuss this topic than Raymond Sun, a lawyer, developer, and the creator of the Global AI Regulation Tracker, which includes an interactive world map that tracks AI regulation and policy developments around the world.
➵ This will be the 19th edition of my Live Talks with global privacy & AI experts, and I hope you can join this edition live, participate in the chat, and stay up to date with AI regulatory approaches worldwide.
👉 To participate, register here.
🎬 Find all my previous Live Talks on my YouTube Channel.
📚 AI Book Club: What Are You Reading?
📖 More than 1,500 people have joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The last book we recommended was “Taming Silicon Valley: How We Can Ensure That AI Works for Us” by Gary Marcus.
📖 Ready to discover your next favorite read? See the book list and join the book club here.
🔥 Job Opportunities: AI Governance is HIRING
Below are 8 new AI Governance positions posted in the last few days. Bookmark, share & apply early:
1. Mastercard 🇹🇷 Manager, AI Governance: apply
2. Deliveroo 🇬🇧 AI Governance Lead: apply
3. Royal London 🇬🇧 AI Governance Lead: apply
4. EcoVadis 🇪🇸 AI Governance Expert: apply
5. Changing Social 🇬🇧 Head of Cloud Strategy and AI Governance: apply
6. Microsoft 🇧🇪 Regulatory Counsel, Responsible AI: apply
7. Siemens Energy 🇵🇹 AI Governance Consultant: apply
8. Red Hat 🇬🇧 Senior MLOps Engineer, Responsible AI: apply
🔔 For more AI governance and privacy job opportunities, subscribe to our weekly job alerts. Good luck!
🙏 Thank you for reading!
If you have comments on this edition, write to me, and I'll get back to you soon.
AI is not only hype: it must be properly governed. If you found this edition valuable, consider sharing it with friends & colleagues, and help me spread awareness about AI policy, compliance & regulation. Thank you!
Have a great day.
All the best, Luiza