👋 Hi, Luiza Jarovsky here. Welcome to the 144th edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 37,900+ subscribers in 155+ countries. I hope you enjoy reading it as much as I enjoy writing it!
In this week's AI Governance Professional Edition, I'll discuss the responsible AI approach proposed by Meta, Google, and Microsoft, and its potential legal implications; don't miss it. Paid subscribers will receive it on Friday. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly newsletter editions (this free newsletter + the AI Governance Professional Edition) and stay ahead in the fast-paced field of AI governance.
⏰ Last hours to register! If you are transitioning to AI governance and want to go beyond standard certifications, our 4-week AI Governance Training is for you. Join 1,000+ professionals from 50+ countries who have accelerated their careers through our programs. The 14th cohort starts tomorrow, Wednesday (9am UK / 5pm Singapore); save your spot!
🙏 A special thanks to MineOS for sponsoring this week's free edition of the newsletter. Read their article:
An increase in data privacy regulations is pushing companies toward compliance, but the issue is more complex. The primary goal of any privacy program should be to prevent and minimize privacy harms, with compliance as a byproduct of that mission. See why this framing of data privacy is more important than ever in MineOS' latest article.
🔓 Open-Source AI: Legal Implications
What does open-source mean in AI? Did you know that this definition has direct legal implications? Here's what everyone in AI should know:
1️⃣ What is the EU AI Act's definition of open-source? Recitals 102 and 103 set out the Act's approach:
✅ Recital 102:
"Software and data, including models, released under a free and open-source licence that allows them to be openly shared and where users can freely access, use, modify and redistribute them or modified versions thereof, can contribute to research and innovation in the market and can provide significant growth opportunities for the Union economy. General-purpose AI models released under free and open-source licences should be considered to ensure high levels of transparency and openness if their parameters, including the weights, the information on the model architecture, and the information on model usage are made publicly available. The licence should be considered to be free and open-source also when it allows users to run, copy, distribute, study, change and improve software and data, including models under the condition that the original provider of the model is credited, the identical or comparable terms of distribution are respected."
✅ Recital 103:
"Free and open-source AI components cover the software and data, including models and general-purpose AI models, tools, services or processes of an AI system. Free and open-source AI components can be provided through different channels, including their development on open repositories. For the purposes of this Regulation, AI components that are provided against a price or otherwise monetised, including through the provision of technical support or other services, including through a software platform, related to the AI component, or the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software, with the exception of transactions between microenterprises, should not benefit from the exceptions provided to free and open-source AI components. The fact of making AI components available through open repositories should not, in itself, constitute a monetisation."
2️⃣ What are the legal implications of being classified as open-source under the EU AI Act?
✅ According to Article 2(12), open-source AI systems that are not high-risk, not prohibited, and not covered by Article 50 are outside the scope of the AI Act:
"Article 2 (12): This Regulation does not apply to AI systems released under free and open-source licences, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls underĀ Article 5Ā orĀ 50."
✅ According to Article 53, open-source general-purpose AI models that are not classified as posing systemic risk are exempt from some of the obligations imposed on providers of general-purpose AI models:
"53(2) The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. This exception shall not apply to general-purpose AI models with systemic risks."
3️⃣ The U.S. FTC recently released an article on open-source vs. open-weights foundation models, explaining why it prefers the latter term. This might also have direct legal implications in the U.S.:
"While OSS [open-source software] is well-defined, there is still an active dialogue around what 'open' and 'open-source' should mean in the emerging context of AI models, and it is important to understand the range of definitions when assessing the potential impacts. A complete definition could include a variety of attributes: For example, it could require some set of components of a model, such as the training data, software systems used to train the model, or the model's weights (the data that results from training and allows a model to generate content based on a prompt) to be made openly available. It could also require that model components be licensed with terms that allow for broad use and reuse. Or it could require freely available publications on the design and advances of the model."
4️⃣ The Open Source Initiative has recently released its definition of open-source AI, which might also affect how professionals use this terminology in practice.
"An Open Source AI is an AI system made available under terms and in a way that grant the freedoms to:
Use the system for any purpose and without having to ask for permission.
Study how the system works and inspect its components.
Modify the system for any purpose, including to change its output.
Share the system for others to use with or without modifications, for any purpose.
These freedoms apply both to a fully functional system and to discrete elements of a system. A precondition to exercising these freedoms is to have access to the preferred form to make modifications to the system."
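Read as a checklist, OSI's definition is conjunctive: a system qualifies only if all four freedoms hold, plus the precondition of access to the preferred form for modification. Below is a minimal sketch of that reading in Python; the field names are my own, and this is an illustration of the definition's structure, not an official OSI evaluation tool.

```python
from dataclasses import dataclass, fields

@dataclass
class OSIFreedoms:
    """The four freedoms of the OSI Open Source AI Definition (field names are mine)."""
    use: bool             # use for any purpose without asking permission
    study: bool           # study how the system works, inspect its components
    modify: bool          # modify for any purpose, including changing its output
    share: bool           # share with or without modifications, for any purpose
    preferred_form: bool  # precondition: access to the preferred form for modification

def is_open_source_ai(f: OSIFreedoms) -> bool:
    """Every freedom (and the precondition) must hold for the system to qualify."""
    return all(getattr(f, field.name) for field in fields(f))

# A model with public weights but a non-commercial licence fails the 'use' freedom:
print(is_open_source_ai(OSIFreedoms(use=False, study=True, modify=True,
                                    share=True, preferred_form=True)))  # False
```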
➤ As AI development evolves, we will keep encountering legal challenges and uncertainties, especially in fields such as copyright law and liability law, and when open-source general-purpose AI models are integrated into downstream AI applications.
➤ To dive deeper into AI governance and regulation, join our 4-week AI Governance Training. The next cohort starts tomorrow; learn more and register here.
💡 Unpopular Opinion
"We are committed to using AI responsibly" is the new "We value your privacy": without concrete action, it means nothing. If you don't say precisely what you're doing, when you're doing it, and how you're doing it, it's just an empty attempt to gain customer trust. People are watching.
👉 What are your thoughts? Join the discussion on LinkedIn and Twitter/X.
🔧 Novice AI Risk Mitigation Tactics
Maybe senior professionals know better? For all those pushing AI adoption at any cost, the paper "Don't Expect Juniors to Teach Senior Professionals to Use Generative AI: Emerging Technology Risks and Novice AI Risk Mitigation Tactics" tackles AI hype in an unexpected way:
"We show that juniors, rather than being a source of expertise for senior professionals in the effective use of emerging technologies, may instead recommend three kinds of novice AI risk mitigation tactics for addressing risks to valued outcomes that:
✅ are grounded in a lack of deep understanding of the emerging technology's capabilities
✅ focus on change to human routines rather than system design
✅ focus on interventions at the project-level rather than system deployer- or ecosystem-level.
➡️ Juniors may recommend novice AI risk mitigation tactics because juniors themselves may not be technical experts, and because when technology is nascent and exponentially changing, juniors may have had no formal training on how to use the technology, no experience with using it in the work setting, and little experience with using it outside of the work setting.
➡️ Because emerging technologies have uncertain and wide-ranging capabilities that are changing at an exponential rate, juniors may not be fully informed about their capabilities.
➡️ Because emerging technologies have the potential for outperforming humans in a wide variety of skilled and cognitive tasks, juniors' focus on change to human routines may be less effective in mitigating risks than would be a focus on changes to system design.
➡️ And, because emerging technologies depend on a vast, varied, and high volume of data and other inputs from a broad ecosystem of actors, juniors' focus on interventions at the project level may be less likely to be effective than interventions at the system deployer and ecosystem-level."
✅ The paper was written by Kate Kellogg, Hila Lifshitz Assaf, Steven Randazzo, Ethan Mollick, Fabrizio Dell'Acqua, Edward McFowland III, François Candelon, and Karim Lakhani.
🎯 Advance Your Career
⏵ Join our 4-week AI Governance Training: a live, online, and interactive program designed for professionals who want to accelerate their AI governance career and go beyond standard certifications. Here's what to expect:
⏵ The training offers 8 live online lessons with me (90 minutes each) over the course of 4 weeks, totaling 12 hours of live sessions. You'll also receive additional learning material, quizzes, a training certificate, and 16 CPE credits pre-approved by the IAPP. You can always send me your questions or book an office-hours appointment with me. Groups are small, so it's an excellent opportunity to learn with peers and network.
⏵ This is a comprehensive and up-to-date AI governance training focused on AI ethics, compliance, and regulation, covering the latest developments in the field. The program consists of two modules:
⏳ Module 1: Legal and ethical implications of AI, risks & harms, recent AI lawsuits, the intersection of AI and privacy, deepfakes, intellectual property, liability, competition, regulation, and more.
⏳ Module 2: Learn the EU AI Act in depth, understand its strengths and weaknesses, and get ready for policy, compliance, and regulatory challenges in AI.
➡️ We offer discounted rates for students, NGO members, and those in career transition: get in touch.
➡️ Over 1,000 professionals from 50+ countries have already benefited from our programs. Are you ready?
⏰ Last DAY to register for the 14th cohort! Check out the training details, read testimonials, and save your spot here. I hope to see you there!
*If now isn't the right time for you, you can sign up for our learning center to receive updates on future training programs along with educational and professional resources.
🕹️ The Futility of Privacy as Control
The paper "Kafka in the Age of AI and the Futility of Privacy as Control" by Daniel Solove and Woodrow Hartzog, two of the world's most respected privacy experts, is an excellent read for everyone in privacy & AI. Important info:
➤ Having researched privacy for both my Master's and my Ph.D. thesis, I can say that this article is an excellent introduction for those who want to understand how privacy law has evolved over the last decades and some of the big ideas that have influenced it, such as the "Individual Control Model" (or "privacy as control") and its main shortcomings.
➤ Building on Kafka's work and metaphors, they explain:
"we argue that the control privacy law gives to people is often turned against them, and that people readily surrender any control they might be given. People eagerly embrace the technologies that hurt them and make choices to their detriment. Although the law should certainly stop organizations from exploiting and manipulating people, merely curtailing these practices isnāt enough."
➤ The alternative highlighted in this paper is the "Societal Structure Model." According to the authors:
"This view begins with the recognition that privacy is not purely (or even primarily) an individual interest; instead, privacy should be protected for the purpose of promoting societal values such as democracy, freedom, creativity, health, and intellectual and emotional flourishing."
➤ When commenting on AI regulation and the intersection of privacy & AI, they state:
"New AI regulation is an important step forward, but existing privacy law must also be reworked to focus more on the Societal Structure Model. AI overlaps with privacy significantly, but there are still many AI issues that donāt involve privacy, and vice versa. There are a myriad of instances of data collection, use, and disclosure beyond AI where individual control is inadequate as a regulatory response. We thus caution against AI exceptionalism; the Societal Structure Model should be embraced broadly for privacy regulation, whether AI is involved or not."
➤ The article concludes by affirming:
"The Individual Control Model is a dead end. Although many policymakers and commentators know this, they keep returning to it. Itās the classic Kafka plot: people know their quest is doomed and yet persist with it anyway. In Kafkaās world, the mouse doesnāt change direction, and it meets an un- timely demise. Letās hope in our world, policymakers wonāt keep making the same mistake."
🎙️ Taming Silicon Valley and Governing AI
If you are interested in AI, particularly in how we can ensure it works for us, you can't miss my live conversation with Gary Marcus [register here]:
⏵ Marcus is one of the most prominent voices in AI today. He is a scientist, best-selling author, and serial entrepreneur known for anticipating many of AI's current limitations, sometimes decades in advance.
⏵ In this live talk, we'll discuss his new book "Taming Silicon Valley: How We Can Ensure That AI Works for Us," focusing on Generative AI's most imminent threats, as well as Marcus' thoughts on what we should insist on, especially from the perspective of AI policy and regulation. We'll also talk about the EU AI Act, U.S. regulatory efforts, and the false choice, often promoted by Silicon Valley, between AI regulation and innovation.
⏵ This will be the 20th edition of my AI Governance Live Talks, and I invite you to attend live, participate in the chat, and learn from one of the most respected voices in AI today. Don't miss it!
👉 To join the live session, register here. I hope to see you there!
🎬 Find all my previous live conversations with privacy and AI governance experts on my YouTube Channel.
❓ Who Values Responsible AI?
Reminder: stating that a company values safe and responsible AI is not enough. There must be a concrete implementation strategy. Join the discussion on LinkedIn or X/Twitter.
📚 AI Book Club: What Are You Reading?
📚 More than 1,700 people have joined our AI Book Club and receive our bi-weekly book recommendations.
📚 The last book we recommended was "Digital Empires: The Global Battle to Regulate Technology" by Anu Bradford.
📚 Ready to discover your next favorite read? See our previous reads and join the book club here.
🖼️ Copyright, AI Training, and LLMs
If you're interested in AI copyright, the paper "The Heart of the Matter: Copyright, AI Training, and LLMs" by Daniel Gervais, Noam Shemtov, Haralambos Marmanis, and Catherine Zaller Rowland is a must-read. Here's why:
➤ Since the Generative AI boom started in November 2022 with the launch of ChatGPT, AI copyright lawsuits have been piling up, and various academic papers covering LLM-related copyright infringement have been published, discussing, e.g., whether "fair use" can apply or who is covered by Article 3 of the EU copyright directive (I have been covering both the lawsuits and the articles in this newsletter; check the archive).
➤ We are almost two years into the Generative AI wave, and we still do not have a definitive legal answer to the AI copyright conundrum. As the public debate proceeds (and the pressure from AI hype increases), I have recently read arguments from the fields of copyright law and data protection law stating that, because there is no "copy" (copyright) or "storage" (data protection) in the traditional sense of these legal terms, the law would not cover the backstage processing that happens during LLM training and fine-tuning. According to these approaches, which support some version of "Generative AI exceptionalism," copyright and data protection infringement claims would not make sense in the context of LLMs because "it's different."
➤ This paper covers the copyright side of the debate and refutes Generative AI exceptionalism. It brings the discussion back to solid legal ground and is especially good at breaking down the technical side of how LLMs work, how they infringe copyright law during input and output, and when they remove rights management information. If you're interested in AI and copyright, you can't miss it.
➤ I'll finish with a quote from the paper:
"Now the most profound technological change in history is upon us. A technology that can produce commercially competitive content that is likely to displace some human-created works. It can do this because it has absorbed the works of human authors. The stakes could not be higher."
🔥 Job Opportunities in AI Governance
Below are 10 new AI Governance positions posted in the last few days. This is a competitive field: if you find a relevant opportunity, apply today:
🇩🇰 Novo Nordisk: Head of AI Governance and Processes - apply
🇸🇰 PwC: AI Governance Manager - apply
🇮🇪 Analog Devices: Senior Manager, AI Governance - apply
🇩🇪 TRUSTEQ GmbH: AI Governance Associate Consultant - apply
🇬🇧 Deliveroo: AI Governance Lead - apply
🇺🇸 Sutter Health: Director, Data and AI Governance - apply
🇪🇸 Zurich Insurance: AI Governance Architect, Technical Lead - apply
🇮🇳 Glean: Product Manager, AI Governance - apply
🇨🇦 Interior Health Authority: Specialist, Data & AI Governance - apply
🇵🇹 Siemens Energy: AI Governance Consultant - apply
👉 More job openings: subscribe to our AI governance & privacy job boards and receive our weekly email with job opportunities. Good luck!
🙏 Thank you for reading!
If you have comments on this edition, write to me, and I'll get back to you soon.
AI is more than just hype: it must be properly governed. If you found this edition valuable, consider sharing it with friends and colleagues to help spread awareness about AI policy, compliance, and regulation. Thank you!
Have a great day.
All the best, Luiza