Hi, Luiza Jarovsky here. Welcome to the 152nd edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 41,200+ subscribers in 155+ countries. I hope you enjoy reading it as much as I enjoy writing it.
In this week's AI Governance Professional Edition, I'll explore some of the legal issues behind AI agents. Paid subscribers will receive it on Thursday. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly editions (this free newsletter + the AI Governance Professional Edition), access all previous analyses, and stay ahead in the rapidly evolving field of AI governance.
Step into 2025 with a new career path! In January, join me for a 3-week intensive AI Governance Training (8 live lessons; 12 hours total), already in its 16th cohort. Join over 1,000 professionals who have benefited from our programs: don't miss it. Students, NGO members, and professionals in career transition can request a discount.
AI Governance on the Rise
In the past few days alone, major AI governance developments happened in Canada (twice!), Greece, India, and Spain, indicating that 2025 will start at full speed. I'll discuss each of these developments below, so be sure to read this newsletter all the way through.
From a job market perspective, while tracking new AI governance job openings on a weekly basis, I have observed a steady growth in the number of available opportunities. The entry into force of the EU AI Act is an important driver of this growth, as companies are starting to prepare for the first batch of legal provisions entering into effect in February 2025. Beyond the EU AI Act, other laws and policy efforts worldwide are also driving companies to hire AI governance professionals to ensure compliance is not compromised when AI is integrated into existing products and functionalities.
Expanding the comment about the AI governance job market, it's encouraging to see more technical positions in AI requiring ethical or legal expertise, such as when AI engineering job postings require responsible AI or AI governance skills. It shows how AI governance is inherently interdisciplinary, where compliance teams have to navigate technical complexities and terminologies, and technical teams have to navigate ethics and law. India's "Developer's Playbook for Responsible AI" (see below) is just one more example of AI governance's interdisciplinary essence.
As we enter December, here's a reminder that 2025 promises to be an excellent year for AI governanceโlikely even more accelerated than this year was. Learning and staying updated every day will be more important than ever, and I look forward to being a partner in your journey!
Step Into 2025 with a New Career Path
If you are dealing with AI-related challenges at work, don't miss our acclaimed live online AI Governance Training, now in its 16th cohort, and start 2025 at full power.
This January, we're offering a special intensive format for participants in Europe and Asia-Pacific: all 8 lessons (12 hours of live learning with me) condensed into just 3 weeks, allowing participants to catch up with recent developments and upskill.
✅ Our unique curriculum, carefully curated over months and constantly updated, focuses on AI governance's legal and ethical topics, helping you elevate your career and stay competitive in this emerging field.
✅ Over 1,000 professionals from 50+ countries have advanced their careers through our programs, and alumni consistently praise their experience; see their testimonials. Students, NGO members, and people in career transition can request a discount.
✅ Are you ready? Select your cohort and register today:
*You can also sign up for our learning center to receive updates on future training programs, along with educational and professional resources.
🇨🇦 Legal Info. Institute Sued Caseway AI
The Canadian Legal Information Institute sued Caseway AI over copyright infringement. Pay attention to the creepy practice and why the AI company will probably lose:
➤ Quick summary of the lawsuit:
1️⃣ "The plaintiff, the Canadian Legal Information Institute ("CanLII"), is a not-for-profit organization that owns and operates a proprietary search engine and database containing its work product, including court decisions, legislation and secondary sources that have been reviewed, curated, catalogued and enhanced by CanLII at significant cost and effort (the "CanLII Works")
2️⃣ In keeping with its mandate, CanLII provides the public with free access to the CanLII Works on certain terms and conditions.
3️⃣ The Defendants, and each of them, have created a business by wrongfully taking for themselves the CanLII Works by way of a bulk and systematic download from the CanLII website without permission from or compensation to CanLII.
4️⃣ In doing so, the Defendants, and each of them, have engaged in the blatant and willful breach of CanLII's terms of use and have otherwise infringed CanLII's copyright in the CanLII Works.
5️⃣ CanLII seeks, among other things, injunctive relief, damages and disgorgement of the Defendants' profits resulting from the Defendants' wrongful taking and misappropriation of the CanLII Works, including copyright infringement under the Copyright Act, R.S.C. 1985, c. C-42."
➤ Now, pay attention to this part:
"(...) at a time unknown to CanLII, but known by the Defendants, one or more of the Defendants or their agents accessed the CanLII Website and coordinated the bulk and systematic download and scraping of the CanLII Works from the CanLII Website (...). On or about October 3, 2024, CanLII was alerted that the Copied Works were placed in an open elastic cluster located on a host with the IP address (...) which was the same host used by the Defendants, or some of them, to develop and host the Caseway Platform. Although a full investigation of the incident is ongoing, to CanLII's knowledge to date, the Copied Works include over 120 gigabytes of data and 3.5 million records."
➤ In my view, especially given the AI company's bad faith and the explicit infringement of the Terms of Use, the Canadian Legal Information Institute is likely to succeed in this lawsuit.
🇨🇦 Canadian News Companies Sued OpenAI
Several Canadian news media companies sued OpenAI over copyright infringement, highlighting the circumvention of technological protections and breach of their Terms of Use. Will they be successful? Read this:
➤ The news media companies are seeking a declaration that OpenAI is liable for:
1️⃣ "Infringing, authorizing, and/or inducing the infringement of the News Media Companies' copyright in the Owned Works (defined below), contrary to sections 3 and 27 of the Copyright Act, RSC 1985, c C-42 ("Copyright Act");
2️⃣ Engaging in prohibited circumvention of technological protection measures that prescribed access to, and restricted copying of, the News Media Companies' Works (defined below), contrary to, and within the meaning of, section 41 and 41.1 of the Copyright Act;
3️⃣ Breaching the Terms of Use (defined below) of the News Media Companies' Websites (defined below); and
4️⃣ Unjustly enriching themselves at the expense of the News Media Companies."
➤ The media companies argue that they implemented technological protections on their websites, including exclusion protocols (e.g., robots.txt) and account or subscription-based restrictions. However, OpenAI allegedly circumvented these protective measures while scraping their content for its for-profit activities.
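For readers less familiar with the exclusion protocol mentioned above: robots.txt is a plain-text file of User-agent/Disallow directives that a site publishes at its root, and compliant crawlers check it before fetching pages. Here is a minimal sketch using Python's standard-library parser; the directives and URL below are illustrative, not taken from any of the plaintiffs' sites (GPTBot is OpenAI's published crawler user agent):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt of the kind news publishers use to exclude
# AI crawlers while leaving the site open to other bots.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks permission before fetching each URL;
# scraping despite a Disallow is the alleged circumvention at issue.
article = "https://example.com/news/article-123"
print(parser.can_fetch("GPTBot", article))        # excluded
print(parser.can_fetch("SomeOtherBot", article))  # allowed
```

The key point for the lawsuit is that robots.txt is honored voluntarily by the crawler; it restricts nothing by itself, which is why the complaint pairs it with account and subscription restrictions and with the breach-of-terms claim.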
➤ Regarding the breach of the Terms of Use, they argue:
"(...) each of the Terms of Use expressly prohibit the use of the News Media Companies' Websites and Works for any use other than personal, non-commercial uses. The Terms of Use also generally prohibit users from reproducing, distributing, broadcasting, making derivative works from, retransmitting, distributing, publishing, communicating, or otherwise making available any of the Works. Any uses not expressly permitted by the Terms of Use require the News Media Companies' express consent, particularly commercial uses.
Since as early as 2015, OpenAI has breached and continues to breach the applicable Terms of Use for each of the Websites in various ways, including by accessing, scraping, and/or copying the Works for use as part of the Training Data to train its GPT models and/or as part of the RAG Data to augment its for-profit commercial products and services."
➤ The Terms of Use breach argument, coupled with the claim of infringement of the technological protective measures, could provide a strong legal basis for the news media companies to succeed in this lawsuit. Why? Because it shifts the focus of the dispute away from potential fair use-based defenses or endless discussions about the technicalities of the LLM training process.
🇮🇳 Developer's Playbook for Responsible AI
India has recently published its "Developer's Playbook for Responsible AI," and every country should be working on something similar. Here's why:
➤ According to the authors:
"The playbook is designed to provide a voluntary framework for developers to systematically identify and mitigate the potential risks associated with the commercial development, deployment, and use of AI in India. The risk mitigation guides provided in the playbook contain consolidated AI risk libraries with corresponding sets of risk mitigation prompts, based on our evaluation of existing public and private AI risk management frameworks and hard and soft regulation, most relevant and applicable to the commercial implementation of AI in India. The references section appended to this playbook provides a list of the regulatory and non-regulatory resources that informed the formulation of the risk mitigation guides."
➤ As national AI governance frameworks are being developed worldwide, including AI laws, standards, and best practices, an essential aspect of the public debate is bridging technical and legal elements and building interdisciplinary AI governance teams. Why?
➤ Traditionally, law and computer science have been separate fields, employing different languages, concepts, methods, and strategies. For AI governance to be effective, preventive, and by design, it should, as much as possible, promote a dialogue where: a) AI developers understand basic legal, governance, and compliance concepts and know when to consult the legal team; and b) lawyers are equipped to navigate AI's technical aspects and work constructively with tech teams from the beginning of the AI development journey.
➤ Pay attention:
"The playbook uses technical or legal jargon at a minimum, only when necessary to maintain precision or accuracy. The glossary section contains the definitions of the jargon and special terms used in the playbook."
🇬🇷 Blueprint for Greece's AI Transformation
Greece published its "Blueprint for AI Transformation," and it's a must-read for everyone involved in AI governance, or for those with a taste for extreme graphic design! These will be Greece's priority areas:
1️⃣ "Preparing citizens for the AI transition: introducing AI and its associated disciplines, including topics on the ethical use of AI, to the educational curriculum, starting at the primary level; introducing reskilling and upskilling programs for the general population; improving AI literacy and spreading its empowerment opportunities across the Greek population; strengthening Greece's AI innovation potential;
2️⃣ Improving public service efficiency for Greek citizens and people living in Greece: organizing smart infrastructures and citizen-friendly government services at the national and local levels;
3️⃣ Safeguarding and enhancing democracy: defending democratic values, facilitating informed public participation in democratic processes, and protecting the public sphere from disinformation and misinformation;
4️⃣ Promoting the quality of health care for all: bolstering the national health system with the capacities to provide better quality, data-driven, and targeted health care, including the prediction and management of chronic and rare diseases, while at the same time protecting the rights of patients;
5️⃣ Democratizing access to, and improving the quality of, education: enhancing the ability of children and young people to learn and develop new skills, improving opportunities available to children with learning disabilities and those from underprivileged backgrounds, and tailoring educational materials to individual interests and abilities of all students;
6️⃣ Turning Greece into an attractive global destination for AI and high-tech investment: developing the ecosystems that will create high-quality jobs and improve the overall productivity of the Greek economy and prosperity of the Greek people;
7️⃣ Preserving and enriching cultural heritage: developing the Greek language and heritage data space; augmenting, personalizing, and enriching the experience of Greek culture; enhancing preservation efforts and protecting the integrity of Greek culture in AI models and artifacts;
8️⃣ Climate mitigation and adaptation: equipping Greece with the capabilities to prepare for, prevent, mitigate, adapt to, and effectively manage the climate crisis and extreme natural disasters; harmonizing technological progress with environmental stewardship to diminish the ecological footprint of AI development and deployment;
9️⃣ Supporting national security: upgrading defense capabilities, including cyber-defense capabilities, and improved safeguarding of national borders."
Live Talk with Ifeoma Ajunwa
Whether you're an employer, an employee, or simply interested in AI, don't miss my conversation with Ifeoma Ajunwa, the last AI Governance Live Talk of 2024. We'll be diving into AI and the workplace:
➵ Ifeoma Ajunwa, JD, PhD, is an award-winning tenured law professor and author of the highly acclaimed book "The Quantified Worker." She is a Professor at Emory School of Law, the Founding Director of the AI and Future of Work Program, and a renowned expert in the ethical governance of workplace technologies.
➵ Among the topics we'll discuss in this session are:
- Worker surveillance, quantification, and exploitation;
- How existing AI applications in the workplace are making things worse;
- Existing policies and laws on AI in the workplace;
- How the EU AI Act approaches the topic;
- What we should advocate for;
- and more.
➵ As employers deploy AI ubiquitously, workers remain unprotected, and existing policies and laws might not be enough. I invite you to participate and to invite friends to join this fascinating live conversation next month!
To join the live session, register here.
Find all my previous live conversations with privacy and AI governance experts on my YouTube Channel.
AI Book Club: What Are You Reading?
More than 1,900 people have already joined our AI Book Club and receive our bi-weekly book recommendations.
The 15th recommended book was "AI Snake Oil: What AI Can Do, What It Can't, and How to Tell the Difference" by Arvind Narayanan & Sayash Kapoor.
Ready to discover your next favorite read? See our previous reads and join the book club here.
🇪🇸 Spain's Royal Decree on AI Copyright
Spain proposed a draft Royal Decree regulating collective licenses for the mass exploitation of copyrighted works to train AI. This could shake up the field of AI; here's what everyone should know:
➡️ According to the draft decree's summary:
"The purpose of the draft Royal Decree is to develop article 163 of the TRLPI (relating to the granting of non-exclusive authorizations for the use of the repertoire of management entities), with the aim of facilitating the granting of said non-exclusive authorizations (or collective licenses) in the context of technological development of AI (and, in particular, for the development of general-purpose AI models).
The specific mechanism for this would be extended collective licenses, an instrument provided for in Article 12 of the EU Copyright Directive (2019/790), whose inclusion in the internal regulations of the EU Member States is voluntary but which has proven particularly timely in the context of AI."
➡️ The first article of the proposed decree establishes the conditions for granting these extended collective licenses:
➤ "Obtaining intellectual property rights holders' authorization on an individual basis is so onerous and difficult that it makes the required operation improbable.
➤ The collective management entity is, on the basis of its mandates, sufficiently representative in Spain of the rights holders of the corresponding category of protected works or services and of the rights that are the subject of authorization, which will be accredited by the certificate of representation referred to in article 3.
➤ All rights holders are guaranteed equal treatment in relation to the terms of the non-exclusive authorization.
➤ Rights holders who have not authorized the entity to grant the non-exclusive authorization may exclude their protected works or services from the extended collective license at any time, easily and effectively.
➤ Appropriate publicity measures are taken to inform rights holders who have not granted a management mandate, for a reasonable period, before the protected works or services are used under the non-exclusive authorization."
➡️ This is still a draft proposal and might change. However, given that it's based on Article 12 of the EU Copyright Directive, if it works, other EU countries might follow suit.
➡️ This is an extremely interesting development in the field of AI copyright, which may signal how the field will evolve and how lawsuits will be decided in the months to come, at least from an EU perspective.
➡️ If you want to have a say on AI copyright issues in the EU, I highly recommend that you submit your contributions and participate in this discussion. The deadline is December 10. If you have friends who might want to participate, share this with them.
🤺 AI Battles: Oversight & Explainability
If you are interested in AI, you can't miss last week's special AI Governance Professional Edition, where I explored the challenges of translating ethical principles into legal provisions, particularly concerning human oversight and the explainability of AI systems. Additionally, I analyzed why certain provisions of the EU AI Act might prove ineffective.
Read the preview here. If you're not a paid subscriber, upgrade your subscription to access all previous and future analyses in full.
Job Opportunities in AI Governance
Below are 10 new AI governance positions posted in the last few days. This is a competitive field: if you see a relevant opportunity, apply today:
🇺🇸 Sony Interactive Entertainment: Director, AI Governance - apply
🇺🇸 Mobius Ventures: AI Governance Fellow - apply
🇺🇸 MassMutual: Head of Privacy, Data & AI Governance - apply
🇮🇹 BIP: AI Governance Specialist - apply
🇩🇪 Dataiku: Software Engineer, AI Governance - apply
🇬🇧 Vodafone: Data Privacy and Responsible AI Manager - apply
🇯🇵 Rakuten: Research Scientist, Responsible AI - apply
🇸🇬 Resaro: Senior Responsible AI Scientist - apply
🇮🇳 Accenture in India: Responsible AI Advisor - apply
🇺🇸 TikTok: Research Scientist, Responsible AI - apply
More job openings: subscribe to our AI governance and privacy job boards to receive weekly job opportunities. Good luck!
Thank you for reading!
If you have comments on this edition, write to me, and I'll get back to you soon.
AI is more than just hype; it must be properly governed. If you found this edition valuable, consider sharing it with friends and colleagues to help spread awareness about AI policy, compliance, and regulation. Thank you!
Have a great day.
All the best, Luiza