👋 Hi, Luiza Jarovsky here. Welcome to the 136th edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 36,100+ subscribers in 150+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI Governance Professional Edition, I'll explore the latest developments in the debate about relying on legitimate interest to train AI, including the EDPB Guidelines released this week. Paid subscribers will receive it tomorrow. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly newsletter editions (this free newsletter + the AI Governance Professional Edition) and stay ahead in the fast-paced field of AI governance.
🎉 Registration for the November cohorts is open! If you are transitioning to AI governance and are ready to invest 5 hours a week in learning & upskilling, our 4-week live online AI Governance Training is for you: join 1,000+ professionals from 50+ countries who have accelerated their careers with our programs. 👉 Save your spot today and get 10% off with the coupon EARLYBIRD1314 (valid only until Monday, October 14th).
🚂 Next Station: AI Liability
The EU AI Act entered into force on August 1st, but AI regulation doesn't end there. Since the AI Act doesn't cover AI-related liability, the EU must still decide on its liability rules: the legal consequences and the procedures that apply when AI systems cause harm.
Among the questions the EU's AI liability rules will have to answer: Who will be responsible for the damage or harm caused (the provider of the general-purpose AI model, the provider of the AI system, or the deployer of the AI system)? Does the victim have to prove negligence or intent on the part of the responsible entity, or are causation and harm enough? What types of harm will the law recognize? How will compensation be established? What will the legal procedure look like?
As legal background: each EU Member State has its own national liability rules, and there is also the recently amended Product Liability Directive, which is expected to enter into force soon and will then need to be transposed into each Member State's national legal framework. Despite the amendments meant to make the Product Liability Directive "fit for the digital age" (the original text dates from 1985), most experts agree that a legal framework exclusively focused on AI liability is still needed.
There is already an ongoing legislative process around AI liability rules: the European Commission presented a proposal in 2022 (the AI Liability Directive, or AILD). The European Parliament's Committee on Legal Affairs recently requested a complementary impact assessment of the proposal, focusing on specific issues. A few weeks ago, the European Parliament presented a document written by Philipp Hacker titled "Proposal for a directive on adapting non-contractual civil liability rules to AI." It's an extremely interesting proposal and a must-read for everyone in AI. Some of its key recommendations are:
1️⃣ "Identity of concepts and definitions. The AILD includes several key concepts from the AI Act. For reasons of coherence and legal clarity, the AILD should adopt the concepts used in the AI Act (e.g. the definition of AI itself).
2️⃣ From high-risk to high-impact AI systems. The AILD should, however, add certain categories that trigger the evidence disclosure obligations and the rebuttable presumptions concerning fault and causality. This primarily concerns: general-purpose AI systems (e.g. ChatGPT); Old Legislative Framework systems (e.g. autonomous vehicles; transportation-related AI applications more generally; other AI systems falling under Annex I Section B AI Act); and insurance applications beyond health and life insurance. The study suggests an umbrella term ('high-impact AI systems') to cover high-risk AI systems and those additional systems.
3️⃣ Rebuttal of presumption. The AILD framework should allow for the causality presumption to be rebutted in cases where initial violations of the AI Act are rectified at later stages.
4️⃣ Article 14 and 26 AI Act violations. Articles 14 and 26 AI Act require human oversight mechanisms in AI systems. The direct causation between a lack of ex-post oversight and harmful outputs is not always clear. It is suggested to establish a direct presumption of causality between AI outputs and damages for non-compliance with monitoring obligations.
5️⃣ Handling of prohibited AI systems. For AI systems banned under Article 5 AI Act, the recommendation is to assume strict liability for any damages they cause.
6️⃣ Impact of general-purpose AI systems. The current AILD framework does not adequately cover general-purpose AI systems, which can lead to significant harm particularly in the realms of non-discrimination (e.g. unbalanced content) and personality rights (e.g. hate speech and fake news). It is recommended that generative AI systems, such as ChatGPT, be classified under the new 'high-impact' category. This would bring them under the ambit of the AILD, ensure evidence disclosure, and establish presumptions of causality for safety violations. This, in turn, aids injured parties in legal claims.
7️⃣ Extension of the AILD beyond the PLD. Given the PLD's limitations (e.g. concerning nonprofessional users and types of damage not covered by the PLD), there is a strong case for extending the AILD, to ensure a comprehensive liability framework."
-
It is not only in the EU that lawmakers and AI governance professionals are thinking about AI liability rules. In a recent report titled "U.S. Tort Liability for Large-Scale AI Damages," by Ketan Ramakrishnan, Gregory Smith, and Conor Downey, the authors discuss U.S. tort law and its significance in the context of large-scale harm caused by AI. According to the authors:
"The report is intended to be useful to AI developers, policymakers, and other nonlegal audiences (as well as lawyers) who wish to understand the liability exposure that AI development may entail, how this exposure might be mitigated, and how the existing liability regime might be improved by legislation in order to enhance its ability to properly incentivize responsible innovation."
➡️ Below are the report's key findings:
➵ "Tort law is a significant source of legal risk for developers that do not take adequate precautions to guard against causing harm when developing, storing, testing, or deploying advanced AI systems.
➵ There is substantial uncertainty, in important respects, about how existing tort doctrine will be applied to AI development. Jurisdictional variation and uncertainty about how legal standards will be interpreted and applied may generate substantial liability risk and costly legal battles for AI developers.
➵ AI developers that do not employ industry-leading safety practices, such as rigorous red-teaming and safety testing or the installation of robust safeguards against misuse, among others, may substantially increase their liability exposure.
➵ While developers face significant liability exposure from the risk that third parties will misuse their models, there is considerable uncertainty about how this issue will be treated in the courts, and different states may take markedly different approaches.
➵ Safety-focused policymakers, developers, and advocates can strengthen AI developers' incentives to employ cutting-edge safety techniques by developing, implementing, and publicizing new safety procedures and by formally promulgating these standards and procedures through industry bodies.
➵ Policymakers may wish to clarify or modify liability standards for AI developers and/or develop complementary regulatory standards for AI development."
-
It's important to remember that AI liability is not the final station on the AI regulation journey either: many open questions remain about how to apply traditional legal concepts, principles, and rules when AI is involved.
AI governance, regulation, and the ethical & legal implications of AI are precisely the topics of my 4-week AI Governance Training. The program includes 12 hours of live online lessons with me, 16 CPE credits pre-approved by the IAPP, and more. Over 1,000 people have benefited from our training programs. If you're looking to advance your career in AI governance, join our 13th cohort in November. Register here to secure your spot.
💼 AI Report: AI & the Legal Profession
The International Bar Association and the Center for AI and Digital Policy published an excellent report about AI and the legal profession, which every lawyer should read. These are the report's key recommendations:
1️⃣ "Promote widespread AI adoption with special support for smaller firms. Develop programmes and resources specifically targeted at smaller law firms to assist them in integrating AI technologies. (...)
2️⃣ Enhance AI governance and policy development. Establish comprehensive guidelines and best practices for AI governance, emphasising data governance, security, IP and privacy. (...)
3️⃣ Support structural and cultural changes in law firms. Provide guidance on the organisational changes required to integrate AI effectively. (...)
4️⃣ Facilitate AI training. Develop or identify training programmes focused on the legal profession and AI literacy. (...)
5️⃣ Encourage comprehensive stakeholder consultation for AI regulation. Advocate for the inclusion of diverse stakeholders – tech experts, industry representatives, academia and others, perhaps including civil society end-users and consumers, in the AI regulatory process. (...)
6️⃣ Promote consistency and coherence in AI regulation. Work with regulatory bodies to develop consistent and coherent AI regulations that avoid fragmentation and disorganisation. Special consideration should be given to cross-border issues and where possible, harmonisation. Emphasise the importance of stable yet flexible regulatory frameworks that can adapt to the evolving nature of AI technology while protecting legal and ethical standards.
7️⃣ Update ethical guidelines to reflect AI use. Revise and update ethical guidelines to include specific provisions for AI use. This should encompass the proper supervision and use of AI tools, the setting of standards for AI-generated work to meet professional ethical guidelines and include disclosure obligations regarding the use of AI. (...)
8️⃣ Foster global collaboration and knowledge-sharing. Promote international collaboration and knowledge sharing among national bar associations, law societies and legal professionals. (...)"
🌳 AI Research: EU AI Act & Environmental Impact
Will the AI Act help mitigate AI's environmental impact? The paper "AI, Climate, and Transparency: Operationalizing and Improving the AI Act" by Nicolas Alder, Kai Ebert, Ralf Herbrich, and Philipp Hacker is an excellent read, offering concrete policy proposals:
➡️ According to the paper:
"The AI Act is a first step toward mandatory AI related climate reporting, but is riddled with loopholes and vague formulations. To remedy this, we make six key policy proposals. Such mechanisms should not only be included in the evaluation report due in August 2028 (Art. 111(6)), but in any interpretive guidelines by the AI Office and other agencies, reviews and potential textual revisions beforehand."
➡️ Below, you can find the six main observed shortcomings and the authors' respective policy proposals:
➵ Shortcoming: Inference Energy Consumption Exclusion
💡 Policy Proposal: Explicitly include inference in energy reporting obligations in Annexes XI and XII.
➵ Shortcoming: Indirect Emissions and Water Consumption
💡 Policy Proposal: Extend reporting obligations to include water consumption and indirect GHG emissions from AI applications.
➵ Shortcoming: Fine-Tuning Uncertainty
💡 Policy Proposal: Clarify when fine-tuning triggers reporting obligations by tying them to computational cost and training mechanisms.
➵ Shortcoming: Open-Source Models
💡 Policy Proposal: Revoke the open-source exemption to ensure comprehensive climate reporting.
➵ Shortcoming: Lack of Standard Reporting Methodology
💡 Policy Proposal: Measure energy consumption at the cumulative server level, with power usage effectiveness (PUE) reported separately (see the brief sketch after this list).
➵ Shortcoming: Lack of Public Access to Energy Data
💡 Policy Proposal: Make all climate-related disclosures publicly available to foster transparency and market accountability.
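To make the reporting proposals above more concrete, here is a minimal sketch of the server-level accounting they point to. It is my own illustration, not taken from the paper: the function names and all figures (GPU count, power draw, PUE, grid carbon intensity) are assumptions.

```python
# Minimal sketch, not from the paper: the kind of server-level accounting the
# proposals point to. All names and figures below are illustrative assumptions.

def energy_kwh(gpu_count: int, avg_gpu_power_kw: float, hours: float, pue: float) -> float:
    """Facility-level energy: IT energy scaled by PUE (total facility energy / IT energy)."""
    return gpu_count * avg_gpu_power_kw * hours * pue

def indirect_emissions_kg(kwh: float, grid_kg_co2e_per_kwh: float) -> float:
    """Indirect (Scope 2-style) emissions from electricity consumption."""
    return kwh * grid_kg_co2e_per_kwh

if __name__ == "__main__":
    # Hypothetical training run: 512 GPUs drawing 0.4 kW on average for 720 hours,
    # in a data center with PUE 1.2, on a grid emitting 0.3 kg CO2e per kWh.
    training = energy_kwh(512, 0.4, 720, pue=1.2)
    print(f"Training energy: {training:,.0f} kWh")
    print(f"Indirect emissions: {indirect_emissions_kg(training, 0.3):,.0f} kg CO2e")
    # The paper's proposals would extend the same accounting to inference
    # (per query or in aggregate) and to water consumption, and make the
    # disclosures publicly available.
```

Reporting PUE separately, as the authors propose, would let readers distinguish model-level efficiency from data-center overhead.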
➡️ As the paper highlights, transparency and reporting are not enough to tackle AI-related environmental issues, and more must be done from a regulatory perspective:
"(...) climate reporting can only be a first step in addressing the massive and fast-rising environmental impact of AI models and systems. It must be complemented by substantive obligations, including sustainability risk assessment and management, renewable energy targets for data centers, and potentially even (tradable) caps on the energy and water consumption of data centers and similar major consumption drivers in the AI value chain."
📄 AI Report: Competition & Generative AI
The report "Competition in Generative AI and Virtual Worlds" by Klaus Kowalski, Cristina A. Volpin, and Zsolt Zombori is a must-read AI policy brief most people missed, covering essential topics, such as:
1️⃣ Market tendencies
"There are several emerging tendencies characterising generative AI-related markets that seem to be prevailing at the time of writing and may be relevant from a competition perspective"
➵ Tendency towards vertical integration or establishing partnerships to access input resources
➵ Tendency towards vertical integration or establishing partnerships to access distribution channels
➵ Tendency towards more efficient, smaller models
➵ Tendency towards the parallel development of open source and proprietary models
2️⃣ Potential barriers to entry in Generative AI-related markets
"The consultation and the on-going market investigations revealed that the key components for the development and deployment of generative AI systems include:
➵ Data
➵ AI accelerator chips
➵ Computing infrastructure
➵ Cloud capacity
➵ Technical expertise
As mentioned above, depending on the economic context, each of these may qualify as a potential barrier to entry or expansion, or potentially lead to an anticompetitive practice."
3️⃣ Other factors promoting competition in generative AI-related markets
"There are several factors that can be considered important to reduce potential barriers to entry or limit their effects, as well as directly or indirectly promoting competition in generative AI-related markets. Some examples include:
➵ The presence of open-source models (...)
➵ The availability of freely or easily accessible high-quality databases (...)
➵ The availability of freely accessible public supercomputers to researchers and stakeholders
➵ The availability and mobility of AI talent
➵ The ability of customers and consumers to switch and multihome across different cloud or AI foundation model providers
➵ The presence and dissemination of differentiated AI foundation models (...)
➵ The presence of pro-competitive non-exclusive partnerships between generative AI developers and players with access to important inputs or access to consumers
➵ Some well-targeted interoperability standards across different AI foundation models and across different level of the generative AI supply stack."
⚖️ AI Lawsuit: Christopher Farnsworth vs. Meta
Author Christopher Farnsworth sued Meta for copyright infringement, alleging it used pirated copies of his book to develop its AI model Llama. Important quotes:
"In its February 27, 2023 research paper, Meta admitted to downloading and reproducing Books3 willfully as part of its Llama development project. Meta AI researchers explained that the training data used to develop Llama 1 included “two book corpora . . . the Gutenberg Project, which contains books that are in the public domain, and the Books3 section of The Pile (Gao et al., 2020), a publicly available dataset for training large language models.”
"Instead of willfully downloading and reproducing a notorious trove of pirated material, Meta could have lawfully purchased copies of books then negotiated a license to reproduce them. Alas, Meta did not even bother to pay the purchase price for the books it illegally downloaded, let alone obtain a license for their reproduction."
"Meta has also usurped a licensing market for copyright owners. In the last two years, a thriving licensing market for copyrighted training data has developed. AI companies have paid hundreds of millions of dollars to obtain licenses to reproduce high-quality copyrighted material for LLM training. Meta chose to use Plaintiff’s works, and the works owned by the proposed Class, free of charge, and in doing so has harmed the market for the copyrighted works by depriving them of book sales and licensing revenue.”
📋 AI Report: Secure Use of AI Coding Assistants
The French Cybersecurity Agency and the German Federal Office for Information Security published recommendations for the secure use of AI coding assistants, and it's a great read for everyone in AI. Quick summary:
1️⃣ Opportunities
"AI coding assistants can be utilized in several different stages of the software development process. While the generation of source code is the key functionality, these LLM-based AI systems can also help developers to familiarize themselves with new projects by providing code explanations. Furthermore, AI coding assistants can support the code development process by automatically generating test cases and ease the burden of code formatting and documentation steps. The functionality to translate between programming languages can simplify the maintenace efforts by translating legacy code into modern programming languages. Additionally, the assistive nature of the technology can help increase the satisfaction of employees."
2️⃣ Risks
"One important issue is that sensitive information can be leaked through the user inputs depending on the contract conditions of providers. Furthemore, the current generation of AI coding assistants cannot guarantee to generate high quality source code. The results have a high variance in terms of quality depending on programming language and coding task. Similar limitations can be observed for the security of generated source code. Mild and severe security flaws are commonly present in AI-provided code snippets. Moreover, using LLM-based AI systems during software development allows for novel attack vectors that can be exploited by malicious actors. These attack vectors include package confusion attacks through package hallucination, indirect prompt injections and poisoning attacks."
3️⃣ Recommendations
"➵ AI coding assistants are no substitute for experienced developers. An unrestrained use of the tools can have severe security implications.
➵ A systematic risk analysis should be performed before introducing AI tools (including an assessment of the trustworthiness of providers and involved third-parties).
➵ Gain in productivity of development teams must be compensated by appropriate scaling measures in quality assurance teams (AppSec, DevSecOps).
➵ Generated source code should generally be checked and reproduced by the developers."
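To make one of these recommendations concrete, below is a minimal sketch, entirely my own and not part of the agencies' document, that checks whether dependencies suggested by an AI coding assistant actually exist on PyPI; non-existent names are a common symptom of the package hallucination risk mentioned above. The function names and the example package list are assumptions.

```python
# Illustrative sketch (not from the ANSSI/BSI recommendations): flag dependencies
# suggested by an AI coding assistant that do not exist on PyPI, a common symptom
# of "package hallucination" that attackers exploit via package confusion.
import json
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if the package has a metadata page on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # parse to confirm we received real metadata
            return True
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

def vet_suggestions(packages: list[str]) -> None:
    """Print a simple existence report for each suggested dependency."""
    for name in packages:
        status = "ok" if exists_on_pypi(name) else "NOT FOUND - possible hallucination"
        print(f"{name}: {status}")

if __name__ == "__main__":
    # Hypothetical list of dependencies proposed by an assistant.
    vet_suggestions(["requests", "totally-made-up-package-xyz"])
```

An existence check is only a first filter: a typosquatted or maliciously published package will still pass it, which is why the agencies' recommendation that developers review generated code remains essential.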
🏛️ California AI Laws
Most people don't know it, but in September alone, California enacted 17 bills (!) covering Generative AI. What do these bills say? Here's what you need to know:
As I wrote in this newsletter last week, the Governor of California vetoed the much-debated AI safety bill SB 1047. That doesn't mean California has neglected AI regulation. On the contrary: in its own fragmented way, it enacted these 17 Generative AI bills in September. Check them out:
➵ AB 1008: "Clarifies that personal information under the California Consumer Privacy Act (CCPA) can exist in various formats, including information stored by AI systems."
➵ AB 1831: "Expands the scope of existing child pornography statutes to include matter that is digitally altered or generated by the use of AI."
➵ AB 1836: "Prohibits a person from producing, distributing, or making available the digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without prior consent, except as provided."
➵ AB 2355: "Requires committees that create, publish, or distribute a political advertisement that contains any image, audio, or video that is generated or substantially altered using AI to include a disclosure in the advertisement disclosing that the content has been so altered."
➵ AB 2602: "Provides that an agreement for the performance of personal or professional services which contains a provision allowing for the use of a digital replica of an individual’s voice or likeness is unenforceable if it does not include a reasonably specific description of the intended uses of the replica and the individual is not represented by legal counsel or by a labor union, as specified."
➵ AB 2655: "Requires large online platforms with at least one million California users to remove materially deceptive and digitally modified or created content related to elections, or to label that content, during specified periods before and after an election, if the content is reported to the platform."
➵ AB 2839: "Expands the timeframe in which a committee or other entity is prohibited from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content from 60 days to 120 days, amongst other things."
➵ AB 2885: "Establishes a uniform definition for AI, or artificial intelligence, in California law."
➵ And also the following bills:
AB 2013
AB 2876
AB 3030
SB 896
SB 926
SB 942
SB 981
SB 1120
SB 1288
SB 1381
➡️ Read more about each of these bills on Governor Newsom's website.
➡️ Sometimes I wonder... Wouldn't it be easier to have a unified national legal framework regulating AI in the U.S.?
💼 Take your AI Governance career to the next level
➵ Join our 4-week AI Governance Training: a live, online, and interactive program led & designed by me for professionals who want to accelerate their AI governance careers and are ready to dedicate at least 5 hours a week to the program (live sessions + self-learning). Here's what to expect:
➵ The training includes 8 live online sessions with me (90 minutes each), over the course of 4 weeks, totaling 12 hours of live sessions. You'll also receive additional learning material, quizzes, 16 CPE credits pre-approved by the IAPP, and a training certificate. You can always send me your questions or book an office-hours appointment with me. Groups are small, so it's also an excellent opportunity to learn with peers and network.
➵ This is, to the best of our knowledge, one of the most comprehensive and up-to-date AI governance programs available, covering the latest developments in the field. The program consists of two modules:
Module 1: Legal and ethical implications of AI, risks & harms, recent AI lawsuits, the intersection of AI and privacy, deepfakes, intellectual property, liability, competition, regulation, and more.
Module 2: Learn the EU AI Act in depth, understand its strengths and weaknesses, and get ready for policy, compliance, and regulatory challenges in AI.
➡️ Over 1,000 people from 50+ countries have already benefited from our programs, and we are announcing the 13th & 14th cohorts today, with live sessions in November. Are you ready?
🎓 Check out the program, read testimonials, and save your spot here.
🎉 Registration for the November cohorts opens today! Those who register by Monday, October 14, receive 10% off with the coupon EARLYBIRD1314.
We hope to see you there!
🎙️ Next week: AI Regulation Around the World
If you are interested in the current state of AI regulation—beyond just the EU and the U.S.—you can't miss my conversation with Raymond Sun next week: register here. This is what we'll talk about:
➵ Among other topics, we'll discuss the latest AI regulation developments in:
🇦🇺 Australia
🇨🇳 China
🇪🇬 Egypt
🇮🇳 India
🇯🇵 Japan
🇲🇽 Mexico
🇳🇬 Nigeria
🇸🇬 Singapore
🇹🇷 Turkey
🇦🇪 United Arab Emirates
and more.
➵ There can be no better person to discuss this topic than Raymond Sun, a lawyer, developer, and the creator of the Global AI Regulation Tracker, an interactive world map that follows AI regulation and policy developments around the world.
➵ This will be the 19th edition of my Live Talks with global privacy & AI experts, and I hope you can join this edition live, participate in the chat, and stay up to date with AI regulatory approaches worldwide.
👉 To participate, register here.
🎬 Find all my previous Live Talks on my YouTube Channel.
📚 AI Book Club: What Are You Reading?
📖 More than 1,500 people have joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The last book we recommended was Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari.
📖 Ready to discover your next favorite read? See the book list & join the book club here.
🔥 Job Opportunities: AI Governance is HIRING
Below are 10 new AI Governance positions posted in the last few days. Bookmark, share & apply early:
1. Swift 🇬🇧 - AI Governance Lead: apply
2. Vienna Insurance Group (VIG) 🇦🇹 - Data & AI Governance Lead: apply
3. JPMorganChase 🇬🇧 - AI Governance, Data Management Lead: apply
4. YOCHANA 🇺🇸 - AI Governance Lead: apply
5. Accenture España 🇪🇸 - Responsible AI, Monitoring & Compliance: apply
6. BCG X 🇺🇸 - Responsible AI Leader: apply
7. Accenture in India 🇮🇳 - Responsible AI Tech Lead: apply
8. Commonwealth Bank 🇦🇺 - Senior Data Scientist, Responsible AI: apply
9. eBay 🇺🇸 - Senior Researcher, Responsible AI: apply
10. Munich Re 🇬🇧 - Data Privacy, AI & Ethics Lead: apply
🔔 For more AI governance and privacy job opportunities, subscribe to our weekly job alerts. Good luck!
🙏 Thank you for reading!
If you have comments on this edition, write to me, and I'll get back to you soon.
If you found this edition valuable, consider sharing it with friends & colleagues, and help me spread awareness about AI policy, compliance & regulation. Thank you!
Have a great day.
All the best, Luiza