👋 Hi, Luiza Jarovsky here. Welcome to the 104th edition of this newsletter on AI policy & regulation, read by 24,400+ subscribers in 130+ countries. I hope you enjoy reading it as much as I enjoy writing it.
📢 I know many of you are involved in exciting projects in AI policy, regulation, and governance. I would love to hear more and feature some of these projects in the newsletter. Reply to this email and let me know.
📜 The 1st international treaty on AI
The Council of Europe adopts the 1st international treaty on AI. Here's what you need to know:
➡️ The treaty, formally the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, was adopted in Strasbourg during the annual ministerial meeting of the Council of Europe's Committee of Ministers.
➡️ According to the official release:
"The treaty, which is also open to non-European countries, sets out a legal framework that covers the entire lifecycle of AI systems and addresses the risks they may pose, while promoting responsible innovation."
"The convention adopts a risk-based approach to the design, development, use, and decommissioning of AI systems, which requires carefully considering any potential negative consequences of using AI systems."
"The treaty covers the use of AI systems in the public sector – including companies acting on its behalf – and in the private sector. The convention offers parties two ways of complying with its principles and obligations when regulating the private sector: parties may opt to be directly obliged by the relevant convention provisions or, as an alternative, take other measures to comply with the treaty's provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law. This approach is necessary because of the differences in legal systems around the world."
➡️ Read the treaty here.
✅ The Council gives final green light to the EU AI Act
But it's NOT enforceable yet. Here's what will happen next:
➵ In the coming days, the legislative act will be published in the EU’s Official Journal;
➵ 20 days after the publication in the EU’s Official Journal, the EU AI Act will enter into force, and its provisions will start taking effect in stages;
➵ 6 months later: countries will be required to ban prohibited AI systems;
➵ 1 year later: rules for general-purpose AI systems will start applying;
➵ 2 years later: the whole AI Act will be enforceable.
➵ Fines for non-compliance can reach €35 million or 7% of worldwide annual turnover, whichever is higher.
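The staggered timeline and the fine cap above can be sketched as a small calculation. The entry-into-force date below is hypothetical (the actual date depends on publication in the Official Journal), and the helper names are my own:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later (day clamped to 28 for safety)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

def fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Maximum AI Act fine: EUR 35 million or 7% of turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical entry-into-force date, for illustration only
entry_into_force = date(2024, 8, 1)
milestones = {
    "prohibitions on banned AI systems apply": add_months(entry_into_force, 6),
    "general-purpose AI rules apply": add_months(entry_into_force, 12),
    "whole AI Act enforceable": add_months(entry_into_force, 24),
}
for label, when in milestones.items():
    print(f"{label}: {when.isoformat()}")

print(f"fine cap for EUR 1bn turnover: {fine_cap(1_000_000_000):,.0f}")
# 7% of 1bn (70,000,000) exceeds the 35,000,000 floor
```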
💡 This is ground-breaking AI legislation that will have a global impact. To learn more about the EU AI Act, check out my 4-week Bootcamp on the topic, and if you are not yet a paid subscriber of this newsletter, upgrade to paid and start receiving my weekly in-depth analyses.
📋 New OECD report on AI incidents
The OECD publishes a new AI report: "Defining AI incidents and related terms," and it's a must-read for everyone in AI. Important information:
➡️ An AI incident is defined as:
"an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:
➵ injury or harm to the health of a person or groups of people;
➵ disruption of the management and operation of critical infrastructure;
➵ violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
➵ harm to property, communities or the environment."
➡️ An AI hazard is defined as:
"An AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident, i.e., any of the following harms:
➵ injury or harm to the health of a person or groups of people;
➵ disruption of the management and operation of critical infrastructure;
➵ violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
➵ harm to property, communities or the environment."
➡️ Types of harm listed by the report:
➵ Physical harm
➵ Environmental harm
➵ Economic or financial harm, including harm to property
➵ Reputational harm
➵ Harm to public interest
➵ Harm to human rights and to fundamental rights
➵ Psychological harm
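The OECD distinction above (an incident involves harm that has materialised; a hazard involves harm that could plausibly occur) can be sketched as a minimal classification helper. The class, field, and enum names are my own illustration, not the OECD's:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class HarmType(Enum):
    """Harm dimensions listed in the OECD report."""
    PHYSICAL = auto()
    ENVIRONMENTAL = auto()
    ECONOMIC = auto()          # economic or financial harm, incl. harm to property
    REPUTATIONAL = auto()
    PUBLIC_INTEREST = auto()
    HUMAN_RIGHTS = auto()      # human rights and fundamental rights
    PSYCHOLOGICAL = auto()

@dataclass
class AIEvent:
    """An event involving the development, use, or malfunction of an AI system."""
    description: str
    harms: list = field(default_factory=list)           # harms that occurred
    plausible_harms: list = field(default_factory=list) # harms that could plausibly occur

def classify(event: AIEvent) -> str:
    """Apply the OECD definitions: harm occurred -> incident; plausible -> hazard."""
    if event.harms:
        return "AI incident"
    if event.plausible_harms:
        return "AI hazard"
    return "neither"

print(classify(AIEvent("chatbot output defamed a person", harms=[HarmType.REPUTATIONAL])))
# AI incident
```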
➡️ The report states:
"A further step would be to establish clear taxonomies to categorise incidents for each dimension of harm. Assessing the “seriousness” of an AI incident, harm, damage, or disruption (e.g., to determine whether an event is classified as an incident or a serious incident) is context-dependent and is also left for further discussion."
➡️ Read the full report here.
❌ Sony's AI training opt-out declaration
Sony Music Group publishes a statement confirming that using its content, including publicly available creative works, to train AI is prohibited.
"(…) innovation must ensure that songwriters’ and recording artists’ rights, including copyrights, are respected."
➡️ Read the full statement here.
⚖️ Voice actors sue AI voice generator
Voice actors Paul Lehrman and Linnea Sage sue Lovo AI, a self-described "hyper-realistic AI voice generator," for creating voice-over productions without permission or proper compensation.
➡️ Quotes:
"This is a class action brought on behalf of Plaintiffs and similarly situated persons whose voices and/or identities were stolen and used by LOVO – to create millions of voice-over productions – without permission or proper compensation, in violation of numerous state right of privacy laws, and the federal Lanham Act. (...) To be clear, the product that customers purchase from LOVO is stolen property. They are voices stolen by LOVO and marketed by LOVO under false pretenses: LOVO represents that it has the legal right to market these voices, but it does not." (page 1)
"LOVO claims that its Genny voices were created using thousands of other voices. The voice of “Kyle Snow” was undoubtedly the voice of Plaintiff Lehrman. The voice of “Sally Coleman” was undoubtedly the voice of Plaintiff Sage. Upon information and belief, the voices of other LOVO voice options are undoubtedly the voices of other class Plaintiffs who neither gave their authorization to use their voice – for either teaching Genny, use by LOVO, or sale by LOVO as part of its service – nor were properly compensated." (page 17)
"Defendant not only improperly appropriates the voices of the named Plaintiffs who are “working actors” but not “celebrities”; Defendant also borrows the name and likeness of some of the nation’s most well-known celebrities, including Barack Obama (crudely represented as “Barack Yo Mama”), Conan O’Brien (“Cocoon O’Brien”), and Elton John (“Elton John Cena”), to promote its services and show the capabilities of the LOVO product." (page 21)
➡️ Read the lawsuit here.
🔎 From Search to Generative AI
Google's "AI Overviews" is the beginning of the transition from Search as we know it to a Generative AI-powered internet. It will also make us more dependent on AI systems. Here's why:
➡️ During Google's I/O 2024 conference, the company revealed changes to its search engine, such as the new AI-powered feature "AI Overviews."
➡️ From Google's blog post "Generative AI in Search: Let Google do the searching for you":
"Now, with generative AI, Search can do more than you ever imagined. So you can ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork."
"Sometimes you want a quick answer, but you don’t have time to piece together all the information you need. Search will do the work for you with AI Overviews."
"When you’re looking for fresh ideas, it can take a lot of work to find inspiration and consider all your options. Soon, when you’re looking for ideas, Search will use generative AI to brainstorm with you and create an AI-organized results page that makes it easy to explore."
➡️ According to the blog post, AI Overviews will:
- research, plan, and brainstorm
- piece the information together
- find inspiration and consider your options
- make results easy to explore
➵ Why is this important? We are obviously not talking about "search" anymore, which involves trying to extract content or answers from an immense online database (the internet). We are now talking about letting generative AI do the creative and intellectual work for us.
➵ For example, the time and effort we spend planning a trip with friends are an opportunity for connection, during which we can exercise our creativity, memory, choices, emotional and intellectual skills, and more.
➵ The time and effort we use to brainstorm a project are usually times of intense intellectual growth, during which we understand more about the project, about ourselves, and about the world.
➵ In the near future, we'll be technologically and socially prompted to delegate this time and effort to AI systems, which will be capable of accomplishing intellectual and creative tasks for us.
➵ It's still unclear to me what we'll do with our time when generative AI does all the intellectual and creative "legwork" of life for us.
➵ It's also unclear to me what will happen to our knowledge of the world when we stop dedicating time and effort to learning it (and delegate to AI systems instead). We'll become immensely dependent.
📌 Resources on AI, tech & privacy
If you enjoy this newsletter, you might also want to subscribe to our:
➵ Job alerts: receive a weekly curation of privacy & AI governance jobs
➵ AI Book Club: receive our AI book recommendations
➵ Learning Center: receive information on upcoming learning opportunities in AI, tech & privacy
🏛️ US Senate releases a roadmap for AI policy
The US Senate released its roadmap for AI policy based on the nine bipartisan 'AI Insight Forums' hosted by its AI Working Group, covering the following topics:
1. Inaugural Forum
2. Supporting U.S. Innovation in AI
3. AI and the Workforce
4. High-Impact Uses of AI
5. Elections and Democracy
6. Privacy and Liability
7. Transparency, Explainability, Intellectual Property, and Copyright
8. Safeguarding Against AI Risks
9. National Security
➡️ More than 150 people attended these forums, including developers, deployers, AI startups and companies, providers of key components of the AI supply chain, academics, AI researchers, think tanks, labor unions, and civil rights leaders.
➡️ Quotes:
"Participants agreed that AI could have a significant impact on our democratic institutions. Participants shared examples demonstrating how AI can be used to influence the electorate, including through deepfakes and chatbots, by amplifying disinformation and eroding trust. Participants also noted how AI could improve trust in government if used to improve government services, responsiveness, and accessibility."
"Some participants noted that a national standard for data privacy protections would provide legal certainty for AI developers and protection for consumers. Participants observed that the “black box” nature of some AI algorithms, and the layered developer-deployer structure of many AI products, along with the lack of legal clarity, might make it difficult to assign liability for any harms. There was also agreement that the intersection of AI, privacy, and our social world is an area that deserves more study."
"Some participants noted that there is a role for the federal government to play in protecting American companies’ and individuals’ IP while supporting innovation. Participants shared stories about creators struggling to maintain their identities and brands in the age of AI as unauthorized digital replicas become more prevalent. Participants agreed that the United States will play a key role in charting an appropriate course on the application of copyright law to AI."
"Participants raised awareness about countries like China that are heavily investing in commercial AI and aggressively pursuing advances in AI capacity and resources. In order to ensure that our adversaries don’t write the rules of the road for AI, participants reinforced the need to ensure the DOD has sufficient access to AI capabilities and takes full advantage of its potential."
➡️ Read the full report here.
📄 Great AI paper alert
➡️ The paper "Consent and Compensation: Resolving Generative AI’s Copyright Crisis" by Frank Pasquale & Haochen Sun is a must-read for everyone interested in AI, copyright, and artists' rights. Quotes:
"The opacity and scale of AI systems is disrupting the knowledge ecosystem by significantly eroding authors’ proprietary control of their works, well beyond extant digital practices that have already undermined many authors’ well-being. Whereas prior scraping at scale tended to be focused on the non-expressive aspects of works (such as facts), AI is focused by many prompts on their expressive dimensions. Search engines have historically provided links which lead users to works themselves. In contrast, AI tends to provide substitutes for such works, while failing to provide citations to the works in the dataset most similar to the texts, images, and videos it presents as a computed synthesis." (pages 8-9)
"Under the proposed mechanism, copyright owners can first request AI providers to take actions to effectively prevent their systems from generating outputs that appear identical or substantially similar to relevant copyrighted works. A copyright owner would be entitled to send a notice to an AI provider when he or she identifies that an output generated by the provider’s AI system contains either a verbatim or substantially similar copy of his or her work, or a derivative work. In the notice, the copyright owner would be obliged to document the unauthorized reproduction of the work and his or her copyright ownership, along with a digital copy or an online link to the work." (page 21)
"Given the complexity of the AI supply chain, particularly with respect to generative AI, it is not feasible to impose a per-device cost on AI providers. However, other triggers for payment are possible. Levies on the use of particular datasets may be imposed, or on model training, or on some aggregate number of responses provided to users, or on paid subscriptions. Alternatively, the level of the levy could be benchmarked with respect to some percentage of AI providers’ expenditures or revenues." (page 39)
➡️ Read the paper here.
💡 On AI assistants and intellectual dependency
When we don't use intellectual skills, they recede.
For example, if we stop speaking a language for years, our vocabulary is drastically reduced (unless we practice it again).
If AI assistants start doing our life's intellectual "legwork" for us, what will happen to our brains?
What if they are biased? What if they are programmed to "nudge" us? What if they suddenly stop working?
What type of individuals will we become? What type of society will emerge?
That concerns me from both theoretical and practical perspectives.
🎤 Join my upcoming live session
➡️ As AI development continues at high speed and AI assistants become more advanced, important legal and ethical implications must be discussed.
➡️ In this live session, I'll share my thoughts on the current state of AI assistants and my main concerns from a legal and ethical perspective. The session is open to everyone, and it would be great to see you there and hear your questions and concerns.
➡️ For those interested, after the live session, I'll send additional information and resources.
🚀 Boost your AI governance career
Our new Bootcamp Generative AI Legal Issues: Advanced Program is especially recommended to those who want to advance their careers in AI governance & compliance. Here's what to expect and why to join:
➵ Despite the ongoing technological disruption, legal and compliance professionals have a clear opportunity to learn about AI-related issues, navigate emerging challenges, integrate new skills and frameworks with their existing knowledge, and lead.
➵ The program starts on June 3 and covers AI ethics, data protection, intellectual property, consumer protection, liability, antitrust, governance, and more. It's an excellent opportunity for those who want to advance their careers in AI governance and compliance. It's a live online program designed and led by me and includes additional material, office hours, quizzes, and a certificate.
➵ Those who read this newsletter every week or who have attended our previous Bootcamps already know that you can expect me to cover the most up-to-date discussions and debates at a global level. I hope to see you there in June!
🙏 Thank you for reading!
If you have comments on this week's edition, write to me, and I'll get back to you soon.
To receive the next editions in your email, subscribe here.
Have a great day.
Luiza