👋 Hi, Luiza Jarovsky here. Welcome to the 142nd edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 37,500+ subscribers in 150+ countries. I hope you enjoy reading it as much as I enjoy writing it!
💎 In this week's AI Governance Professional Edition, I’ll discuss the intersection of AI and competition law. Paid subscribers will receive it tomorrow. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly newsletter editions (this free newsletter + the AI Governance Professional Edition) and stay ahead in the fast-paced field of AI governance.
⏰ Last week to register! If you are transitioning to AI governance and want to go beyond standard certifications, our 4-week AI Governance Training is for you. Join 1,000+ professionals from 50+ countries who have accelerated their careers through our programs. The 13th cohort starts next week; save your spot!
💬 Free Speech & AI
AI advancements raise critical new challenges in the context of free speech and the First Amendment, including questions such as:
↳ Should Generative AI outputs be considered protected speech?
↳ Can First Amendment doctrines help tackle AI-powered deepfakes?
↳ Can content moderation law help regulate AI chatbots?
↳ Who is liable for Generative AI outputs?
Here are 10 excellent papers to help you dive deeper into the topic. Download, read, and share:
1️⃣ Intentionally Unintentional: GenAI Exceptionalism and the First Amendment (2024) by David Atkinson, Jena D. Hwang, and Jacob Morrison
🔎 Read it here
2️⃣ Generative Artifice: Regulation of Deepfake Exploitation and Deception under the First Amendment (2024) by Michael Murray
🔎 Read it here
3️⃣ Constructing AI Speech (2024) by Margot Kaminski and Meg Leta Jones
🔎 Read it here
4️⃣ The Disembodied First Amendment (2023) by Nathan Cortez and William Sage
🔎 Read it here
5️⃣ Speech Certainty: Algorithmic Speech and the Limits of the First Amendment (2024) by Mackenzie Austin and Max Levy
🔎 Read it here
6️⃣ AI Outputs Are Not Protected Speech (2024) by Peter N. Salib
🔎 Read it here
7️⃣ Algorithmic Enforcement Tools: Governing Opacity with Due Process (2023) by Giancarlo Frosio
🔎 Read it here
8️⃣ Large Libel Models? Liability for AI Output (2023) by Eugene Volokh
🔎 Read it here
9️⃣ Artificial Intelligence and the First Amendment (2023) by Cass Sunstein
🔎 Read it here
🔟 Negligent AI Speech: Some Thoughts About Duty (2023) by Jane Bambauer
🔎 Read it here
💡 Unpopular Opinion
We are in the regulatory Wild West of AI chatbots. Personified AI chatbots (like Replika and CharacterAI) are like "digital cigarettes," addictive and harmful, and should be heavily regulated. Children should not be able to access them.
👉 What are your thoughts? Join the discussion on LinkedIn and Twitter/X.
🤖 “Emotional AI”: Challenges and Risks
As we witness more tragic cases involving AI chatbots, understanding their inherent risks is crucial. The paper "Feels Like Empathy: How 'Emotional' AI Challenges Human Essence," by Angelina Chen, Sarah Isabel Koegel, Oliver Hannon, and Raffaele F. Ciriello, is a must-read. Selected quotes:
"(...). This (de)humanisation paradox involves a complex dilemma where, by personifying AEI [Artificial Emotional Intelligence] agents, we attribute human-like characteristics to probabilistic outputs generated by non-human entities, thereby diminishing our human essence. As we place AEIs on equal footing with genuine human empathy and consciousness, we might contribute to a dilution of the authenticity and complexity of these essential human emotions and traits. Paradoxically, this may diminish what it means to be human, and create problematic comparisons between humans and machines. The uncritical personification of AEI may reduce essential human qualities to predictable and replicable responses from technology, which in turn, may objectify humans and denigrate their unique complexities. Through the belief that AI tools can be human, we may also implicitly reduce humans to tools. This is particularly concerning when weighed against Kant's (1785) categorical imperative of treating humans as ends, not means. As the boundaries between human and AI emotions blur, we may risk treating both as mere mechanistic outputs."
"AEI's emergent empathy presents daunting ethical and legal challenges, especially when considering emergency mental health chatbots. Given the potential for algorithmic bias within chatbots, is it safe to deploy them during high-demand times at suicide hotlines? This raises questions about accountability— who bears responsibility if a chatbot's guidance proves detrimental? Different ethical paradigms provide varied answers (Gal et al. 2022). One kind of response could be informed by utilitarian ethics, evaluating the net benefit: If the bot provides more aid than harm, its implementation is justified, at least for those who are not harmed. Deontology, emphasising moral duties, provides a different answer: If there is a chance that the chatbot might harm even a single individual, it should not be deployed. From a virtue ethics lens, the technology might diminish human potential, regardless of its efficacy in emergencies. The theoretical implications of such emergent empathy underscore the profound responsibility and discretion needed when harnessing AI, particularly for critical mental health support."
💼 Advance Your Career
➵ Join our 4-week AI Governance Training—a live, online, and interactive program designed for professionals who want to accelerate their AI governance career and go beyond standard certifications. Here's what to expect:
➵ The training includes 8 live online lessons with me (90 minutes each) over the course of 4 weeks, totaling 12 hours of live sessions. You'll also receive additional learning materials, quizzes, a training certificate, and 16 CPE credits pre-approved by the IAPP. You can always send me questions or book an office-hours appointment. Groups are small, so it's an excellent opportunity to learn with peers and network.
➵ This is a comprehensive and up-to-date AI governance training focused on AI ethics, compliance, and regulation, covering the latest developments in the field. The program consists of two modules:
↳ Module 1: Legal and ethical implications of AI, risks & harms, recent AI lawsuits, the intersection of AI and privacy, deepfakes, intellectual property, liability, competition, regulation, and more.
↳ Module 2: Learn the EU AI Act in-depth, understand its strengths and weaknesses, and get ready for policy, compliance, and regulatory challenges in AI.
➡️ We offer discounted rates for students, NGO members, and those who are in career transition: get in touch.
➡️ Over 1,000 professionals from 50+ countries have already benefited from our programs. Are you ready?
⏰ Last week to register for the 13th cohort. Check out the training details, read testimonials, and save your spot. I hope to see you there!
*If now isn’t the right time for you, you can sign up for our learning center to receive updates on future training programs along with educational and professional resources.
🎥 AI and the Audiovisual Sector
The report "AI and the audiovisual sector: navigating the current legal landscape" is a valuable resource for anyone working in AI and a critical step to tackling legal issues impacting the audiovisual sector. Important information:
➵ The report was authored by Dr. Malte Baumann, Judit Bayer, Mira Burri, Gianluca Campus, Mark Cole, Kelsey Farish, Philipp Hacker, Elodie Migliore, Jan Bernd Nordemann, Justine Radel-Cormann, Sandra Schmitz-Berndt, and Bart van der Sloot, and covers essential ethical and legal topics at the intersection of AI and the audiovisual sector.
➵ Among the legal fields covered are data protection, copyright, liability, labor law, and constitutional law. It also deals with ethical dilemmas and societal challenges brought by Generative AI.
➵ On copyright-related issues, as evidenced by the growing number of lawsuits in the U.S. (which I've been covering in this newsletter) and a recent decision in Germany (learn more here), there are no straightforward answers. This report tackles the topic and brings welcome clarity to the discussion.
➵ Concluding quote:
"With the expansion of AI usage by citizens, from recreational purposes at home to content restoration and creative assistance, freedom of expression (Art. 10 ECHR) should remain a key concern when regulating AI. The work has already begun. The Council of Europe’s creation of a committee of experts on the impacts of generative AI for freedom of expression in April 2024 is an important step. The committee is tasked with drafting a non-binding Guidance Note on the implications of generative AI for freedom of expression by the end of 2025. According to the draft meeting report from the committee’s meeting in April 2024, the Guidance Note should be framed around benefits and systemic risks. The benefits include: expanding access to information at a larger scale, adapting the information format to the individual, for example making the language simpler and communicating visually, or enabling a better understanding and use of information; increasing the visibility of diverse voices, and providing a suitable platform also for groups and individuals in vulnerable situations. The risks encompass the spread of disinformation, de-skilling of people, digital exclusion, manipulation, cheating, deep fakes, and environmental aspects of foundation models."
➵ As more sector-specific AI reports emerge, it is fitting that the audiovisual sector is not left behind. I'm sincerely pleased that this comprehensive report came out, as it calls attention to important issues affecting the sector and informs policy and regulatory efforts to support creators in various fields.
🫧 Is AI Another Tech Bubble?
Is the current AI wave another tech bubble? According to Luciano Floridi, yes, it is, and his recent paper "Why the AI Hype is Another Tech Bubble" is a must-read. Below, he explains more about tech bubbles:
"A quick comparison between the Dot-Com Bubble and the Cryptocurrency
Bubble shows that the former was larger in scale, longer in duration, and primarily centred in the U.S., while the latter rose and fell more rapidly and was a more global phenomenon. The Dot-Com Bubble saw significant institutional investor participation, while the Cryptocurrency Bubble was initially driven more by retail investors. Unlike purely speculative assets, many dot-coms had actual products, services, and intellectual property, even if overvalued. However, the bubbles fed each other, and shared several core characteristics, to such an extent that there are strong family resemblances, to use Wittgenstein’s famous metaphor:
1️⃣ A disruptive technology at the core. A bubble centres around a technology with the potential to revolutionise multiple industries, other technology (synergetic effect), and the tendency to cause tunnel vision.
2️⃣ Speculation outpacing reality. In a bubble, market excitement and investment outpace the actual development and implementation of the technology, its long-term usefulness, and sustainable profitability.
3️⃣ New valuation paradigms. Traditional financial metrics tend to be discarded in favour of new, unorthodox and often flawed measures of value.
4️⃣ Retail investor participation. Since the Cryptocurrency Bubble, a significant involvement of individual investors, often motivated by FOMO, has become a significant feature.
5️⃣ Regulatory gap and lag. Regulatory frameworks are absent in a bubble and/or struggle to keep pace with technological and market developments."
"Following the previous analysis, there is a compelling argument to be made that the current AI Hype Cycle shares significant similarities with previous tech bubbles, exhibiting the typical characteristics of a tech bubble (...). The rapid advancements in AI, particularly in machine learning, deep learning, and LLMs (or, to be more precise, foundation models), have led to a surge of excitement, investment, and media attention, mutually reinforcing each other and reminiscent of previous tech bubbles, especially the Dot-Com Bubble (...). The release of ChatGPT in November 2022 accelerated this trend, creating a perfect storm (still ongoing at the time of writing, though it seems to be abating) of inflated expectations and speculative investment. This phenomenon bears striking resemblances to the tech bubbles of the past, suggesting that we may indeed be witnessing the formation of an AI bubble."
⚙️ Impossibility of Artificial Inventors
The paper "Impossibility of Artificial Inventors" by Matt Blaszczyk is an excellent read for everyone interested in AI and intellectual property, as well as post-humanists' arguments. Selected quotes:
"It remains true today that the 'individual inventor as crucial to the production of new inventions and innovations.' According to some, this does not result from 'any legislation, statute, or even the Constitution' but rather is the 'collective belief in the narrative itself: that small inventors are crucial to technological innovation and that the patent system should support their activities,' at least notionally protecting them from big corporations. This is despite the fact that the 'canonical story of the lone genius inventor is largely a myth,' since inventions are often a product of group effort, perhaps thus undermining traditional justifications of patents. (...)" (page 17)
"Perhaps it is unsurprising then that philosophers argue that 'our focus must be on properly integrating AI technology into a culture that respects and advances the dignity and well-being of humans, and the nonhuman animals with whom we share the world, rather than on the highly speculative endeavor of integrating the dignity of intelligent machines' into our frameworks. It is similarly understandable that the legal institutional responses have not been eager to abandon the basic assumption of the modern age and, as a matter of patent law, it seems the doctrine will continue to place the 'human causer' in the center (...)" (page 24)
"Dan Burk once called artificial inventorship a 'bizarre and counterproductive' idea decisively precluded by the US law. He was right, and the same proves true in the UK, the EU, Australia, and others. This is not just a doctrinal insight. The attempts to get rid of the human inventor’s notionally central place undermine the theoretical foundations of patent law, but also strike at modern law more broadly, and it is unsurprising they have been rejected in the dicta examined above. Indeed, this is, generally, where jurisprudence ends, and philosophy begins. In this respect, artificial inventorship is at the same time a radical and corrosive idea, wreaking havoc within the legal system, but also a seemingly moderate one, which does not offer any radical alternatives to IP or the modern state, but doubles down on their most problematic features. Indeed, it does not even try to liberate the robots, but merely to remove causative obstacles to obtaining monopolies – ultimately, at the cost of the common good." (pages 32-33)
🎙️ Taming Silicon Valley and Governing AI
If you are interested in AI, particularly in how we can ensure it works for us, you can't miss my live conversation with Gary Marcus. Here's why:
➵ Marcus is one of the most prominent voices in AI today. He is a scientist, best-selling author, and serial entrepreneur known for anticipating many of AI's current limitations, sometimes decades in advance.
➵ In this live talk, we'll discuss his new book "Taming Silicon Valley: How We Can Ensure That AI Works for Us," focusing on Generative AI's most imminent threats, as well as Marcus' thoughts on what we should insist on, especially from the perspective of AI policy and regulation. We'll also talk about the EU AI Act, U.S. regulatory efforts, and the false choice, often promoted by Silicon Valley, between AI regulation and innovation.
➵ This will be the 20th edition of my AI Governance Live Talks, and I invite you to attend live, participate in the chat, and learn from one of the most respected voices in AI today. Don't miss it!
👉 To join the live session, register here. I hope to see you there!
🎬 Find all my previous live conversations with privacy and AI governance experts on my YouTube Channel.
⚖️ Rawlsian Ethics of AI
If you're interested in AI ethics, the paper "Reconstructing AI Ethics Principles: Rawlsian Ethics of AI" by Salla Westerstrand is a must-read. These are the proposed 'Rawlsian ethics guidelines for fair AI':
1️⃣ "Developers and deployers of an AI system must ensure that the AI system does not threaten the basic liberties of any individual.
↳ AI systems should not endanger but support the freedom of thought and liberty of conscience;
↳ AI systems should not compromise but support political liberties and freedom of association, such as the right to vote and to hold public office;
↳ AI systems should not harm but support the liberty and integrity of the person, including freedom from psychological oppression and physical assault and dismemberment;
↳ All AI systems should be aligned with the principle of rule of law.
2️⃣ The use and development of AI systems should not negatively impact people’s opportunities to seek income and wealth. If an AI system is used in distribution of advantageous positions, such as recruitment, performance evaluation, or access to education, it needs to be ensured that:
↳ The tool is trained with non-biased training data, or appropriate tools are used to mitigate the biases in the final product if no non-biased training data is available (data bias mitigation);
↳ The outcome of the use of the tool includes an explanation of the grounds for the outcome it produces (explainability); and
↳ The algorithms used shall encourage neither biased results nor the systematic repetition and amplification thereof in, e.g., the feedback loops of a machine learning system (algorithmic bias mitigation).
*If these conditions cannot be met, AI should not be used in the process.
3️⃣ All inequalities affected by AI systems, such as acquiring a position of power or accumulation of wealth, must be to the greatest benefit of the least advantaged members of society."
➵ This is a fascinating paper, especially for those familiar with Rawls' theory of justice. As AI development advances and AI agents become more prevalent, AI ethics is more important than ever, including as a foundation for AI regulation and policy efforts.
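➵ To make guideline 2's "data bias mitigation" condition more concrete, here is a minimal sketch in Python of a pre-training check for demographic parity in a hiring dataset. The column names, the toy data, and the four-fifths (0.8) threshold are my illustrative assumptions, not something proposed in Westerstrand's paper:

```python
# Illustrative sketch only: a pre-training demographic parity check, in the
# spirit of the "data bias mitigation" condition in guideline 2. Column names,
# data, and the 0.8 threshold are assumptions made for this example.
import pandas as pd

def demographic_parity_ratio(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Return the ratio of the lowest to the highest positive-label rate across groups."""
    rates = df.groupby(group_col)[label_col].mean()
    return rates.min() / rates.max()

# Hypothetical training data for a hiring model.
train = pd.DataFrame({
    "gender": ["f", "f", "f", "m", "m", "m", "m", "f"],
    "hired":  [1,   0,   1,   1,   1,   0,   1,   0],
})

ratio = demographic_parity_ratio(train, group_col="gender", label_col="hired")
if ratio < 0.8:  # the common "four-fifths" rule of thumb
    print(f"Warning: disparity ratio {ratio:.2f}; mitigate the bias or do not deploy.")
else:
    print(f"Disparity ratio {ratio:.2f} is within the illustrative threshold.")
```

In the paper's terms, a failed check like this one would trigger the asterisked condition above: if the bias cannot be mitigated, AI should not be used in the process.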
📚 AI Book Club: What Are You Reading?
📖 More than 1,700 people have joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The last book we recommended was Digital Empires: The Global Battle to Regulate Technology by Anu Bradford.
📖 Ready to discover your next favorite read? See our previous reads and join the book club here.
🩺 The Ethics of AI in Health Care
The paper "The Ethics of AI in Health Care: An Updated Mapping Review" by Jessica Rose Morley and Luciano Floridi is great for understanding ethical issues in AI-powered healthcare. Three core ethical concerns everybody should be familiar with:
1️⃣ “Epistemic concerns: inconclusive, inscrutable, and misguided evidence
(...) Just because an AI model can recognize a pattern, does not automatically make the pattern meaningful nor the action it informs clinically efficacious or safe. In brief, AI models trained on poor-quality data can produce inaccurate or discriminatory outputs that may lead to patient fatalities (...); models that are initially well-trained and properly validated may introduce patient safety risks post-deployment due to the negative effects of dataset drift or even data poisoning (...); and large language models may regurgitate or hallucinate harmful misinformation (...)"
2️⃣ “Normative concerns: unfair outcomes and transformative effects
(...) normative ethical concerns focus on how the integration of AI into healthcare systems might fundamentally alter the nature of care delivery. Specifically, by focusing on how the implementation of AI may reconfigure relationships between different parts of the healthcare ecosystem, normative ethical implications stress AI’s potential to (a) change how health and illness are conceptualized, categorized, and managed (...); (b) reshape the embodied and affective experiences of both patients and clinicians; (c) reinforce social inequalities (...); and (d) erode trust in human healthcare professionals (...)"
3️⃣ “Concerns related to traceability
The preceding discussion of the normative ethical risks associated with the increasing use of AI in healthcare highlighted how AI has the potential to transform the delivery of healthcare fundamentally. At the core of this transformation process, is the development of a far more complex network consisting of humans and non-human agents. The complexity of this resultant network makes responsibility attribution difficult because a fault, that may result in a patient safety issue, may have multiple different causes that are hard to trace (...). This is concerning because, according to Felder (2021), the transparent organization of responsibility in a healthcare system serves at least two purposes: (a) to incentivize healthcare providers to act ethically in all circumstances; and (b) to ensure the foundation of patient trust in the reliability of healthcare providers is sufficiently solid for healthcare to flourish. (...)"
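➵ To make the "dataset drift" risk from the first quote more tangible, here is a minimal post-deployment monitoring sketch in Python: it compares the distribution of a single input feature at training time against live inputs using a two-sample Kolmogorov-Smirnov test. The synthetic data, the blood-pressure framing, and the 0.05 significance threshold are my illustrative assumptions, not from the paper:

```python
# Illustrative sketch only: flagging post-deployment dataset drift with a
# two-sample Kolmogorov-Smirnov test. Data and threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=120.0, scale=15.0, size=1000)  # e.g., systolic blood pressure at training time
live_feature = rng.normal(loc=135.0, scale=15.0, size=500)    # live inputs whose distribution has shifted upward

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:
    print(f"Drift detected (KS statistic {stat:.3f}, p = {p_value:.4f}): re-validate the model before further use.")
else:
    print("No significant drift detected in this feature.")
```

A model that passed validation at deployment can still fail a check like this months later, which is exactly why the paper treats drift as an ongoing epistemic concern rather than a one-time testing problem.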
🔥 Job Opportunities in AI Governance
Below are 10 new AI Governance positions posted in the last few days. This is a competitive field, so if you find a relevant opportunity, apply today:
🇺🇸 Barclays: AI Governance VP - apply
🇭🇰 Hays: Compliance Manager, Data Privacy & AI Governance - apply
🇬🇧 Swift: AI Governance Lead - apply
🇬🇧 Royal Caribbean: Data Privacy & AI Governance - apply
🇹🇭 Agoda: Data Privacy & AI Governance Program Specialist - apply
🇮🇪 Analog Devices: Senior Manager, AI Governance - apply
🇬🇧 JPMorganChase: AI Governance Lead, Data Management - apply
🇺🇸 Lenovo: Director, AI Governance - apply
🇸🇰 Swiss Re: Senior Data & AI Governance Architect - apply
🇬🇧 ByteDance: Senior Counsel, AI Governance & Tech Policy - apply
🔔 More job openings: subscribe to our AI governance & privacy job boards and receive our weekly email with job opportunities. Good luck!
🌧️ It Was Raining
About Humans and Machines | Short story #1
It was raining, but he still went for a walk.
There is something about the rain that makes our inner noises quieter. When nature is crying on us from the outside, there is suddenly more space for new feelings to appear. He needed internal time and space, and wet clothes would provide the ideal distraction.
He still could not understand what had happened last year. There was anger, as nobody should be treated that way. But there was also despair and a profound sense of injustice. He knew that his soft personality, contained and agreeable, had a part in the way he was destroyed inside.
He would never be the same person after what had happened. Strangely, that might be good. It might be the world signaling that it's time for a change.
👉 Continue reading this week's short story here.
🙏 Thank you for reading!
If you have comments on this edition, write to me, and I'll get back to you soon.
AI is more than just hype—it must be properly governed. If you found this edition valuable, consider sharing it with friends and colleagues to help spread awareness about AI policy, compliance, and regulation. Thank you!
Have a great day.
All the best, Luiza