👋 Hi, Luiza Jarovsky here. Welcome to the 140th edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 37,000+ subscribers in 150+ countries. I hope you enjoy reading it as much as I enjoy writing it!
💎 In this week's AI Governance Professional Edition, I’ll discuss the intersection of AI & copyright in the context of the AI Act. Paid subscribers will receive it tomorrow. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly newsletter editions (this free newsletter + the AI Governance Professional Edition) and stay ahead in the fast-paced field of AI governance.
🗓️ Two weeks left to register! If you are transitioning to AI governance and are ready to invest 5 hours a week in learning & upskilling, our exclusive 4-week AI Governance Training is for you. Join 1,000+ professionals from 50+ countries who have accelerated their careers through our programs. Save your spot in the November cohort:
📜 Fundamental Rights & AI
AI-powered technologies can harm fundamental rights, and everyone working in AI should learn more about the topic. Below are 10 excellent resources to help you dive deeper. Download, read, and share:
1️⃣ From Global Standards to Local Safeguards: The AI Act, Biometrics, and Fundamental Rights (2024) by Federica Paolucci
🔎 Read it here.
2️⃣ Assessing the (Severity of) Impacts on Fundamental Rights (2024) by Gianclaudio Malgieri and Cristiana Santos
🔎 Read it here.
3️⃣ The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template (2024) by Alessandro Mantelero
🔎 Read it here.
4️⃣ Advancing the Protection of Fundamental Rights Through AI Regulation: How the EU and the Council of Europe are Shaping the Future (2024) by Francesco Paolo Levantino and Federica Paolucci
🔎 Read it here.
5️⃣ Algorithmic Management and a New Generation of Rights at Work, Institute of Employment Rights (2024) by Joe Atkinson and Philippa Collins
🔎 Read it here.
6️⃣ Considering Fundamental Rights in the European Standardisation of Artificial Intelligence: Nonsense or Strategic Alliance? (2023) by Marion Ho-Dac
🔎 Read it here.
7️⃣ How to Think About Freedom of Thought (and Opinion) in the Age of Artificial Intelligence (2023) by Sue Anne Teo
🔎 Read it here.
8️⃣ When is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis (2023) by Francesca Palmiotto
🔎 Read it here.
9️⃣ Artificial Intelligence, Big Data, Algorithmic Management, and Labor Law (2023) by Pauline Kim
🔎 Read it here.
🔟 The Future AI Act and Facial Recognition Technologies in Public Spaces: Nice to Have or Strictly Necessary? (2023) by Catherine Jasserand-Breeman
🔎 Read it here.
⚖️ New AI Copyright Lawsuit
Dow Jones and the New York Post are suing the AI company Perplexity over copyright infringement. AI copyright lawsuits are piling up and evolving in their allegations. Don't miss the 'input,' 'output,' and 'hallucination' claims below:
"Perplexity’s conduct violates Plaintiffs’ exclusive rights under the Copyright Act in various ways. First, Perplexity’s actions at the input stage – copying without authorization massive amounts of Plaintiffs’ copyrighted works for inclusion into Perplexity’s RAG index (“inputs”) – constitute completed and massive copyright violations without even a colorable fair use."
"Nevertheless, the outputs of Perplexity’s products also unlawfully infringe on Plaintiffs’ copyrights in several ways. Perplexity’s “answers” to users’ queries often include full or partial verbatim reproductions of Plaintiffs’ news, analysis, and opinion articles. And worse, users can access verbatim reproductions of Plaintiffs’ content more frequently by purchasing a subscription to Perplexity’s premium service, “Perplexity Pro.” Other times, Perplexity turns Plaintiffs’ copyrighted articles into paraphrases or summaries of those copyrighted works that similarly serve as substitutes for accessing Plaintiffs’ copyrighted works on Plaintiffs’ own websites and/or licensed websites. The use of Plaintiffs’ copyrighted content to generate any such substitutes is not a fair use."
"In addition to using Plaintiffs’ copyrighted work to develop a substitute product that reproduces or imitates Plaintiffs’ original content, Perplexity also harms Plaintiffs’ brands by falsely attributing to Plaintiffs certain content that Plaintiffs never wrote or published. Not infrequently, if Perplexity is asked about what Plaintiffs’ publications reported, Perplexity “answers” with false information. AI developers euphemistically call these factually incorrect outputs “hallucinations.” Perplexity’s hallucinations can falsely attribute facts and analysis to content producers like Plaintiffs, sometimes citing an incorrect source, and other times simply inventing and attributing to Plaintiffs fabricated news stories."
🚧 AI and Worker Well-being
The U.S. Department of Labor published the document "AI and Worker Well-being: Principles and Best Practices for Developers and Employers," and it's a must-read for everyone, especially employers. Eight key principles:
1️⃣ Centering Worker Empowerment
"Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace."
2️⃣ Ethically Developing AI
"AI systems should be designed, developed, and trained in a way that protects workers."
3️⃣ Establishing AI Governance and Human Oversight
"Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace."
4️⃣ Ensuring Transparency in AI Use
"Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace."
5️⃣ Protecting Labor and Employment Rights
"AI systems should not violate or undermine workers’ right to organize, health and safety rights, wage and hour rights, and anti-discrimination and antiretaliation protections."
6️⃣ Using AI to Enable Workers
"AI systems should assist, complement, and enable workers, and improve job quality."
7️⃣ Supporting Workers Impacted by AI
"Employers should support or upskill workers during job transitions related to AI."
8️⃣ Ensuring Responsible Use of Worker Data
"Workers’ data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly."
╰┈➤ My comments:
➵ This is an essential document, especially given the accelerated pace of AI development and deployment, including in the workplace, where workers' rights and labor law have so far received little attention.
➵ AI developers should have labor law and workers' rights in mind when building AI systems that will be used in the workplace. Additional guardrails might be required.
➵ Employers should be aware of their ethical and legal duties if they decide to use AI in the workplace. AI-powered systems are not "just another technology" and present specific risks that should be tackled before deployment, especially in the workplace.
💼 Advance Your AI Governance Career
➵ Join our 4-week AI Governance Training: a live, online, and interactive program designed and led by me for professionals who want to accelerate their AI governance careers and are ready to dedicate 5 hours a week to the program (live lessons + self-learning). Here's what to expect:
➵ The training includes 8 live online lessons with me (90 minutes each) over the course of 4 weeks, totaling 12 hours of live sessions. You'll also receive additional learning material, quizzes, 16 CPE credits pre-approved by the IAPP, and a training certificate. You can always send me your questions or book an office-hours appointment with me. Groups are small, so it's an excellent opportunity to learn with peers and network.
➵ This is a comprehensive and up-to-date AI governance training focused on AI ethics, compliance, and regulation, covering the latest developments in the field. The program consists of two modules:
Module 1: Legal and ethical implications of AI, risks & harms, recent AI lawsuits, the intersection of AI and privacy, deepfakes, intellectual property, liability, competition, regulation, and more.
Module 2: Learn the EU AI Act in-depth, understand its strengths and weaknesses, and get ready for policy, compliance, and regulatory challenges in AI.
➡️ More than 1,000 professionals from 50+ countries have already benefited from our programs. Are you ready?
🗓️ Only 2 weeks left to register for the November cohort! Check out the training details, read testimonials, and save your spot here.
I hope to see you there!
📄 Regulation & Innovation
The paper "The False Choice Between Digital Regulation and Innovation" by Anu Bradford is a must-read for everyone in AI governance, offering provocative insights on AI regulation. Selected quotes:
"Of course, all digital regulation is not beneficial. But neither is all innovation. While many techno-optimists herald the revolutionary nature of digital technologies, others question whether today’s leading tech companies are producing truly welfare-enhancing innovations that are leading to meaningful technological progress and economic growth, or enhancing the human experience. A growing number of technologists, investors, journalists, and politicians are criticizing tech companies’ business models that rely on the exploitation of internet users’ data, asking whether those digital services ought to be considered “innovations” that are worth shielding from regulation. In reassessing tech regulation, the EU should therefore also think more carefully about innovation, including what kind of innovation its tech regulation ought to advance. This includes the EU asking whether it even wants to nurture a 'European Google' if that entails embracing a business model that is based on extracting user data in ways that contradict the EU’s steadfast commitment to protect European citizens from such exploitation."
"The discussion also offers lessons for the US or any other government considering greater government oversight of its tech industry. If the policymakers and various stakeholders in the US understand that the country’s technological progress and culture of innovation are not tied to its lax regulatory approach, they are likely to feel more comfortable pursuing regulatory reforms that the American people have increasingly come to support. This Article has argued that any adjustment in the US towards the European regulatory regime—or the widespread emulation of that regime across the world more generally—would not, as a rule, set the US back in terms of innovation. Protecting internet users’ data privacy, regulating tech giants’ anticompetitive behavior, calling for more platform accountability over harmful online content, or insisting on ethical AI development would not dismantle the dynamic capital markets in the US, repeal its entrepreneurship-friendly bankruptcy laws, or discourage global tech talent from migrating to the country."
🔒 Guidelines on Securing AI Systems
Singapore takes the lead in AI governance again. The Cyber Security Agency of Singapore released its Guidelines on Securing AI Systems, and everyone developing or deploying AI should read them. Selected quotes:
1️⃣ Take a lifecycle approach
"As with good cybersecurity practice, CSA recommends that system owners take a lifecycle approach to consider security risks. Hardening only the AI model is insufficient to ensure a holistic defence against AI related threats. All stakeholders involved across the lifecycle of an AI system should seek to better understand the security threats and their potential impact on the desired outcomes of the AI system, and what decisions or trade-offs will need to be made. The AI lifecycle represents the iterative process of designing an AI solution to meet a business or operational need. As such, system owners will likely revisit the planning and design, development, and deployment steps in the lifecycle many times in the delivery of an AI solution."
2️⃣ Start with risk assessment
"Given the diversity of AI use cases, there is no one-size-fits-all solution to implementing security. As such, effective cybersecurity starts with conducting a risk assessment. This will enable organisations to identify potential risks, priorities, and subsequently, the appropriate risk management strategies. A fundamental difference between AI and traditional software is that while traditional software relies on static rules and explicit programming, AI uses machine learning and neural networks to autonomously learn and make decisions without the need for detailed instructions for each task. As such, organisations should consider conducting risk assessments more frequently than for conventional systems, even if they generally base their risk assessment approach on existing governance and policies. These assessments may also be supplemented by continuous monitoring and a strong feedback loop."
3️⃣ Guidelines for securing AI systems
╰┈➤ "Planning and design
➵ Raise awareness and competency on security risks
➵ Conduct security risk assessments
╰┈➤ Development
➵ Secure the supply chain
➵ Consider security benefits and trade-offs when selecting the appropriate model to use
➵ Identify, track and protect AI-related assets
➵ Secure the AI development environment
╰┈➤ Deployment
➵ Secure the deployment infrastructure and environment of AI systems
➵ Establish incident management procedures
➵ Release AI systems responsibly
╰┈➤ Operations and Maintenance
➵ Monitor AI system inputs
➵ Monitor AI system outputs and behaviour
➵ Adopt a secure-by-design approach to updates and continuous learning
➵ Establish a vulnerability disclosure process
╰┈➤ End of Life
➵ Ensure proper data and model disposal"
🎙️ AI Regulation Around the World
If you are interested in AI regulation, you can't miss my 1-hour conversation with Raymond Sun. We spoke live last week about AI regulation in 🇦🇺 🇨🇳 🇪🇬 🇮🇳 🇯🇵 🇲🇽 🇳🇬 🇸🇬 🇹🇷 🇦🇪, and the recording is now available. Access it here.
🎬 This was my 19th live talk and an extremely informative exchange. I recommend it to everyone looking to understand global regulatory trends in AI beyond the EU and the U.S.
🎬 Find all my previous live conversations with privacy & AI governance experts on my YouTube Channel.
📋 Stopping Big Tech from Becoming Big AI
Many people missed the excellent report "Stopping Big Tech from Becoming Big AI" by Max von Thun and Daniel Hanley, a great read for everyone interested in AI & competition. Important info:
1️⃣ “A set of high-level principles for regulatory intervention serve as the foundation for the report’s recommendations. These principles include:
➵ preserving market diversity;
➵ fostering fair competition;
➵ emphasizing structure;
➵ adopting a proactive (rather than reactive) strategy;
➵ regulating where necessary;
➵ ensuring regulators have the tools they need to do their jobs.”
2️⃣ “In particular, our report calls on governments and regulators to:
➵ ensure that new ex-ante digital competition regimes are ready to respond to emerging anti-competitive threats in AI;
➵ block mergers and nullify existing exclusive partnerships that unfairly limit competition;
➵ break up existing concentrations of power across the AI technology stack and target existing unfair market practices more generally;
➵ guarantee access to essential inputs such as computing power by imposing non-discrimination obligations on dominant firms and applying structural separation where necessary;
➵ empower businesses and consumers to switch providers by imposing data portability and interoperability requirements on cloud and AI services.”
3️⃣ “Above all, it is clear that protecting and promoting competition in AI – and in digital markets more generally – will require a cross-governmental approach, in two senses of the term.
➵ First, the different constituent parts of government – competition agencies, consumer protection authorities, data privacy regulators – must work collectively to regulate AI and counter the concentrated economic power of a few gatekeepers.
➵ Second, governments around the world need to work together in taking on this ambitious task, given the transnational nature of the threat."
╰┈➤ My comment: People often confuse technological innovation (e.g., the current generative AI wave) with full technological democratization, which has not happened, largely because computing power, data, capital, ecosystems, and technical expertise remain concentrated in a few companies (read the report's first chapter). This is an important document for understanding these macro trends in AI & competition and the potential remedies to avoid 'Big AI.'
📚 AI Book Club: What Are You Reading?
📖 More than 1,500 people have joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The last book we recommended was Taming Silicon Valley: How We Can Ensure That AI Works for Us by Gary Marcus.
📖 Ready to discover your next favorite read? See the book list and join the book club here.
🛠️ G7 Toolkit for AI in the Public Sector
The OECD and UNESCO have published the "G7 Toolkit for AI in the Public Sector," a must-read for everyone in AI governance, particularly considering the OECD's global influence in this field. Important information:
1️⃣ What's the toolkit, and how was it prepared?
"The toolkit proposed in this document aims to support and guide governments in developing, deploying, and using AI in the public sector in a safe, secure, and trustworthy manner. The toolkit leverages the information collected through a purposely conceived questionnaire for G7 members, as well as existing work by international organisations and initiatives such as the Organisation for Economic Cooperation and Development (OECD) and the recently integrated Global Partnership on Artificial Intelligence (GPAI), as well as the United Nations Educational, Scientific and Cultural Organisation (UNESCO)."
2️⃣ What's the toolkit's goal?
The goal is to provide guidance for governments in the following contexts:
➵ "Assessing relevance of AI in and for specific domains in the public sector;
➵ Identifying the skills, competencies, and profiles needed to ensure the strategic and responsible use of AI in the public sector;
➵ Providing an overview of the policies that may be needed to guide and coordinate the strategic and responsible use of AI in the public sector, including also by facilitating public-private collaboration."
3️⃣ What are the toolkit's 7 key messages?
➵ "Establish clear strategic objectives and action plans in line with expected benefits;
➵ Include the voices of users in shaping strategies and implementation;
➵ Overcome siloed structures in government for effective governance;
➵ Establish robust frameworks for the responsible use of AI;
➵ Improve scalability and replicability of successful AI initiatives;
➵ Enable a more systematic use of AI in and by the public sector;
➵ Adopt an incremental and experimental approach to the deployment and use of AI in and by the public sector."
🔥 Job Opportunities in AI Governance
Below are 10 new AI governance positions posted in the last few days. This is a competitive field: if you see a relevant opportunity, apply today:
🇺🇸 KPMG US: Senior Director, AI and Data Governance - apply
🇹🇷 Mastercard: Manager, AI Governance - apply
🇨🇿 Deutsche Börse: AI Governance Specialist - apply
🇨🇭 Xebia: Data & AI Governance Consultant - apply
🇮🇹 BIP: AI Governance Specialist - apply
🇺🇸 Sony Interactive Entertainment: Head of AI Governance - apply
🇨🇦 TD: Manager, AI Governance - apply
🇪🇸 Zurich Insurance: AI Governance Architect - apply
🇺🇸 World Economic Forum: Head of Data and AI Innovation - apply
🇬🇧 Meta: Research Scientist Intern, Responsible AI (PhD) - apply
🔔 More job openings: subscribe to our AI governance & privacy job boards and receive our weekly email with job opportunities. Good luck!
🙏 Thank you for reading!
If you have comments on this edition, reply to this email, and I'll get back to you soon.
AI is more than just hype—it requires proper governance. If you found this edition valuable, consider sharing it with friends and colleagues to help spread awareness about AI policy, compliance, and regulation. Thank you!
Have a great day.
All the best, Luiza