👋 Hi, Luiza Jarovsky here. Welcome to the 123rd edition of this newsletter on AI policy, compliance & regulation, read by 32,500+ subscribers in 145+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI Governance Professional Edition, I discuss some of the AI Act's foundations & building blocks that are often left out of mainstream discussions but are essential to understanding the AI Act's enforcement structure. Paid subscribers received it yesterday and can access it here:
🏗️ AI Act: NLF, CE Marking & Standards. If you are an AI governance professional, upgrade to paid, get immediate access to my exclusive analyses unpacking AI compliance & regulation, and stand out in the AI governance arena.
🏛️ Top 10 AI Governance Papers
Below is my selection of the top 10 AI governance papers published in recent months. They are an excellent way to expand your knowledge; make sure to download, read & share:
📄 Title: "The Great Scrape: The Clash Between Scraping and Privacy"
✏️ Authors: Daniel Solove & Woodrow Hartzog
🔍 Read it here.
📄 Title: "Brave New World? Human Welfare and Paternalistic AI"
✏️ Author: Cass Sunstein
🔍 Read it here.
📄 Title: "The Law of AI is the Law of Risky Agents without Intentions"
✏️ Authors: Ian Ayres & Jack M. Balkin
🔍 Read it here.
📄 Title: "Theory Is All You Need: AI, Human Cognition, and Decision Making"
✏️ Authors: Teppo Felin & Matthias Holweg
🔍 Read it here.
📄 Title: "Anthropomorphising machines and computerising minds: the crosswiring of languages between Artificial Intelligence and Brain & Cognitive Sciences"
✏️ Authors: Luciano Floridi & Kia Nobre
🔍 Read it here.
📄 Title: "The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence"
✏️ Authors: Peter Slattery, Alexander Saeri, Emily Grundy, Jess Graham, Michael Noetel, Risto Uuk, James Dao, Soroush J. Pour, Stephen Casper & Neil Thompson
🔍 Read it here.
📄 Title: "Digital Democracy in the Age of Artificial Intelligence"
✏️ Authors: Claudio Novelli & Giulia Sandri
🔍 Read it here.
📄 Title: "On the Antitrust Implications of Embedding Generative AI in Core Platform Services"
✏️ Authors: Thomas Höppner & Steffen Uphues
🔍 Read it here.
📄 Title: "Consent and Compensation: Resolving Generative AI’s Copyright Crisis"
✏️ Authors: Frank Pasquale & Haochen Sun
🔍 Read it here.
📄 Title: "AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?"
✏️ Author: John Wihbey
🔍 Read it here.
👉 If you want to upskill and advance your AI Governance career, register for our 4-week EU AI Act Bootcamp (led by me) and join 875+ people who have participated in our training programs. With the 🔥 AI Governance package, you can enroll in both of our AI Governance Bootcamps and save $180 (20% off).
🇿🇦 [AI Policy] South Africa's AI Policy Framework
South Africa recently published its "South Africa National Artificial Intelligence Policy Framework," and it's an excellent read for everyone in AI governance.
➡️ Quotes:
"For South Africa to exploit the full potential of AI, the country need to carefully take into consideration ethical, social, and economic implications, ensuring that AI benefits are broadly shared, and risks are managed effectively. A cornerstone of this framework is the commitment to ethical AI development and use. It integrates comprehensive guidelines to ensure AI systems are transparent, accountable, and designed to promote fairness while mitigating biases. This includes establishing robust data governance frameworks to protect privacy and enhance data security, alongside setting standards for AI transparency and explainability to foster trust among users and stakeholders."
"Developing a comprehensive AI policy for South Africa is crucial amidst rapid global advancements in AI technology, offering significant opportunities for economic growth, societal improvement, and positioning the country as a leader in innovation. However, South Africa faces challenges such as historical inequalities, digital divides, and outdated regulatory frameworks that hinder widespread AI adoption. Overcoming these obstacles requires regulatory reforms and policies to encourage targeted investments in strategic areas initially in education and digital infrastructure to ensure equitable access and to maximize AI’s transformative potential. By aligning with global AI governance standards and addressing socioeconomic disparities, South Africa can leverage AI to drive economic transformation, foster social equity, and enhance its global competitiveness in AI innovation."
"In addition to ethical considerations, the framework outlines key pillars such as robust data governance frameworks, infrastructure enhancement, and significant investments in research and innovation. These pillars are crucial for creating an enabling environment where AI technologies can thrive and contribute meaningfully to sectors such as healthcare, education, and public administration."
👉 Read the document here.
🇦🇺 [AI Policy] Australia: AI Use in Government
The Australian Government published its "Policy for the Responsible Use of AI in Government," and it's a great read for everyone in AI governance.
➡️ Quotes:
"The adoption of AI technology and capability varies across the APS. This policy is designed to unify government’s approach by providing baseline requirements on governance, assurance and transparency of AI. This will remove barriers to government adoption by giving agencies confidence in their approach to AI and incentivising safe and responsible use for public benefit."
"One of the biggest challenges to the successful adoption of AI is a lack of public trust around government’s adoption and use. Lack of public trust acts as a handbrake on adoption. The public is concerned about how their data is used, a lack of transparency and accountability in how AI is deployed and the way decision-making assisted by these technologies affects them. This policy addresses these concerns by implementing mandatory and optional measures for agencies, such as monitoring and evaluation of performance, being more transparent about their AI use and adopting standardised governance."
"It is strongly recommended that agencies implement: • AI fundamentals training for all staff, aligned to the approach under the policy guidance, within 6 months of this policy taking effect • Additional training for staff in consideration of their roles and responsibilities, such as those responsible for the procurement, development, training and deployment of AI systems."
👉 Read the report prepared by the Australian Government's Digital Transformation Agency here.
📋 [Report] The State of AI in the Pacific Islands
The AI Asia Pacific Institute published "The State of Artificial Intelligence in the Pacific Islands," and it's a great read for everyone in AI governance.
Nations & territories covered:
🇦🇺 Australia
🇨🇰 Cook Islands
🇫🇲 Federated States of Micronesia
🇫🇯 Fiji
🇵🇫 French Polynesia
🇰🇮 Kiribati
🇲🇭 Marshall Islands
🇳🇷 Nauru
🇳🇨 New Caledonia
🇳🇿 New Zealand
🇳🇺 Niue
🇵🇼 Palau
🇵🇬 Papua New Guinea
🇼🇸 Samoa
🇸🇧 Solomon Islands
🇹🇴 Tonga
🇹🇻 Tuvalu
🇻🇺 Vanuatu
➡️ Quotes:
"It is worth noting that the Pacific Islands face unique challenges due to their relative isolation, small size, limited resources, and vulnerability to natural disasters. (...) However, AI and digital technologies offer significant and unique opportunities for the Pacific Islands to overcome these challenges and improve the lives of their citizens. For example, AI can be used to enhance disaster response and recovery, as demonstrated by the partnership between the UNCDF and Tractable in Fiji. Additionally, digital transformation can improve access to financial services, as seen in the Palau-Ripple partnership, and boost the tourism sector, as New Zealand is supporting through the Pacific Digital Champions Training Program."
"The Pacific Islands that are scored show a significant disparity compared to Australia and New Zealand, with a 13.44 difference in total score between New Zealand and Nauru, the highest-ranking country from the Pacific Islands. Aside from Nauru and Fiji, the remaining countries in the Pacific Islands are ranked outside the top 100 and generally show lower levels of development across all pillars. Several factors contribute to this, including the geographical dispersion and small populations of these islands, and the distance from major international markets. Moreover, the exclusion of seven islands from the index due to the lack of data availability underscores the difficulties in collecting ICT related data in this region."
"The Pacific Islands in general have not yet developed systematic and comprehensive government-led AI governance and ethics frameworks, with their efforts primarily directed towards broader digital and ICT initiatives (...). Nevertheless, the benefits and threats posed by AI technologies affect the Pacific Islands, underscoring the need for their own AI strategies, governance and ethical frameworks. By examining and adapting the best practices and international efforts of the EU, US, China, regional players, and international organizations, the Pacific Islands can leverage their position as latecomers to selectively adopt advantageous strategies in their efforts to promote each island’s unique characteristics and address specific challenges."
👉 Read the report here.
🎙️ [AI Live Talks] Live Session with Max Schrems
If you are interested in the intersection of privacy and AI, don't miss my live talk with Max Schrems (our second one!). Register now. Here's why you should join us live:
➵ If you have been reading this newsletter for some time, you know that my view is that common AI practices - which became ubiquitous in the current Generative AI wave - are unlawful from a GDPR perspective. Yet, we have not seen a clear response from data protection authorities.
➵ Max - the Chairman of noyb and one of the world's leading privacy advocates - has been a tireless advocate for privacy rights. More recently, he and his team have also been pioneers in defending privacy rights in the context of AI-related new risks and challenges.
➵ In this live talk, we'll discuss noyb's recent legal actions in this area, including their complaints against Meta & X/Twitter, legitimate interest in the context of AI, and more.
➵ This is my second talk with Max. The first one, in which we discussed GDPR enforcement challenges, was watched by thousands of people (live and on-demand). You can find it here.
👉 If you are interested in privacy and AI, or if you work in AI policy, compliance & regulation, you can't miss it. To participate, register here.
💡 [Non-Profits] Responsible AI Advocacy
The AI Act has entered into force, and among its goals is the promotion of "human-centric & trustworthy AI." Below are six excellent non-profits in the field that everyone in AI should know:
➵ The Algorithmic Justice League, founded by Joy Buolamwini
🔍 A glimpse into their work: "Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem" by Sasha Costanza-Chock, Emma Harvey, Deborah Raji, Martha C. & Joy Buolamwini. Read it here.
➵ AI Now Institute, founded by Kate Crawford & Meredith Whittaker
🔍 A glimpse into their work: "Climate Justice & Labor Rights" by Tamara Kneese. Read it here.
➵ HumaneIntelligence, founded by Rumman Chowdhury
🔍 A glimpse into their work: "Generative AI Red Teaming Challenge: Transparency Report 2024" by Victor Storchan, Ravin Kumar, Rumman Chowdhury, Seraphina Goldfarb-Tarrant & Sven Cattell. Read it here.
➵ AlgorithmWatch, founded by Matthias Spielkamp
🔍 A glimpse into their work: "Making sense of the Digital Services Act: How to define platforms' systemic risks to democracy" by Michele Loi. Read it here.
➵ Data & Society Research Institute, founded by Danah Boyd
🔍 A glimpse into their work: "Enrolling Citizens: A Primer on Archetypes of Democratic Engagement with AI" by Wanheng Hu & Ranjit Singh. Read it here.
➵ Center for AI and Digital Policy, founded by Marc Rotenberg
🔍 A glimpse into their work: "AI and democratic values Index 2023." Read it here.
📄 [AI Research] Model Collapse
The paper "AI models collapse when trained on recursively generated data" by Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson & Yarin Gal is a must-read for everyone in AI.
➡️ Quotes & comments:
"The development of LLMs is very involved and requires large quantities of training data. Yet, although current LLMs, including GPT-3, were trained on predominantly human-generated text, this may change. If the training data of most future models are also scraped from the web, then they will inevitably train on data produced by their predecessors. In this paper, we investigate what happens when text produced by, for example, a version of GPT forms most of the training dataset of following models. What happens to GPT generations GPT-{n} as n increases? We discover that indiscriminately learning from data produced by other models causes ‘model collapse’—a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time."
"Our evaluation suggests a ‘first mover advantage’ when it comes to training models such as LLMs. In our work, we demonstrate that training on samples from another generative model can induce a distribution shift, which—over time—causes model collapse. This in turn causes the model to misperceive the underlying learning task. To sustain learning over a long period of time, we need to make sure that access to the original data source is preserved and that further data not generated by LLMs remain available over time. The need to distinguish data generated by LLMs from other data raises questions about the provenance of content that is crawled from the Internet: it is unclear how content generated by LLMs can be tracked at scale. (...)"
➡️ As I've been discussing in this newsletter, most general-purpose AI providers rely on legitimate interest as the lawful ground to process data - including personal data - to train their AI models & systems. When attempting to justify their legitimate interest grounds, these providers often cite the collective & societal benefits to be reaped from their AI. (I disagree with the legitimate interest argument from a data protection perspective; you can find some of my recent comments on the topic in this newsletter's archive, such as “The Elephant in the Room” - link.)
➡️ But leaving the data protection context aside for a moment, there is a general interest in preserving the quality of these models from ethical, environmental, and fairness perspectives (at least), not to mention the loss of human, technical & computational resources should most existing models collapse. Perhaps regulation should also ensure that AI training, as a rule, preserves the sustainability of present and future models.
👉 Link to the paper here.
🚀 Train, Update & Upskill your AI Governance Team
I would welcome the opportunity to:
➵ Give a talk about the latest developments in AI, tech & privacy, discussing emerging compliance & governance challenges in these areas;
➵ Coordinate private cohorts of our AI Bootcamps for your team (15+ people).
👉 Get in touch with me here.
🔥 [Job Openings] AI Governance is HIRING
Below are 10 new AI Governance positions posted in the last few days. Bookmark, share & be an early applicant:
1. 🇳🇱 Nebius AI: Privacy and AI Governance Manager - apply
2. 🇮🇪 Barden: AI Governance Specialist - apply
3. 🇩🇪 adesso SE: IT-Consultant generative AI Governance - apply
4. 🇬🇧 ByteDance: Senior Counsel, AI Governance & Tech Policy - apply
5. 🇺🇸 Children's National Hospital: Senior Manager AI Governance - apply
6. 🇺🇸 Compunnel Inc.: AI Governance Strategy Architect - apply
7. 🇮🇪 Analog Devices: Senior Manager, AI Governance - apply
8. 🇺🇸 M&T Bank: AI Governance Consultant - apply
9. 🇺🇸 Barclays: AI Governance & Oversight - apply
10. 🇺🇸 Perficient: Program Manager, AI Governance - apply
👉 For more AI governance and privacy job opportunities, subscribe to our weekly job alert. Good luck!
🎓 [AI Governance] Upskill & Advance your Career
As AI regulation expands worldwide, it's a great time to invest in your AI governance career. 875+ people have participated in our training programs - don't miss them! Save your spot in the autumn cohorts of our AI Bootcamps:
1. Emerging Challenges in AI, Tech & Privacy (4 weeks)
🗓️ Tuesdays, September 3 to 24 - learn more & register
2. The EU AI Act Bootcamp (4 weeks)
🗓️ Wednesdays, September 4 to 25 - learn more & register
🔥 Tip: Save $180 with our AI Governance Package - join both Bootcamps at 20% off.
🙏 Thank you for reading
➵ If you have comments on this edition, write to me, and I'll get back to you soon.
➵ If you enjoyed this edition, consider sending it to friends & colleagues and help me spread awareness about AI policy, compliance & regulation. Thank you!
All the best, Luiza