👋 Hi, Luiza Jarovsky here. Welcome to the 121st edition of this newsletter on AI policy, compliance & regulation, read by 31,700+ subscribers in 145+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI Governance Professional Edition, I discuss how regulators are approaching the topic of open-source AI. Paid subscribers received it yesterday and can access it here: 🔓Regulating Open-Source AI. If you are an AI governance professional, upgrade to paid, get immediate access to my analyses unpacking AI compliance & regulation, and stand out in the AI governance arena.
🏆 The Top 5 AI Act Guides
The AI Act entered into force on August 1st (read my article about it), and everyone in AI should be familiar with it.
➡️ Below are 5 excellent AI Act guides to help you expand your knowledge. Download, bookmark & share:
1. "The William Fry AI Guide" (William Fry). By Barry Scannell, David Cullen & Susan Walsh. 📙 Read it here.
2. "EU AI Act A Pioneering Legal Framework On Artificial Intelligence - Practical Guide" (Cuatrecasas). By Joana Mota Agostinho, Valentín García González, Nora Oyarzabal, Cristina Romariz, Ramon Baradat Marí, Lavínia Marques, António Souto Moura & Lucas Battistello Espindola. 📙 Read it here.
3. "Decoding the EU Artificial Intelligence Act" (KPMG). By David Rowlands & Laurent Gobbi. 📙 Read it here.
4. "EU AI Act: Navigating a Brave New World" (Latham & Watkins). By Elisabetta Righini, Hanno Kaiser, Tim Wybitul, Fiona Maclean, Jean-Luc Juhan, Myria Saarinen & Michael Rubin. 📙 Read it here.
5. "Conformity Assessments Under the Proposed EU AI Act: A Step-By-Step Guide" (Future of Privacy Forum & OneTrust). By Katerina Demetzou, Vasileios Rovilos, Gabriela Zanfir-Fortuna, Rob van Eijk, Andrew Clearwater, Alexis Kateifides & Alexander Thompson. 📙 Read it here.
➡️ Tip: If you're looking to upskill and advance your AI Governance career, register for our acclaimed 4-week EU AI Act Bootcamp and join 875+ people who have participated in our training programs. With the 🔥 AI Governance package, you can enroll in both AI Bootcamps and save $180 (that's 20% off).
⚖️ Noyb vs. X/Twitter
The non-profit noyb filed 9 complaints (in 🇦🇹🇧🇪🇫🇷🇬🇷🇮🇪🇮🇹🇳🇱🇪🇸🇵🇱) against X/Twitter due to its use of personal data to train Grok, its AI system. Here's what you need to know:
➡️ According to noyb, X/Twitter is violating at least Articles 5(1) and (2), 6(1) and (4), 9(1), 12(1) and (2), 13(1) and (2), 17(1)(c), 18(1)(d), 19, 21(1) and 25 of the GDPR. These are noyb's main allegations:
➵ "Twitter has no legitimate interest under Article 6(1)(f) GDPR that would override the interest of the complainant (or any data subject) and no other legal basis to process such vast amounts of personal data for undefined purposes."
➵ Twitter unlawfully assumed permission to process personal data for undefined, broad technical means (“machine learning or artificial intelligence models”) without specifying the purpose of the processing under Article 5(1)(b) GDPR.
➵ Twitter has taken steps to deter data subjects from exercising their right to choose by pretending that data subjects only enjoy a right to object (“opt-out”) instead of relying on consent (“opt-in”) and by deterring users from objecting under Article 21 GDPR.
➵ Twitter fails to provide the necessary “concise, transparent, intelligible and easily accessible” information, “using clear and plain language”.
➵ Twitter is highly unlikely to properly differentiate (i.) between data subjects where it can rely on a legal basis to process personal data and other data subjects where such a legal basis does not exist and (ii.) between personal data that falls under Article 9 GDPR and other data that does not.
➵ The processing of personal data is highly likely to be irreversible and thus Twitter is unable to comply with the right to be forgotten once personal data of the complainant is ingested into (unspecified) “machine learning or artificial intelligence models.”
➡️ I especially recommend that everyone read pages 14 and 15, which discuss the non-applicability of legitimate interest when using personal data to train AI. This is the concrete discussion that has been missing from recent data protection authorities' reports.
➡️ In my opinion, noyb makes it even clearer that the lack of a firm position from EU data protection authorities on AI practices involving personal data is detrimental to everyone. Kudos to Max Schrems and his team. To learn more about noyb's recent complaints and legal actions, join my live talk with Max in September (more info below).
➡️ Read noyb's complaint (Ireland) here.
🎙️ AI live talks: my conversation with Max Schrems
If you are interested in the intersection of privacy and AI, don't miss my live talk with Max Schrems (our second one!). Register now. Here's why you should join us live:
➵ If you have been reading this newsletter for some time, you know that my view is that common AI practices - which became ubiquitous in the current Generative AI wave - are unlawful from a GDPR perspective. Still, we have yet to see a clear response from data protection authorities.
➵ Max - the Chairman of noyb and one of the world's leading privacy advocates - has been a tireless defender of privacy rights. More recently, he and his team have also been pioneers in defending those rights in the context of new AI-related risks and challenges.
➵ In this live talk, we'll discuss noyb's recent legal actions in this area, including their complaints against Meta & X/Twitter (see above), legitimate interest in the context of AI, and more. If you attend live, you'll also be able to post your questions in the chat (and read other participants' questions and comments).
➵ This is my second talk with Max. The first one, in which we discussed GDPR enforcement challenges, was watched by thousands of people (live and on-demand). You can find it here.
➡️ If you are interested in privacy and AI, or if you work in AI policy, compliance & regulation, you can't miss it. To participate, register here.
📑 AI Governance Research
Claudio Novelli & Giulia Sandri published "Digital Democracy in the Age of Artificial Intelligence," and it's a great read for everyone in AI & public policy. Quotes:
"(...) while AI can improve campaign efficiency and personalisation, it also raises significant risks. AI-generated misinformation and ethical dilemmas are prevalent issues, with deepfakes being a notable example. These realistic but fake videos can manipulate public perception and spread false information. They can thus be used to promote unethical – if not illegal – competition between candidates. Recent research on deepfakes and cheapfakes highlights significant advancements and concerns in digital misinformation. Deepfakes use sophisticated AI to create highly realistic but fake videos, posing severe threats to information integrity and public trust. Conversely, cheapfakes, which are simpler to produce, involve basic editing techniques to manipulate content."
"Digital platforms have reshaped political participation, offering new civic engagement and advocacy avenues. AI enhances these processes through personalised communication, real-time monitoring, and data analysis but also poses risks of manipulation and disinformation. AI improves efficiency and integrity in modern electoral processes through voter registration, e-voting, and result tabulation. However, it also raises privacy, security and trust issues. AI's predictive capabilities in electoral behaviour introduce new dynamics in political competition, raising ethical concerns about manipulation and democratic legitimacy."
"To ensure the mitigation of the primary AI and digital technology-related risks in digital democracy processes, enhancing citizens' digital media literacy is crucial. This includes educating individuals on critically assessing information, recognising misinformation, and understanding the underlying mechanisms of AI and digital platforms. On the supply side, robust AI and platform regulation must establish clear ethical guidelines and accountability measures. These regulations should prevent misuse, ensure transparency in AI-driven decision-making processes, and protect user privacy and data security. By addressing both the demand and supply sides, a more resilient and trustworthy digital democratic environment can be fostered."
➡️ Read the full chapter here.
🎤 Are you looking for a speaker in AI, tech & privacy?
I would welcome the opportunity to:
➵ Give a talk at your company;
➵ Speak at your event;
➵ Coordinate a private AI Bootcamp for your team (15+ people).
🏛️ AI Lawsuit: Elon Musk vs. Sam Altman
Elon Musk sued OpenAI & its CEO Sam Altman again, this time in a federal court and alleging manipulation & more. The drama continues. Quotes:
"Elon Musk’s case against Sam Altman and OpenAI is a textbook tale of altruism versus greed. Altman, in concert with other Defendants, intentionally courted and deceived Musk, preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence (“AI”). Altman and his long-time associate Brockman assiduously manipulated Musk into co-founding their spurious non-profit venture, OpenAI, Inc., by promising that it would chart a safer, more open course than profit-driven tech giants. The idea Altman sold Musk was that a non-profit, funded and backed by Musk, would attract world-class scientists, conduct leading AI research and development, and, as a meaningful counterweight to Google’s DeepMind in the race for Artificial General Intelligence (“AGI”), decentralize its technology by making it open source. Altman assured Musk that the non-profit structure guaranteed neutrality and a focus on safety and openness for the benefit of humanity, not shareholder value. But as it turns out, this was all hot-air philanthropy—the hook for Altman’s long con."
"After Musk lent his name to the venture, invested significant time, tens of millions of dollars in seed capital, and recruited top AI scientists for OpenAI, Inc., Musk and the non-profit’s namesake objective were betrayed by Altman and his accomplices. The perfidy and deceit are of Shakespearean proportions."
"Once OpenAI, Inc.'s technology approached transformative AGI, Altman flipped the narrative and proceeded to cash in. In partnership with Microsoft, Altman established an opaque web of for-profit OpenAI affiliates, engaged in rampant self-dealing, seized OpenAI, Inc.’s Board, and systematically drained the non-profit of its valuable technology and personnel. The resulting OpenAI network, in which Altman and Microsoft hold significant interests, was recently valued at a staggering $100 billion."
"As a result of their unlawful actions, Defendants have been unjustly enriched to the tune of billions of dollars in value, while Musk, who co-founded their de facto for-profit start-up, has been conned along with the public, whom OpenAI's vital technology was supposed to benefit. Musk brings this remedial action to divest Defendants of their ill-gotten gains."
➡️ See the lawsuit here.
🏛️ AI Lawsuit: YouTuber David Millette vs. OpenAI
YouTube creator David Millette sued OpenAI over its non-consensual transcription of YouTube videos to train ChatGPT. The lawsuits against OpenAI are piling up. Important quotes:
"This case addresses the surreptitious, non-consensual transcription of millions of YouTube users’ videos by Defendants to train Defendants’ AI software products. For years, YouTube has been a popular video-sharing platform that allows content creators and users to upload and share videos with audiences worldwide. However, unbeknownst to those who upload videos to YouTube, Defendants have been covertly transcribing YouTube videos to create training datasets that they then use to train their AI products."
"By transcribing and using these videos in this way, Defendants profit from Plaintiff’s and class members’ data time and time again. As Defendants’ AI products become more sophisticated through the use of training datasets, they become more valuable to prospective and current users, who purchase subscriptions to access Defendants’ AI products."
"By collecting and using this data without consent, Defendants have profited significantly from the use of Plaintiff’s and Class members’ materials, violated California’s Unfair Competition Law (“UCL”), and been unjustly enriched at Plaintiff and Class members’ expense."
➡️ See the lawsuit here.
📋 AI Report: UN and Intl. Labour Organization
The United Nations & the International Labour Organization published the report "Mind the AI Divide - Shaping a Global Perspective on the Future of Work," and it's a must-read for everyone interested in AI & the future of work. Quotes:
"Research on the possible effects of generative AI on employment across the world suggests that while there are likely to be important transformative effects on some occupations, impacts in terms of job losses are much less than headline figures appearing in the media, and certainly do not point to a jobless future. According to an analysis undertaken by the International Labour Organization on the potential exposure of tasks to generative AI technology, clerical support workers are the most exposed occupational group, with 24 percent of the tasks in these jobs associated with a high level of exposure to automation and another 58 percent with medium-level exposure (see Figure 1). Other occupational groups are less exposed, with only 1 to 4 percent of tasks considered as having high automation potential and medium-exposed tasks not exceeding 25 percent. This means that, while certain tasks in these occupations could potentially be automated, most tasks still require human intervention. Such partial automation could enable efficiency gains, by allowing humans to spend more time on other areas of work."
"Narrowing the divide is not a straightforward endeavour. It requires policies at the international and national level, with special attention to integration of AI into the world of work. As the analysis in this report has shown, advances in technology put at risk jobs in sectors such as call centres and other types of business process outsourcing that are prevalent in some developing countries. In addition, the potential for productivity gains in the workplace risks not being realized if basic impediments – such as the lack of access to computers at work and foundational digital skills – are not addressed. (...)."
"Building AI capacity through international cooperation is essential for equitably distributing the benefits of this transformative technology. By pooling expertise, targeting sensitive areas, and fostering public-private collaboration, countries can enhance their AI readiness, mitigate risks, and unlock the potential of AI for sustainable economic and social progress. International organizations play a critical role in facilitating this collaborative effort, serving as platforms for coordination, knowledge-sharing, and the development of global frameworks for responsible AI development and deployment."
➡️ Read the full report here.
🔥 AI Governance is HIRING
Below are 18 new AI Governance positions posted in the last few days. Bookmark, share & be an early applicant:
1. Booking.com (🇳🇱): Group Data & AI Governance Manager - apply
2. Nebius AI (🇳🇱): Privacy and AI Governance Manager - apply
3. Analog Devices (🇮🇪): Senior Manager, AI Governance - apply
4. Rakuten (🇯🇵): Vice Sr. Manager & AI Governance Manager - apply
5. Advanced Data & AI Company (🇦🇺): AI Governance & Privacy - apply
6. EY (🇮🇳): Manager, AI Governance - apply
7. Mastercard (🇹🇷): Manager, AI Governance - apply
8. Hyatt Hotels Corporation (🇺🇸): Director Data & AI Governance - apply
9. Barden (🇮🇪): AI Governance Specialist - apply
10. Mars (🇬🇧): Global Director Enterprise Data and AI Governance - apply
11. Siemens Energy (🇵🇹): AI Governance Consultant - apply
12. Zurich Insurance (🇪🇸): AI Governance Architect / Technical Lead - apply
13. Cruise (🇺🇸): Program Manager, Privacy & AI Governance - apply
14. GEICO (🇺🇸): Director, Data, Model & AI Governance - apply
15. Deloitte (🇨🇦): Senior Manager, AI Governance, Risk and Data - apply
16. Compass.uol (🇧🇷): AI Governance Analyst - apply
17. Visa (🇺🇸): Lead System Architect, AI Governance - apply
18. Snowflake (🇺🇸): Product Marketing Manager - AI Governance - apply
➡️ For more AI governance and privacy job opportunities, subscribe to our weekly job alert. Good luck!
🚀 Upskill & advance your AI governance career
As AI regulation expands worldwide, it's a great time to invest in your AI governance career. Save your spot in the autumn cohorts of our AI Bootcamps:
1. Emerging Challenges in AI, Tech & Privacy (4 weeks)
🗓️ Tuesdays, September 3 to 24 - learn more & register
2. The EU AI Act Bootcamp (4 weeks)
🗓️ Wednesdays, September 4 to 25 - learn more & register
More than 875 people have participated in our training programs - don't miss them!
🔥Tip: Save $180 with our AI Governance Package—join both Bootcamps at 20% off.
🙏 Thank you for reading!
➵ If you have comments on this week's edition, write to me, and I'll get back to you soon.
➵ If you enjoyed this edition of this newsletter, share it with friends & colleagues, and help me spread awareness about AI policy & regulation.
All the best, Luiza