👋 Hi, Luiza Jarovsky here. Welcome to the 125th edition of this newsletter with the latest developments in AI policy, compliance & regulation, read by 33,300+ subscribers in 145+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI Governance Professional Edition, I discuss the EU's consultation on trustworthy general-purpose AI models and its implications within the context of the EU AI Act. Paid subscribers received it yesterday and can access it here: 🤖 Trustworthy General-Purpose AI. If you enjoy this newsletter, upgrade to a paid subscription to support my work and gain access to my weekly exclusive analyses on AI compliance and regulation. Thank you!
🔥 Last call: the September cohorts of our AI Governance Bootcamps start next week! With our AI Governance Package, you can join both Bootcamps and save $180. Register here.
✏️ [AI & Education] 10 Excellent Resources
Education is one of the fields most impacted by AI, and everyone should be aware of how it is changing. Below are 10 excellent resources to learn more about the topic. Download, read & share:
1️⃣ UNESCO (authors: Fengchun Miao & Wayne Holmes): "Guidance for generative AI in education and research"
🔎 Read it here.
2️⃣ UK Parliament (authors: Juri Felix & Laura Webb): "Use of artificial intelligence in education delivery and assessment"
🔎 Read it here.
3️⃣ Cambridge Summit of Education (Rose Luckin's keynote speech): "How educators can help future learners outwit the robots"
🔎 Watch it here.
4️⃣ UNESCO: "Education in the age of artificial intelligence"
🔎 Read it here.
5️⃣ The World Bank (authors: Ezequiel Molina, Cristóbal Cobo, Helena Rovner & Jasmine Pineda): "AI revolution in education: What you need to know. In Digital Innovations in Education"
🔎 Read it here.
6️⃣ UNESCO: "Reimagining our futures together: a new social contract for education"
🔎 Read it here.
7️⃣ World Economic Forum (authors: Genesis Elhussein, Elselot Hasselaar, Ostap Lutsyshyn, Tanya Milberg & Saadia Zahidi): "Shaping the Future of Learning: The Role of AI in Education 4.0"
🔎 Read it here.
8️⃣ UNESCO (authors: Fengchun Miao, Kelly Shiohira, Zaahedah Vally & Wayne Holmes): "International forum on AI and education: steering AI to empower teachers and transform teaching, 5-6 December 2022; analytical report"
🔎 Read it here.
9️⃣ U.S. Department of Education, Office of Educational Technology (authors: Miguel A. Cardona, Roberto Rodríguez & Kristina Ishmael): "Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations"
🔎 Read it here.
🔟 UNESCO: "International conference on Artificial intelligence and Education, Planning education in the AI Era: Lead the leap: final report"
🔎 Read it here.
📌 [AI Glossary] 100 Concepts
As AI development accelerates, AI literacy becomes increasingly important, and everyone should be familiar with AI-related terminology. Out of the 100 concepts listed below, how many do you know?
➡️ Among these 100 concepts, there are terms relating more specifically to:
➵ Machine learning
➵ Deep learning
➵ AI governance
➵ AI regulation
➵ Data protection
1️⃣ AI regulation is in ongoing development, and it's a good idea to use AI legislation trackers to stay up to date with new laws being introduced worldwide. I've gathered some of the best legislation trackers in this article; check it out.
2️⃣ Data protection law is a core discipline in AI development and governance, and many essential issues are still being debated by data protection authorities. I've written extensively about it in my newsletter; check out the archive.
3️⃣ AI & machine learning are in constant development as well. If you are an AI governance professional, it's a good idea to look for resources to expand your technical knowledge. I've gathered a few resources here - check them out.
👉 If you want to upskill and advance your AI Governance career, register for our 4-week Bootcamps (led by me) and join 875+ people who have participated in our training programs. With the AI Governance package, you save $180 (20% off). Register here.
📑 [AI Research] "Regulating under Uncertainty"
The report "Regulating under Uncertainty: Governance Options for Generative AI" by Florence G'sell is a must-read for everyone in AI governance. Important information:
➡️ This is a 531-page document in which the author offers a broad overview of aspects relevant to AI regulation initiatives. Among the topics covered are:
➵ Generative AI: The technology and supply chain
➵ Challenges and risks of generative AI
➵ Industry initiatives
➵ Regulatory initiatives
➵ International initiatives and negotiations
➡️ The report reviews regulatory initiatives from:
➵ EU
➵ China
➵ US
➵ Brazil
➵ Canada
➵ India
➵ Israel
➵ Japan
➵ Saudi Arabia
➵ Singapore
➵ South Korea
➵ UAE
➵ UK
➡️ It also covers international initiatives and negotiations, such as:
➵ United Nations
➵ OECD
➵ The G7
➵ The G20
➵ BRICS
➵ African Union
➵ AI safety summits
➵ The Council of Europe’s treaty
➵ The Global Partnership on AI
➵ US-EU Trade and Technology Council
➵ UNESCO
➡️ According to the author:
"The title of this report – “Regulating Under Uncertainty: Governance Options for Generative AI” – seeks to convey the unprecedented position of governments as they confront the regulatory challenges AI poses. Regulation is both urgently needed and unpredictable. It also may be counterproductive, if not done well. However, governments cannot wait until they have perfect and complete information before they act, because doing so may be too late to ensure that the trajectory of technological development does not lead to existential or unacceptable risks. The goal of this report is to present all of the options that are “on the table” now with the hope that all stakeholders can begin to establish best practices through aggressive information sharing. The risks and benefits of AI will be felt across the entire world. It is critical that the different proposals emerging are assembled in one place so that policy proponents can learn from one another and move ahead in a cooperative fashion."
➡️ It's a comprehensive document for anyone interested in the current state of AI regulation efforts and in the main debates and issues shaping AI governance & compliance.
👉 Read the report here.
🎙️ [AI Live Talks] Conversation with Max Schrems
If you are interested in the intersection of privacy and AI, don't miss my live talk with Max Schrems (our second one!). Register now. Here's why you should join us live:
➵ If you have been reading this newsletter for some time, you know my view: common AI practices - now ubiquitous in the current Generative AI wave - are unlawful from a GDPR perspective. Yet we have not seen a clear response from data protection authorities.
➵ Max - the Chairman of noyb - is one of the world's leading privacy advocates and a tireless defender of privacy rights. More recently, he and his team have also been pioneers in addressing the new privacy risks and challenges that AI brings.
➵ In this live talk, we'll discuss noyb's recent legal actions in this area, including their complaints against Meta & X/Twitter, legitimate interest in the context of AI, and more.
➵ This is my second talk with Max. The first one, in which we discussed GDPR enforcement challenges, was watched by thousands of people (live and on-demand). You can find it here.
👉 If you are interested in privacy and AI, or if you work in AI policy, compliance & regulation, you can't miss it. To participate, register here.
🌍 [AI Regulation] Emerging Approaches Worldwide
UNESCO published its "Consultation Paper on AI Regulation - Emerging Approaches Across the World," and it's an excellent read for everyone in AI governance. Important information:
➵ Among other topics, the consultation paper describes 9 regulatory approaches (with examples from around the world) that are highly relevant to anyone working in or studying AI governance & regulation:
"1️⃣ Principles-Based Approach: Offer stakeholders a set of fundamental propositions (principles) that provide guidance for developing and using AI systems through ethical, responsible, human-centric, and human-rights-abiding processes.
2️⃣ Standards-Based Approach: Delegate (totally or partially) the state’s regulatory powers to organizations that produce technical standards that will guide the interpretation and implementation of mandatory rules.
3️⃣ Agile and Experimentalist Approach: Generate flexible regulatory schemes, such as regulatory sandboxes and other testbeds, that allow organizations to test new business models, methods, infrastructure, and tools under more flexible regulatory conditions and with the oversight and accompaniment of public authorities.
4️⃣ Facilitating and Enabling Approach: Facilitate and enable an environment that encourages all stakeholders involved in the AI lifecycle to develop and use responsible, ethical, and human rights-compliant AI systems.
5️⃣ Adapting Existing Laws Approach: Amend sector-specific rules (e.g., health, finance, education, justice) and transversal rules (e.g., criminal codes, public procurement, data protection laws, labor laws) to make incremental improvements to the existing regulatory framework.
6️⃣ Access to Information and Transparency Mandates Approach: Require the deployment of transparency instruments that enable the public to access basic information about AI systems.
7️⃣ Risk-Based Approach: Establish obligations and requirements in accordance with an assessment of the risks associated with the deployment and use of certain AI tools in specific contexts.
8️⃣ Rights-Based Approach: Establish obligations or requirements to protect individuals' rights and freedoms.
9️⃣ Liability Approach: Assign responsibility and sanctions to problematic uses of AI systems."
➵ It's important to note that the regulatory approaches described above are not mutually exclusive; AI laws around the world will often combine two or more of them.
➵ The paper is open for public consultation in English until September 19, 2024.
👉 Download the document here.
⚖️ [AI Lawsuit] Best-Selling Authors vs. Anthropic
Best-selling authors Andrea Bartz, Charles Graeber & Kirk Johnson have filed a copyright infringement lawsuit against Anthropic, as AI copyright lawsuits continue to pile up. Important quotes below:
"Anthropic’s commercial gain has come at the expense of creators and rightsholders, including Plaintiffs and members of the Class. Book readers typically purchase books. Anthropic did not even take that basic and insufficient step. Anthropic never sought—let alone paid for—a license to copy and exploit the protected expression contained in the copyrighted works fed into its models. Instead, Anthropic did what any teenager could tell you is illegal. It intentionally downloaded known pirated copies of books from the internet, made unlicensed copies of them, and then used those unlicensed copies to digest and analyze the copyrighted expression-all for its own commercial gain. The end result is a model built on the work of thousands of authors, meant to mimic the syntax, style, and themes of the copyrighted works on which it was trained."
"Anthropic styles itself as a public benefit company, designed to improve humanity. In the words of its co-founder Dario Amodei, Anthropic is “a company that’s focused on public benefit.” For holders of copyrighted works, however, Anthropic already has wrought mass destruction. It is not consistent with core human values or the public benefit to download hundreds of thousands of books from a known illegal source. Anthropic has attempted to steal the fire of Prometheus. It is no exaggeration to say that Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works."
"Anthropic has also usurped a licensing market for copyright owners. In the last two years, a thriving licensing market for copyrighted training data has developed. A number of AI companies, including OpenAI, Google, and Meta, have paid hundreds of millions of dollars to obtain licenses to reproduce copyrighted material for LLM training. These include deals with Axel Springer, News Corporation, the Associated Press, and others. Furthermore, absent Anthropic’s largescale copyright infringement, blanket licensing practices would be possible through clearinghouses, like the Copyright Clearance Center, which recently launched a collective licensing mechanism that is available on the market today. Anthropic, however, has chosen to use Plaintiffs works and the works owned by the Class free of charge, and in doing so has harmed the market for the copyrighted works by depriving them of book sales and licensing revenue."
👉 Read the lawsuit here.
🚀 [Corporate] AI Governance Upskilling
I would welcome the opportunity to:
➵ Give a talk about the latest developments in AI, tech & privacy, discussing emerging compliance & governance challenges in these areas;
➵ Coordinate an in-company AI Governance Bootcamp for your team.
👉 Get in touch here.
🔥 [Job Openings] AI Governance is HIRING
Below are 15 new AI Governance positions posted in the last few days. Bookmark, share & be an early applicant:
1. 🇺🇸 The Future Society: Director, U.S. AI Governance - apply
2. 🇩🇰 PFA: AI Governance Specialist - apply
3. 🇬🇧 Lloyds Banking Group: Head of Data and AI Ethics - apply
4. 🇭🇺 Thermo Fisher Scientific: AI Governance Lead - apply
5. 🇮🇳 Microsoft: Senior PM - Responsible AI Applied Science - apply
6. 🇺🇸 Sutter Health: Director, Data and AI Governance - apply
7. 🇳🇴 Accenture Nordics: Responsible AI Advisor - apply
8. 🇺🇸 SAS: Solution Consultant, AI Governance Advisory - apply
9. 🇨🇿 EY: Responsible AI - Senior Manager - apply
10. 🇺🇸 Trace3: Consultant, AI Governance Risk & Compliance - apply
11. 🇳🇱 Accenture: Responsible AI Advisor - apply
12. 🇺🇸 Health Care Service Corporation: AI Governance Specialist - apply
13. 🇫🇷 Dataiku: Software Engineer - AI Governance - apply
14. 🇮🇪 Analog Devices: Senior Manager, AI Governance - apply
15. 🇺🇸 TikTok: Software Engineer Intern AI Governance - apply
👉 For more AI governance and privacy job opportunities, subscribe to our weekly job alert. Good luck!
⏰ [Last call] AI Governance Bootcamps in September
Our 4-week AI Governance Bootcamps are live online training programs designed for professionals who want to upskill and advance their AI governance careers. 875+ professionals have already joined them.
Each Bootcamp includes 4 live classes with me, additional material, quizzes, office hours, a certificate upon completion, and 8 CPE credits pre-approved by the IAPP.
👉 Check out the programs & read testimonials here; sign up for information about upcoming programs here. If you have questions, write to me.
👉 Register for both Bootcamps using our AI Governance Package and save $180 (20% off). 🔥 This is the last week to register for cohorts starting in September.
🙏 Thank you for reading
➵ If you have comments on this edition, write to me, and I'll get back to you soon.
➵ If you enjoyed this edition, consider sharing it with friends & colleagues to help me spread awareness about AI policy, compliance & regulation.
See you next week!
All the best, Luiza