👋 Hi, Luiza Jarovsky here. Welcome to the 134th edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 35,600+ subscribers in 150+ countries. I hope you enjoy reading it as much as I enjoy writing it.
💎 In this week's AI Governance Professional Edition, I'll explore the EU AI Act's supervision and enforcement mechanisms in the context of general-purpose AI models. Paid subscribers will receive it tomorrow. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly newsletter editions (this free newsletter + the AI Governance Professional Edition) and stay ahead in the fast-paced field of AI governance.
⏰ The October cohorts start next week! If you are transitioning to AI governance and can invest at least 5 hours a week in learning & upskilling, our 4-week AI Governance Training is for you! Join 950+ professionals from 50+ countries who have already benefited from our programs. 👉 Reserve your spot today!
👉 A special thanks to MineOS for sponsoring this week's free edition of the newsletter. Read their article:
Privacy risks are more prevalent than ever, but with the right strategic approach, you can turn risks into opportunities for growth through enhanced resilience and transparent compliance, something customers have come to demand from brands. See the insights “Data Diva” Debbie Reynolds shared with MineOS, where she highlights the importance of a robust data governance strategy when navigating privacy-related risks.
🌍 The AI Regulation Divide
The Governor of California vetoed the AI safety bill SB-1047, which had the support of Elon Musk and Anthropic and was opposed by OpenAI, Meta, and Google. This development significantly impacts the AI governance debate. Here's why:
1️⃣ SB-1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," would have applied to AI models "trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer."
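For illustration only, the bill's applicability threshold quoted above can be sketched as a simple two-part check (both the compute and the cost conditions must hold, per the bill's text); the function and constant names below are hypothetical:

```python
# Illustrative sketch of SB-1047's (now-vetoed) applicability threshold.
# The bill covered models exceeding BOTH a compute and a training-cost bar.
COMPUTE_THRESHOLD_FLOPS = 1e26      # 10^26 integer or floating-point operations
COST_THRESHOLD_USD = 100_000_000    # $100 million at average cloud-compute prices

def covered_by_sb1047(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model would have fallen within the bill's scope."""
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd > COST_THRESHOLD_USD)

print(covered_by_sb1047(3e26, 250_000_000))  # a hypothetical frontier model: in scope
print(covered_by_sb1047(5e25, 40_000_000))   # a smaller model: out of scope
```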
2️⃣ Among its safety provisions (after significant amendments prompted by the tech industry) were a requirement to implement safety measures before starting to train a model, and powers for California’s attorney general to seek injunctive relief (forcing a company to cease AI-related operations deemed dangerous) and to sue an AI developer if its model causes a catastrophic event.
3️⃣ The bill was approved by California's Legislature in August, and among its supporters were Elon Musk, Anthropic, Boston Dynamics, and scientists like Gary Marcus.
4️⃣ Among those opposing the bill were companies like OpenAI, Meta, Google, and Microsoft, as well as scientists such as Yann LeCun, Fei-Fei Li, and Andrew Ng. When opposing the bill, the AI Alliance stated that it "would slow innovation, thwart advancements in safety and security, and undermine California’s economic growth. The bill’s technically infeasible requirements will chill innovation in the field of AI and lower access to the field’s cutting edge (...)”
5️⃣ In his veto, Gavin Newsom, the Governor of California, stated: "Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself."
6️⃣ From a global AI regulation perspective, California has taken a position opposed to that of the EU (which approved the EU AI Act), leaving stricter guardrails aside and focusing on transparency measures. In this context, on September 28, Governor Newsom signed the bill “AB 2013 Generative Artificial Intelligence: Training Data Transparency,” establishing transparency obligations for developers of Generative AI systems.
7️⃣ This is a core development in the AI regulation debate, and it remains unclear if we'll have a global regulatory divide in AI, with some countries following the EU approach, where the law establishes a comprehensive risk-based legal framework, market surveillance system, and enforcement mechanisms, and other countries following the US approach, where there isn't a comprehensive legal framework, but various self-regulatory initiatives and fragmented rules.
⚖️ German Court Dismisses AI Copyright Lawsuit
In a shock to many, a German court dismissed the AI copyright lawsuit filed by photographer Robert Kneschke against LAION for using his images to train AI without his consent. Here's what you need to know:
1️⃣ The photographer filed the AI copyright lawsuit against the non-profit LAION (Large-scale Artificial Intelligence Open Network) in April 2023, arguing that his images were used to train the LAION 5B dataset without his consent.
2️⃣ The Hamburg Regional Court dismissed the infringement allegations and decided that LAION's use of Robert Kneschke’s images to train AI fell under Section 60d of the German Copyright Law (implementing Article 3 of the EU Copyright Directive), which establishes an exception for text and data mining for the purposes of scientific research.
3️⃣ This is what Section 60d of the German Copyright Law says:
"Text and data mining for scientific research purposes
(1) It is permitted to make reproductions to carry out text and data mining (...) for scientific research purposes in accordance with the following provisions.
(2) Research organisations are authorised to make reproductions.
'Research organisations' means universities, research institutes and other establishments conducting scientific research if they
1. pursue non-commercial purposes,
2. reinvest all their profits in scientific research or
3. act in the public interest based on a state-approved mandate. (...)"
4️⃣ Why has this decision shocked many copyright experts?
➵ The court based its decision on Section 60d, which reduces the control creators can have over their work, as it does not allow the reservation of rights established in Section 44b. This means that even if a creator opts out and states upfront (in a machine-readable format) that they don't want their work used to train AI, this reservation of rights is not applicable when scientific research purposes are involved, and Section 60d is applied.
➵ Many creators believed that the reservation of rights mechanism established by the EU Copyright Directive was a safe way to ensure they would remain in control and be able to opt out of AI training. This decision seemingly expanded the potential application of Section 60d and created doubts about the interpretation of EU copyright law in the context of AI training.
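For context, a Section 44b rights reservation must be machine-readable; one emerging convention is the W3C TDM Reservation Protocol's "tdm-reservation" signal. The sketch below is a hypothetical illustration (not a description of LAION's crawler) of how such an opt-out might be checked, and why it would still not block uses the court places under the research exception:

```python
# Hypothetical sketch: checking a machine-readable TDM opt-out signal,
# modeled on the W3C TDM Reservation Protocol's "tdm-reservation" header.
# Under the Hamburg court's reading, even a valid reservation does not block
# text and data mining carried out for scientific research (Section 60d).

def tdm_rights_reserved(response_headers: dict) -> bool:
    """Return True if the headers signal that TDM rights are reserved."""
    return response_headers.get("tdm-reservation", "0").strip() == "1"

print(tdm_rights_reserved({"tdm-reservation": "1"}))  # opt-out present
print(tdm_rights_reserved({}))                        # no reservation signaled
```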
5️⃣ The main issue now is whether this decision will set a precedent in the EU. In any case, Robert Kneschke can still appeal.
⚖️ AI Copyright: Dataset Inspection
Like in the movies, the judge in one of the AI copyright lawsuits allowed authors' representatives to inspect OpenAI's confidential training dataset in a secure room. Will it become the norm? Here's what will happen:
"Training Data shall be made available for inspection in electronic format at OpenAI’s offices in San Francisco CA, or at a secure location determined by OpenAI within 25 miles of San Francisco, CA; or at another mutually agreed location. Training Data will be made available for inspection between the hours of 8:30 a.m. and 5:00 p.m. on business days, although the parties will be reasonable in accommodating reasonable requests to conduct inspections at other times."
"Training Data shall be made available by OpenAI in a secure room on a secured computer without Internet access or network access to other unauthorized computers or devices. The secured computer will contain a README file that will provide a directory of the Training Data and brief descriptions of layout, format, and searching.”
"No recordable media or recordable devices, including without limitation computers, cellular telephones, cameras, other recording devices, or drives of any kind, shall be permitted into the secure inspection room, except the Producing Party may provide a limited-use note-taking computer at Inspecting Party’s request, solely for note-taking purposes. At the end of each day of inspection, the Inspecting Party shall be able to copy notes from the note-taking computer onto a recordable device, under the supervision of the Producing Party. For the avoidance of doubt, medical devices (e.g., heart monitors, insulin pumps), stopwatches, and timers are permitted so long as such devices are not capable of recording or copying data from the secured computer."
"All persons who will review OpenAI’s Training Data on behalf of an Inspecting Party, including the Inspecting Party’s counsel, must qualify under paragraph 7.3 of the Stipulated Protective Order as an individual to whom “HIGHLY CONFIDENTIAL – ATTORNEYS’ EYES ONLY” information may be disclosed, and must sign the Non-Disclosure Agreement attached as Exhibit A to the Stipulated Protective Order."
⚖️ Artist Sues the U.S. Copyright Office
An artist is suing the U.S. Copyright Office over its decision not to grant copyright protection for AI-generated art. Should AI art receive copyright protection? Here's what happened and why it matters:
1️⃣ According to the lawsuit, Jason Allen uses Midjourney (a Generative AI tool) to create his artworks. In 2022, he filed an application to register the copyright of "Théâtre D'opéra Spatial" - one of his AI-generated artworks - with the U.S. Copyright Office. However, the U.S. Copyright Office denied the copyright application because the artwork "lacks the human authorship necessary to support a copyright claim."
2️⃣ Behind this lawsuit are some extremely important issues involving AI and copyright law, such as:
➵ Should AI-generated or AI-assisted artworks receive copyright protection? Should any type of AI assistance be allowed?
➵ What is the minimum threshold of human effort? How do we measure the human effort involved?
➵ If multiple iterations with the AI system were needed to create the artwork, or if the artist had to learn about the AI system's functioning and practice prompting, should it change the type of protection afforded?
3️⃣ Here are some of the artist's arguments in this case:
"Plaintiff conducted extensive testing of the outputs generated based on his prompts. Through 624 iterations, he noted the results produced by each word and its placement within the prompt. By studying the AI's behavior, he learned when Midjourney focused on specific parts of his instructions and when it ignored them altogether. He realized that guiding the AI to incorporate all elements required a process of trial and error, akin to a director working with a cameraman. Just as a cameraman needs repeated instructions on what to focus on, Midjourney needed precise and repeated guidance. During his experimentation, Plaintiff developed a "writing technique" for crafting effective prompts, ensuring the AI accurately captured his vision."
"The Work was not created by Midjourney merely through inputting a few prompts or pressing a button. Midjourney did not engage in any creative selection or arrangement of the image elements. Instead, it simply followed the meticulously crafted instructions provided by the Plaintiff. Midjourney, lacking independent creativity, did not generate the image on its own. Midjourney, much like Gigapixel and Adobe Photoshop, only assisted Plaintiff in creating the Work."
📈 The Rapid Adoption of Generative AI: Stats
The paper "The Rapid Adoption of Generative AI" by Alexander Bick, Adam Blandin, and David Deming has mind-blowing stats that should serve as a wake-up call in favor of AI regulation & governance. Check it out:
1️⃣ This was the 1st nationally representative survey in the US on generative AI use at work and home, and the data are sourced from the Real-Time Population Survey (RPS). According to the paper:
2️⃣ Generative AI's adoption rate is faster than PCs or the internet
"(...) generative AI has been adopted at a faster pace than PCs or the internet. Faster adoption of generative AI compared with PCs is driven by much greater use outside of work, probably due to differences in portability and cost. (...) We find an adoption rate of 28 percent in year two for generative AI, compared with a 25 percent adoption rate in year three for PCs." (page 15)
3️⃣ Extremely high adoption rate at work, across a variety of occupations
"Generative AI adoption at work is highest for computer/mathematical and management occupations, at about 49 percent. Usage at work is also high for business and finance and education occupations (42 and 38 percent, respectively). However, generative AI adoption is relatively common across a range of jobs. With the exception of personal services, at least 20 percent of workers from all major occupations groups use generative AI at work. Interestingly, 22 percent of workers in “blue collar” jobs - construction and extraction, installation and repair, skilled production, and transportation and moving occupations - use generative AI at work." (page 17)
4️⃣ Generative AI assists with 0.5% to 3.5% of all work hours in the US
"We estimate that between 0.5 and 3.5 percent of all work hours in the U.S. are currently assisted by generative AI. Assuming that the productivity gains from recent experimental studies are externally valid, this suggests that generative AI could plausibly increase labor productivity by between 0.125 and 0.875 percentage points at current levels of usage, although we caution that this calculation should be considered highly speculative given the assumptions it requires." (pages 20-21)
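The quoted estimate is simple arithmetic: the share of work hours assisted by generative AI times an assumed productivity gain per assisted hour. The ~25% gain figure below is my inference from the quoted range (0.125/0.5 = 0.875/3.5 = 0.25), so treat it as an assumption, and recall the authors themselves call the calculation highly speculative:

```python
# Back-of-the-envelope reproduction of the paper's speculative calculation:
# (share of U.S. work hours assisted by generative AI)
#   x (assumed productivity gain per assisted hour) = aggregate gain.
ASSUMED_GAIN_PER_ASSISTED_HOUR = 0.25  # assumption inferred from the quoted range

for share_of_hours in (0.005, 0.035):  # 0.5% and 3.5% of all work hours
    gain_pp = share_of_hours * ASSUMED_GAIN_PER_ASSISTED_HOUR * 100
    print(f"{share_of_hours:.1%} of hours -> +{gain_pp:.3f} percentage points")
```

Plugging in the paper's bounds reproduces its 0.125 to 0.875 percentage-point range.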
5️⃣ What should we take from that? Generative AI is spreading fast across various occupations, and with that spread comes the potential for widespread harm that might be difficult to control after the fact. AI governance and regulation are needed now, as are effective AI literacy efforts.
📜 100+ Companies Signed the EU AI Pact
The EU Commission announced that 100+ companies signed the EU AI Pact pledges to drive trustworthy and safe AI development. According to the press release:
“The EU AI Pact voluntary pledges call on participating companies to commit to at least three core actions:
➵ AI governance strategy to foster the uptake of AI in the organisation and work towards future compliance with the AI Act.
➵ High-risk AI systems mapping: Identifying AI systems likely to be categorised as high-risk under the AI Act
➵ Promoting AI literacy and awareness among staff, ensuring ethical and responsible AI development.”
➡️ So far, Meta & Apple haven't signed.
🏛️ Privacy Professionals & AI Governance
Have you heard of "digital governance"? Did you know that the privacy career is changing and AI governance skills are in demand? Here's what you should know:
1️⃣ The IAPP recently published its "Organizational Digital Governance Report 2024," and it's a must-read for privacy & AI governance professionals.
2️⃣ Many organizations are struggling to organize their internal digital governance structure. In this context, the privacy career is also changing, and leading privacy professionals should be able to navigate AI governance issues. See these two quotes from the IAPP report:
"Existing C-suite leaders of specific domains are seeing their personal remits expanded and elevated. For example, 69% of chief privacy officers surveyed have acquired additional responsibility for AI governance, while 69% are responsible for data governance and data ethics, 37% for cybersecurity regulatory compliance, and 20% for platform liability. This trend continues at a team level, with over 80% of privacy teams gaining responsibilities that extend beyond privacy. At 55%, more than one in two privacy professionals works in functions with AI governance responsibilities. At 58%, more than one in two privacy pros has picked up data governance and data ethics. At 32%, almost one in three covers cybersecurity regulatory compliance. At 19%, almost one in five has platform liability responsibilities."
"One trend taking root is the expansion and greater empowerment of the role of the CPO. Some of this extends to include responsibility for other digital governance subdomains. For example, many CPOs have acquired responsibility for AI governance. The IAPP-EY Professionalizing Organizational AI Governance Report found 63% of organizations have tasked their privacy functions with AI governance responsibility. In some cases, the extension is even broader to include digital safety and ethics. These are logical extensions in many ways, given the disciplinary, regulatory and governance overlaps. This trend can be observed through the changing job titles in the market, with many additional descriptors tagged on to CPO titles over the past year."
3️⃣ The privacy career is changing and adapting, and with AI's growth, AI governance skills are in demand. For those interested in advancing their privacy careers or pivoting to AI, it's a great moment to upskill and obtain AI governance training (and I'm glad to see that many have already realized that).
4️⃣ If you are transitioning to AI governance, don't miss our 4-week AI governance training.
🎙️ Global AI Regulation, with Raymond Sun
If you are interested in the current state of AI regulation—beyond just the EU and the U.S.—you can't miss my conversation with Raymond Sun in two weeks: register here. This is what we'll talk about:
➵ Among other topics, we'll discuss the latest AI regulation developments in:
🇦🇺 Australia
🇨🇳 China
🇪🇬 Egypt
🇮🇳 India
🇯🇵 Japan
🇲🇽 Mexico
🇳🇬 Nigeria
🇸🇬 Singapore
🇹🇷 Turkey
🇦🇪 United Arab Emirates
and more.
➵ There can be no better person to discuss this topic than Raymond Sun, a lawyer, developer, and the creator of the Global AI Regulation Tracker, which includes an interactive world map that tracks AI regulation and policy developments around the world.
➵ This will be the 19th edition of my Live Talks with global privacy & AI experts, and I hope you can join this edition live, participate in the chat, and stay up to date with AI regulatory approaches worldwide.
👉 To participate, register here.
🎬 Find all my previous Live Talks on my YouTube Channel.
🏛️ Transitioning to AI Governance? Join Our Training
Our 4-week AI Governance Training is a live, online, and interactive program led & designed by me for professionals who want to transition to the AI governance field and who can dedicate at least 5 hours a week (live sessions + self-learning) to the program.
➡️ So far, 950+ people from 50+ countries have already participated, and our 11th cohort starts next week. Here's what to expect:
➵ The full training includes a module on Emerging Challenges in AI, Tech & Privacy and a module on the EU AI Act, comprising a total of 8 live sessions with me, lasting ~75-90 minutes each;
➵ We'll have a meet-and-greet kick-off session and meet twice a week for four weeks. The sessions are interactive, and you're welcome to ask questions. After each live session, I'll send a quiz and additional learning material. Expect to invest 1-2 hours a week reviewing the material. If you have questions, you can email me or schedule an office hours appointment.
➵ When you finish the full program, you'll receive a certificate of completion and 16 CPE credits pre-approved by the IAPP (8 credits per module).
➵ To my knowledge, this is the most up-to-date and in-depth AI governance program available, and you can expect me to bring up the most recent developments in the field. During office hours, I'll be happy to answer your questions and discuss career-related issues I might be able to help you with.
Are you ready?
🗓️ The October training starts next week.
🎓 Check out more details about each module, and read testimonials here.
👉 Save your spot now and enjoy 20% off with our AI Governance Package.
We hope to see you there!
📚 AI Book Club: What Are You Reading?
📖 More than 1,500 people have joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The last book we recommended was Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari.
📖 Ready to discover your next favorite read? See the book list & join the book club here.
🔥 Job Opportunities: AI Governance is HIRING
Below are 10 new AI Governance positions posted in the last few days. Bookmark, share & be an early applicant:
1. BASF 🇪🇸 - Data & AI Governance Facilitation: apply
2. JPMorganChase 🇺🇸 - VP, AI Governance Lead: apply
3. Hays 🇭🇰 - Compliance Manager, Privacy & AI Governance: apply
4. TRUSTEQ GmbH 🇩🇪 - AI Governance Senior Consultant: apply
5. Analog Devices 🇮🇪 - Senior Manager, AI Governance: apply
6. ByteDance 🇬🇧 - AI Governance & Tech Policy: apply
7. Siemens Energy 🇵🇹 - AI Governance Consultant: apply
8. WayUp 🇺🇸 - AI Governance & Oversight: apply
9. EY 🇮🇳 - Manager, AI Governance, Risk Consulting: apply
10. Bankwest 🇦🇺 - Manager, Data Science, Responsible AI: apply
👉 For more AI governance and privacy job opportunities, subscribe to our weekly job alerts. Good luck!
🚀 Partnerships: Let's Work Together
Love this newsletter? Here’s how we can collaborate:
Become a Sponsor: Does your company offer privacy or AI governance solutions? Sponsor this newsletter (3, 6, or 12-month packages), get featured to 35,600+ email subscribers, and grow your audience: get in touch.
Upskill Your Privacy Team: Enroll your team in our 4-week AI Governance Training—three or more participants get a group discount: get in touch.
🙏 Thank you for reading!
If you have comments on this edition, write to me, and I'll get back to you soon.
If you found this edition valuable, consider sharing it with friends & colleagues to help spread awareness about AI policy, compliance & regulation. Thank you!
See you next week.
All the best, Luiza