👋 Hi, Luiza Jarovsky here. Welcome to the 154th edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 42,000+ subscribers in 160+ countries. I hope you enjoy reading it as much as I enjoy writing it.
👉 In this week's AI Governance Professional Edition, I'll dive deep into Brazil's proposed AI bill and compare it to the EU AI Act, highlighting some of the legal elements that should be prioritized in AI governance frameworks. Paid subscribers will receive it on Thursday. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly editions (this free newsletter + the AI Governance Professional Edition), access all previous and upcoming analyses in full, and stay ahead in the rapidly evolving field of AI governance.
🗣️ Step into 2025 with a new career path! In January, join me for a 3-week intensive AI Governance Training (8 live lessons; 12 hours total), already in its 16th cohort. Join over 1,000 professionals who have benefited from our programs: don't miss it! Students, NGO members, and professionals in career transition can request a discount.
🇮🇹 Italy vs. LLMs: Are They Unlawful?
Many didn't notice, but the Italian Data Protection Authority has just caused a massive earthquake in the field of AI. Here's the formal warning it issued to a publisher that finalized a licensing deal with OpenAI, followed by my commentary:
"Privacy guarantor to Gedi: be careful about selling personal data contained in the newspaper archive to OpenAI so that it can use them to train algorithms.
The digital archives of newspapers store the stories of millions of people, with information, details, personal data, even extremely sensitive, that cannot be licensed for use by third parties to train AI, without due precautions.
If the Gedi Group, by virtue of the agreement signed last September 24 with OpenAI, communicated to the latter the personal data contained in its archive, it could violate the provisions of the EU Regulation, with all the consequences, including sanctions, provided for by the legislation.
This is, in short, the formal warning that the Privacy Guarantor has sent to Gedi and to all the companies (Gedi News Network Spa, Gedi Periodi e Servizi Spa, Gedi Digital Srl, Monet Srl, and Alfemminile Srl) that are part of the agreement for the communication of editorial content stipulated with OpenAI. The measure was adopted after the first feedback provided by the company, as part of the investigation recently launched by the Authority.
Based on the information received, the Authority believes that the processing activities are intended to involve a large volume of personal data, including of a particular nature and of a judicial nature, and that the impact assessment, carried out by the company and transmitted to the Garante, does not sufficiently analyze the legal basis by virtue of which the publisher could transfer or license for use by third parties the personal data present in its archive to OpenAI, so that it can process them to train its algorithms.
Finally, the warning measure highlights how the information and transparency obligations towards the interested parties do not appear to have been sufficiently fulfilled and that Gedi is not in a position to guarantee the latter the rights they are entitled to under European privacy legislation, in particular the right to object."
*Automatic translation from Italian to English
➡️ The Italian DPA is saying that:
✅ The licensing deal is potentially unlawful;
✅ Other deals and business models based on LLMs might also be unlawful, as basic data protection obligations and rights are likely not respected.
➡️ This is a topic I've been covering in this newsletter for the past two years and in my AI Governance Training program: when we closely examine LLM-based business models, it's unclear how they comply with EU data protection law. More guidelines from the European Data Protection Board (EDPB) are needed. In the meantime, it seems the Italian Data Protection Authority is tackling the issue with its bare hands and saying: not here!
➡️ A reminder that this is not the first time the Italian DPA has taken pioneering enforcement measures against AI companies: Replika, OpenAI, and others have already been targeted in the past.
➡️ There's an interesting intersection with copyright here: licensing deals between content owners and AI companies have been proposed as a potential solution to AI copyright issues. If licensing becomes unlawful from a data protection perspective (in the EU), we're back to square one.
➡️ What might happen:
✅ The publishers' lawyers might formally respond;
✅ Other EU DPAs might follow suit - 😱.
🏛️ Overregulation?
Unpopular opinion: the EU is not overregulating AI.
👉 Join the discussion on LinkedIn, X/Twitter, or Bluesky.
🇧🇷 The Brazilian AI Bill Is Coming
The Brazilian Senate has approved a request for an urgent vote on a bill regulating AI. The vote is scheduled for tomorrow, Tuesday. Here are my comments and a comparison with the EU AI Act:
1️⃣ This is the proposed definition of an AI system (Article 4). Pay attention to how similar it is to the EU AI Act's definition:
"computer system, with different degrees of autonomy, designed to infer how to achieve a given set of objectives, using approaches based on machine learning and/or logic and knowledge representation, through input data from machines or humans, with the aim of producing predictions, recommendations or decisions that can influence the virtual or real environment."
2️⃣ Article 5 of the proposed bill sets out a list of rights for people affected by AI systems. It's similar to the approach of the GDPR (Articles 12-23: data subjects' rights), which we don't find in the EU AI Act (there are rights, but they are not listed in a dedicated article). Well done to the Brazilian lawmakers who took a firm stand on affected people's rights. Here's the list:
✅ "the right to prior information regarding interactions with artificial intelligence systems;
✅ the right to an explanation of the decision, recommendation or prediction made by artificial intelligence systems;
✅ the right to challenge decisions or predictions of artificial intelligence systems that produce legal effects or that significantly impact the interests of the affected party;
✅ the right to human determination and participation in decisions of artificial intelligence systems, taking into account the context and the state of the art of technological development;
✅ the right to non-discrimination and the correction of direct, indirect, illegal or abusive discriminatory biases; and
✅ the right to privacy and the protection of personal data, under the terms of the relevant legislation"
3️⃣ It adopts a risk-based approach similar to the EU AI Act's (including, for example, the categories of excessive risk and high risk). These AI systems are prohibited:
✅ "that employ subliminal techniques that have the purpose or effect of inducing a natural person to behave in a manner that is harmful or dangerous to their health or safety or against the foundations of this Law;
✅ that exploit any vulnerabilities of specific groups of natural persons, such as those associated with their age or physical or mental disability, in order to induce them to behave in a manner that is harmful to their health or safety or against the foundations of this Law;
✅ by the public authorities, to evaluate, classify or rank natural persons, based on their social behavior or personality attributes, by means of universal scoring, for access to goods and services and public policies, in an illegitimate or disproportionate manner."
👉 I'll continue the analysis in this week's AI Governance Professional Edition. Paid subscribers will receive it on Thursday. If you're not a paid subscriber, upgrade today to access exclusive, in-depth analyses every week.
🔮 The Future of AI Governance
I'm launching a new interview series with the leaders shaping the future of AI governance, and I'll publish it in this newsletter.
I'm especially interested in professionals leading AI governance teams or building AI governance products and services.
👉 Who should I interview? Please tag them (or yourself!) here.
🗣️ Step Into 2025 with a New Career Path
If you are dealing with AI-related challenges at work, don't miss our acclaimed live online AI Governance Training (now in its 16th cohort) and start 2025 ready to excel.
This January, we're offering a special intensive format for participants in Europe and Asia-Pacific: all 8 lessons (12 hours of live learning with me) condensed into just 3 weeks, allowing participants to catch up on recent developments and upskill.
✅ Our unique curriculum, carefully curated over months and constantly updated, focuses on the legal and ethical dimensions of AI governance, helping you elevate your career and stay competitive in this emerging field.
✅ Over 1,000 professionals from 50+ countries have benefited from our programs, and alumni consistently praise their experience: check out their testimonials. Students, NGO members, and people in career transition can request a discount.
✅ Are you ready? Register now to secure your spot before the cohort fills up:
*If this is not the right time, join our Learning Center to receive AI governance professional resources and updates on training programs and live sessions.
⚖️ FTC vs. AI Misrepresentation
Companies should not misrepresent what their AI systems can do: the FTC is taking action against a company that misrepresented its AI facial recognition system's accuracy and efficacy. Here's what happened, along with my comments:
➡️ The FTC filed a complaint against IntelliVision, a company selling facial recognition software used in home security systems and smart home touch panels. According to the complaint, IntelliVision claimed:
✅ That it had one of the highest accuracy rates on the market, free of gender or racial bias
╰─➤ The FTC stated that the company did not have evidence to support these claims;
✅ That it trained its facial recognition software on millions of faces
╰─➤ The FTC stated that the company trained its systems on images of ~100,000 unique individuals and then applied technology to create variants of those images;
✅ That its anti-spoofing technology ensured a photo or video image couldn't trick the system
╰─➤ The FTC stated the company lacked adequate evidence to back this claim.
➡️ The FTC is proposing a consent order. Under this order, the company will be prohibited from making misrepresentations about:
✅ "The accuracy or efficacy of its facial recognition technology;
✅ The comparative performance of the technology with respect to individuals of different genders, ethnicities, and skin tones;
✅ The accuracy or efficacy of the technology to detect spoofing."
➡️ AI hype is still at its peak, and many companies misrepresent what their AI systems or AI-powered functionalities can do in an attempt to "win" in this competitive field. This case shows that the FTC is watching.
👉 Hint to the FTC: I wonder how companies advertising AI companions can show concrete evidence that their AI chatbots "care" about the user or that they are "a friend" or "a companion for life"...
🏛️ AI Governance + Data Protection
Reminder: AI governance does not exist without data protection compliance.
👉 Join the discussion on LinkedIn, X/Twitter, or Bluesky.
🎬 AI Governance Binge-Watching
If you have free time in December, you can't miss my conversations with global AI experts. Hundreds of people watched each episode; here are the most recent ones (bookmark them to watch later!):
1️⃣ Taming Silicon Valley & Governing AI, with Gary Marcus: watch here
✅ Why watch: we discussed Gary's new book "Taming Silicon Valley: How We Can Ensure That AI Works for Us," focusing on Generative AI's most imminent threats and his thoughts on AI policy and regulation. We also talked about the EU AI Act, U.S. regulatory efforts, and the false choice, often pushed by Silicon Valley, between AI regulation and innovation.
2️⃣ AI Regulation Around the World, with Raymond Sun: watch here
✅ Why watch: we discussed the latest AI regulation developments in Australia, China, Egypt, India, Japan, Mexico, Nigeria, Singapore, Turkey, and the United Arab Emirates, the Brussels effect, and more.
3️⃣ Privacy Rights in the Age of AI, with Max Schrems: watch here
✅ Why watch: we discussed Meta's AI practices and noyb's recent complaints; the Brazilian Data Protection Authority's decision allowing Meta to train AI with Brazilian users' data, with restrictions; noyb's case against OpenAI; LLMs and GDPR compliance; legitimate interest; and more.
4️⃣ Insights on AI Governance, Compliance, and Regulation, with Barry Scannell: watch here
✅ Why watch: we discussed the unspoken challenges behind the EU AI Act, the main AI compliance issues, how startups can get ready, emotion recognition in the EU AI Act, regulating deepfakes, how lawyers should prepare for upcoming challenges, Barry's personal views, and more.
🎙️ This Week: Live Talk with Ifeoma Ajunwa
Whether you're an employer, an employee, or simply interested in AI, don't miss my conversation with Ifeoma Ajunwa, the last AI Governance Live Talk of 2024. We'll be diving into AI and the workplace:
✅ Ifeoma Ajunwa, JD, PhD, is an award-winning tenured law professor and author of the highly acclaimed book "The Quantified Worker." She is a Professor at Emory School of Law, the Founding Director of the AI and Future of Work Program, and a renowned expert in the ethical governance of workplace technologies.
✅ Among the topics we'll discuss in this session are:
- Worker surveillance, quantification, and exploitation;
- How existing AI applications in the workplace are making things worse;
- Existing policies and laws on AI in the workplace;
- How the EU AI Act approaches the topic;
- What we should advocate for;
- and more.
✅ As AI is ubiquitously deployed by employers, workers remain unprotected, and existing policies and laws might not be enough. I invite you to participate and to bring friends along to this fascinating live conversation this week!
👉 To join the live session, register here.
🎬 Find all my previous live conversations with privacy and AI governance experts on my YouTube Channel.
📚 AI Book Club: What Are You Reading?
📚 More than 1,900 people have already joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The 15th recommended book was "AI Snake Oil: What AI Can Do, What It Can't, and How to Tell the Difference" by Arvind Narayanan and Sayash Kapoor.
👉 Ready to discover your next favorite read? See our previous reads and join the book club here.
⚖️ Legal Challenges of AI Agents
If you are interested in AI, you can't miss last week's AI Governance Professional Edition, where I examined some of the main legal risks behind emerging AI agents and multi-agentic AI systems, and how they differ from what we have observed over the past two years of the Generative AI wave.
👉 Read the preview here. If you're not a paid subscriber, upgrade your subscription to access all previous and future analyses in full.
👥 Job Opportunities in AI Governance
Below are 5 new AI Governance positions posted in the last few days. This is a competitive field: if you see a relevant opportunity, apply today:
🇪🇺 Credo AI: AI Governance Success Manager - apply
🇨🇦 Autodesk: Technical AI Governance Specialist - apply
🇺🇸 Ascensus: Responsible AI Governance and Risk Leader - apply
🇺🇸 Capco: AI Governance SME - apply
🇬🇧 Abcam: Data and AI Governance Lead - apply
👉 More job openings: subscribe to our AI governance and privacy job boards to receive weekly job opportunities. Good luck!
🙏 Thank you for reading!
AI is more than just hype: it must be properly governed. If you found this edition valuable, consider sharing it with friends and colleagues to help spread awareness about AI policy, compliance, and regulation. Thank you!
Have a great day.
All the best, Luiza