👋 Hi, Luiza Jarovsky here. Welcome to the 105th edition of this newsletter on AI policy & regulation, read by 25,000+ subscribers in 130+ countries. I hope you enjoy reading it as much as I enjoy writing it.
⏰ Last call: the EU AI Act Bootcamp and the advanced program on Generative AI Legal Issues we are offering at the AI, Tech & Privacy Academy start next week. I'm your instructor in both training programs. Don't miss them!
🐘 The elephant in the room
The European Data Protection Board (EDPB) has just published its ChatGPT Taskforce Report, and there is a big elephant in the room. Let me explain:
➡️ On web scraping and "collection of training data, pre-processing of the data and training," the report recognizes that OpenAI relies on legitimate interest to collect and process personal data to train ChatGPT (OpenAI states that on a hidden page in its Help Center, as I've discussed previously). On that, the report says:
1. "It has to be recalled that the legal assessment of Article 6(1)(f) GDPR should be based on the following criteria: i) existence of a legitimate interest, ii) necessity of processing, as the personal data should be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed and iii) balancing of interests (...)"
and also
2. "As already stated by the former Article 29 Working Party, adequate safeguards play a special role in reducing undue impact on data subjects and can therefore change the balancing test in favor of the controller. While the assessment of the lawfulness is still subject to pending investigations, such safeguards could inter alia be technical measures, defining precise collection criteria and ensuring that certain data categories are not collected or that certain sources (such as public social media profiles) are excluded from data collection. Furthermore, measures should be in place to delete or anonymise personal data that has been collected via web scraping before the training stage."
➡️ What the EDPB is essentially saying is that scraping to train ChatGPT under legitimate interest might be lawful if technical measures such as the ones described above are implemented.
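To make the EDPB's abstract safeguards concrete, here is a minimal sketch of what "defining precise collection criteria," excluding certain sources, and redacting personal data before the training stage could look like in a scraping pipeline. The domain list, regex patterns, and function names are my illustrative assumptions, not OpenAI's actual implementation, and real-world personal-data detection is far harder than two regexes.

```python
import re

# Hypothetical exclusion list: sources the EDPB suggests excluding,
# e.g. public social media profiles.
EXCLUDED_DOMAINS = {"facebook.com", "x.com", "instagram.com"}

# Toy patterns for two common personal-data categories.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\+?\d[\d\s().-]{7,}\d"),    # phone-like numbers
]

def allowed_source(url: str) -> bool:
    """Collection criterion: skip excluded sources entirely."""
    return not any(domain in url for domain in EXCLUDED_DOMAINS)

def redact_pii(text: str) -> str:
    """Redact obvious personal data before the training stage."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def preprocess(pages: list[tuple[str, str]]) -> list[str]:
    """Filter scraped (url, text) pairs, then redact remaining PII."""
    return [redact_pii(text) for url, text in pages if allowed_source(url)]
```

Even in this toy form, the gap the report leaves open is visible: redaction of obvious identifiers is not the same as anonymization, which is exactly the problem discussed next.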
🚨 Problems:
➡️ If they are relying on legitimate interest, transparency obligations must be respected.
➵ Article 14(5)(b) of the GDPR says that in cases where it's not possible to inform data subjects directly about the processing of their data (such as in the context of scraping), "the controller shall take appropriate measures to protect the data subject’s rights and freedoms and legitimate interests, including making the information publicly available."
➵ ChatGPT's training data is not publicly available.
➵ Unlike with a search engine, data subjects cannot easily exercise basic rights, such as the right to be forgotten.
➡️ They are not anonymizing personal data, as the system still outputs information about people.
➡️ They are closing licensing deals with Reddit and other platforms whose content contains personal data. Clearly, they are not avoiding personal data, nor do they plan to.
🚨 Legitimate interest has been totally distorted here. Either the EDPB explains why legitimate interest works differently for OpenAI and other AI companies that rely on scraping to train AI, or it requires them to comply with legitimate interest as the GDPR defines it.
🔎 What happens when Search & Generative AI merge
As Google integrates Generative AI into its search engine, everybody should be aware of some of the downsides of "AI-powered search." I summarize them in 2 minutes; watch:
⚖️ Generative AI & Human Rights
The UN published the paper "Taxonomy of Human Rights Risks Connected to Generative AI," and it's a must-read for everyone in AI. Here's why:
➡️ The taxonomy supplements the UN B-Tech Project’s foundational paper on generative AI and examines how generative AI may negatively impact human rights (providing real-world examples for each right).
➡️ These are the rights covered:
➵ Freedom from Physical and Psychological Harm
➵ Right to Equality Before the Law and to Protection against Discrimination
➵ Right to Privacy
➵ Right to Own Property
➵ Freedom of Thought, Religion, Conscience and Opinion
➵ Freedom of Expression and Access to Information
➵ Right to Take Part in Public Affairs
➵ Right to Work and to Gain a Living
➵ Rights of the Child
➵ Rights to Culture, Art and Science
➡️ For each of these rights, the report covers:
➵ a summary of why the right is at risk from the development or use of generative AI;
➵ a selected list of key international human rights law articles pertaining to the right;
➵ a list of examples in which generative AI may threaten the right.
➡️ This is an extremely important report for anyone concerned with AI's real-world risks and harms.
🔥 AI Governance is HIRING
Below are twelve AI Governance positions posted LAST WEEK. Bookmark, share, and be an early applicant:
1. OpenAI: Associate General Counsel, AI Research - apply
2. Google: AI Protection Analyst - apply
3. Anthropic: Product Policy Manager, Cyber Threats - apply
4. TikTok: Public Policy Manager, AI Lead - apply
5. IBM: Lead, AI Advocacy - apply
6. Meta: Privacy Program Manager, AI Product - apply
7. Zoom: Senior Product Manager - AI and Data Management - apply
8. Amazon: Data Governance and Privacy Compliance Specialist - apply
9. Ascendion: Director of GenAI Strategy and Governance - apply
10. Vodafone: Responsible AI Manager - apply
11. AXA UK: AI Governance Lead - apply
12. Zurich Australia: AI Governance Lead - apply
➡️ For more job opportunities in AI governance and privacy, subscribe to our weekly job alert.
➡️ To upskill and land your dream AI governance job, check out our training programs in AI, tech & privacy. Good luck!
📋 OECD publishes the paper "AI, data and competition"
As the FTC investigates Generative AI investments & partnerships, the OECD has published a timely new paper: "AI, data and competition." An important quote:
"Further, competition authorities and policy makers should not ignore developments in AI. As a potentially revolutionary technology, the stakes are too high not to give competition every chance. Despite being too early to know how competition will develop, there do appear to be several risks that could emerge. These could be difficulties in accessing key inputs, which could be exacerbated through firm conduct over time, including through acquisitions or vertical and adjacent linkages with existing markets. Most notably, links between existing providers of cloud computing infrastructure services or within existing digital ecosystems, may present situations in which competitors struggle to access requisite data, compute or end users. An important factor regarding specific data’s effect on competition within generative AI appears to be the extent to which the marginal benefit of training models using it can be replicated"
🎬 AI Act: Challenges, Opportunities, and Practical Insights
You can't miss my recent conversation with Gianclaudio Malgieri, Luca Bertuzzi & Risto Uuk on the AI Act. Watch the full video.
🦺 New report on AI safety
The International Scientific (interim) Report on the Safety of Advanced AI is out. It contains insights from 75 AI experts, including an international Expert Advisory Panel nominated by 30 countries, the EU, and the UN. It was published to inform the discussions in the context of the AI Seoul Summit 2024. Below are some highlights of the executive summary:
➵ "If properly governed, general-purpose AI can be applied to advance the public interest, potentially leading to enhanced wellbeing, more prosperity, and new scientific discoveries. However, malfunctioning or maliciously used general-purpose AI can also cause harm, for instance through biased decisions in high-stakes settings or through scams, fake media, or privacy violations.
➵ As general-purpose AI capabilities continue to advance, risks such as large-scale labour market impacts, AI-enabled hacking or biological attacks, and society losing control over general-purpose AI could emerge, although the likelihood of these scenarios is debated among researchers. Different views on these risks often stem from differing expectations about the steps society will take to limit them, the effectiveness of those steps, and how rapidly general-purpose AI capabilities will be advanced.
➵ There is considerable uncertainty about the rate of future progress in general-purpose AI capabilities. Some experts think a slowdown of progress is by far most likely, while other experts think that extremely rapid progress is possible or likely.
➵ There are various technical methods to assess and reduce risks from general-purpose AI that developers can employ and regulators can require, but they all have limitations. For example, current techniques for explaining why general-purpose AI models produce any given output are severely limited.
➵ The future of general-purpose AI technology is uncertain, with a wide range of trajectories appearing possible even in the near future, including both very positive and very negative outcomes. But nothing about the future of AI is inevitable. It will be the decisions of societies and governments that will determine the future of AI. This interim report aims to facilitate constructive discussion about these decisions."
🎤 [You're Invited] Live session on AI assistants
Join our free live session this Thursday. As AI development continues at high speed and AI assistants become more advanced, important legal and ethical implications must be discussed. In this session, I'll share my thoughts on the current state of AI assistants and my main concerns. The session is open to everyone, so feel free to forward this to colleagues. 👉 Register here
Reminder: Upcoming Bootcamps ⏰
Generative AI Legal Issues: Advanced Program
🗓️ Mondays, June 3 to 24, 10am PT / 6pm UK time
👉 Register here
The EU AI Act Bootcamp
🗓️ Thursdays, June 6 to 27, 10am PT / 6pm UK time
👉 Register here
Emerging Challenges in Privacy, Tech & AI
🗓️ Wednesdays, July 17 to Aug 7, 10am PT / 6pm UK time
👉 Register here
📩 To receive our AI, Tech & Privacy Academy weekly emails with learning opportunities, subscribe to our Learning Center.
I hope to see you there!
🙏 Thank you for reading!
If you have comments on this week's edition, write to me, and I'll get back to you soon.
To receive the next editions in your email, subscribe here.
Have a great day.
Luiza