🕹️ LLMs Don't Respect Privacy
The Latest Developments in AI Governance | Free Edition | #180
Hi, Luiza Jarovsky here. Welcome to our 180th edition, read by 56,000+ subscribers worldwide. This is a free weekly roundup of the latest developments in AI governance. It's great to have you on board!
👉 On Sundays, paid subscribers receive my critical analysis of emerging AI governance challenges. If you're not a paid subscriber yet, here's what you missed in the past few weeks:
1. AI Is Dehumanizing the Internet
2. Manus AI: Why Everyone Should Worry
3. Can Humans Really Oversee AI?
4. Quantum Computing Governance
5. Prohibited AI Practices in the EU
For more AI governance resources: Book Club | Live Talks | Job Board | Learning Center | Training Program
👉 A special thanks to TrustWorks for sponsoring this week's free edition of the newsletter. Check it out:
With the first provisions of the EU AI Act now being enforced, TrustWorks clients are demanding solutions to streamline AI Governance efficiently, just as they already benefit from best practices, enhanced collaboration, and automation in Data Protection. Discover how the TrustWorks Privacy and AI Governance platform can support your organization.
*Promote your product to 56,000+ readers: Sponsor us (Next spot: Aug. 6)
🕹️ LLMs Don't Respect Privacy
One of the first articles I wrote about Generative AI and LLMs, back in 2022, highlighted how they are a privacy nightmare and how unlikely it would be to make them GDPR-compliant.
At the time, many people responded to my articles, arguing that:
AI hallucinations would soon be fixed, making major data protection challenges irrelevant;
People do not, and should not, use LLM-powered systems as search engines.
More than two years have passed, and both issues remain problematic. In fact, the second one has escalated:
AI hallucinations continue to be a persistent issue in LLM-powered AI applications;
Not only do people use AI chatbots as search engines, but companies like Google are now integrating search and chatbots (as I discussed in my latest deep dive), further amplifying LLM privacy issues.
As a result, as I often tell participants in my AI Governance Training, most of the data protection challenges I raised two years ago remain, including:
Privacy principles such as data minimization, purpose limitation, accuracy, and storage limitation are very difficult to implement in LLM-based AI systems, including AI chatbots like ChatGPT;
Data subject rights are challenging, if not impossible, to apply and enforce;
The lawfulness of processing and legitimate interest remain 'elephants in the room' even after EDPB Opinion 28/2024, as I've written about extensively in this newsletter.
Last year, in my second live talk with Max Schrems, on Privacy Rights in the Age of AI, we explored many of these issues. (It's an excellent one-hour conversation; don't miss it.)
Speaking of Max Schrems: today, his privacy non-profit Noyb filed a second GDPR complaint against OpenAI over infringement of the accuracy principle, Article 5(1)(d) of the GDPR.
This time, the complaint involves a Norwegian user who asked ChatGPT for information about himself. The chatbot invented a false story that portrayed him as a criminal, in a typical case of AI "hallucination."
In its complaint to the Norwegian Data Protection Authority, Noyb argues that OpenAI violated the principle of data accuracy and asks the authority to:
Order OpenAI to delete the defamatory output;
Require OpenAI to fine-tune its model to eliminate inaccurate results;
Impose an administrative fine to prevent similar violations in the future.
My thoughts on the intersection of AI and data protection remain the same:
It's unclear whether AI "hallucinations" can ever be resolved;
It's unclear whether LLM-powered AI systems will ever be able to comply with the accuracy principle and other GDPR-based rules, rights, and principles;
OpenAI and other AI companies will keep having a hard time in the EU.
As always, I'll keep you posted!
💼 AI Governance Training: Spring Cohort
Facing AI-related challenges at work? Registration for the 20th cohort of my AI Governance Training is open. (The April cohort sold out a month early.)
Starting in early May, this training includes 15 hours of live sessions with me, curated self-study materials, quizzes, a one-year paid subscription to this newsletter, a networking session with peers, a training certificate, and 13 CPE credits.
I designed this program to help participants critically understand AI's legal and ethical challenges, going beyond standard certifications and checklists. I have already trained more than 1,100 professionals, and you can read some of their testimonials here.
Cohorts are limited to 30 participants and fill up fast. Secure your spot:
*We offer discounts for students, NGO members, and individuals in career transition. To apply, fill out this form.
💪 China's New AI Law
China has enacted a new law with transparency measures for AI-generated content, set to enter into force in September 2025.
Surprisingly, this law is more detailed than the EU AI Act's provisions on the topic. Other countries should take note! Key obligations include:
"I. Adding text prompts or general symbol prompts or other signs at the beginning, end, or appropriate position in the middle of the text, or adding prominent prompt signs in the interactive scene interface or around the text;
II. Adding voice prompts or audio rhythm prompts or other signs at the beginning, end, or appropriate position in the middle of the audio, or adding prominent prompt signs in the interactive scene interface;
III. Add prominent warning signs at appropriate locations on the images;
IV. Add prominent warning signs at the beginning of the video and at appropriate locations around the video. Prominent warning signs may be added at appropriate locations at the end and in the middle of the video.
V. When presenting a virtual scene, a prominent reminder logo shall be added at an appropriate location on the starting screen, and a prominent reminder logo may be added at an appropriate location during the continuous service of the virtual scene;
VI. Other generated synthetic service scenarios shall add prominent prompt signs based on their own application characteristics.
When service providers provide functions such as downloading, copying, and exporting generated synthetic content, they should ensure that the files contain explicit identification that meets the requirements."
"When reviewing applications for listing or online release, internet application distribution platforms shall require internet application service providers to state whether they provide AI-generated synthesis services. If they do, the platforms shall verify the materials related to the identification of their generated synthetic content."
"Service providers shall clearly state the methods, styles, and other specifications for labeling generated synthetic content in the user service agreement, and prompt users to carefully read and understand the relevant label management requirements."
👉 Eliminate Choice, Disempower & Replace
👉 AI governance is essential to protect fundamental rights. To learn more, read my latest deep dive on the topic: AI Is Dehumanizing the Internet.
👉 Share your thoughts on LinkedIn, X, or in the comments section below.
⚖️ AI-Generated Works and Copyright
The U.S. Court of Appeals for the D.C. Circuit has rejected copyright protection for AI-generated works that lack a human author, ruling that human authorship is a requirement under the Copyright Act. Here is a quick summary of the case:
"In this case, a computer scientist attributes authorship of an artwork to the operation of software. Dr. Stephen Thaler created a generative artificial intelligence named the 'Creativity Machine.' The Creativity Machine made a picture that Dr. Thaler titled 'A Recent Entrance to Paradise.' Dr. Thaler submitted a copyright registration application for 'A Recent Entrance to Paradise' to the United States Copyright Office. On the application, Dr. Thaler listed the Creativity Machine as the workâs sole author and himself as just the workâs owner.
The Copyright Office denied Dr. Thalerâs application based on its established human-authorship requirement. This policy requires work to be authored in the first instance by a human being to be eligible for copyright registration. Dr. Thaler sought review of the Officeâs decision in federal district court and that court affirmed.
We affirm the denial of Dr. Thalerâs copyright application. The Creativity Machine cannot be the recognized author of a copyrighted work because the Copyright Act of 1976 requires all eligible work to be authored in the first instance by a human being. Given that holding, we need not address the Copyright Officeâs argument that the Constitution itself requires human authorship of all copyrighted material. Nor do we reach Dr. Thalerâs argument that he is the workâs author by virtue of making and using the Creativity Machine because that argument was waived before the agency."
🎬 The Global AI Race: Regulation and Power
As the AI race heats up, you can't miss my one-hour talk with Anu Bradford, the scholar who coined the term "Brussels Effect." We explored the intersection of the Brussels Effect, AI regulation, and the global AI race. Watch it here:
👉 Watch all my previous AI governance talks on my YouTube Channel.
👉 OpenAI's AI Policy Proposal
OpenAI has published an AI policy proposal urging the Trump administration to adopt a more nationalistic and growth-oriented approach to the AI race. Read what it says about DeepSeek and the threat posed by China:
"In advancing democratic AI, America is competing with a CCP determined to become the global leader by 2030.
That's why the recent release of DeepSeek's R1 model is so noteworthy—not because of its capabilities (R1's reasoning capabilities, albeit impressive, are at best on par with several US models), but as a gauge of the state of this competition.
As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm.
And because DeepSeek is simultaneously state-subsidized, state-controlled, and freely available, the cost to its users is their privacy and security, as DeepSeek faces requirements under Chinese law to comply with demands for user data and uses it to train more capable systems for the CCP's use.
Their models also more willingly generate how-to's for illicit and harmful activities such as identity fraud and intellectual property theft, a reflection of how the CCP views violations of American IP rights as a feature, not a flaw. (...)
While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing.
The AI Action Plan should ensure that American-led AI prevails over CCP-led AI, securing both American leadership on AI and a brighter future for all Americans."
💼 India's AI Competency Framework
India has published its AI Competency Framework for public sector officials, and it's an excellent initiative toward targeted AI literacy efforts. Other countries should take note. Here are the goals and objectives of the framework:
Provide a foundational understanding of AI, including its core functionalities and limitations.
Define behavioural, functional, and domain-specific competencies required for public sector officials.
Enhance awareness of emerging AI technologies and their implications for government services.
Identify opportunities to integrate AI for improved efficiency and service delivery.
Enable informed policymaking and regulatory oversight.
Develop targeted training and capacity-building programs.
Establish a structured approach to career progression and performance evaluation within government roles.
👉 Thousands of people have joined our Learning Center and receive our emails with must-read papers and additional resources:
📚 AI Book Club: What Are You Reading?
We've recently announced our 18th recommended book: "Your Face Belongs to Us: A Tale of AI, a Secretive Startup, and the End of Privacy," by Kashmir Hill.
👉 See the full book list and join 2,400+ readers who never miss our book recommendations:
🏭 The EU Announced New AI Factories
Most people are unaware that the EU aims to become a leader in AI, not just in AI regulation. At the core of its strategy are AI Factories. Can it catch up with the U.S. and China?
First, what are AI Factories? According to the EU Commission:
"AI Factories will bring together the key ingredients that are needed for success in AI: computing power, data, and talent. They will provide access to the massive computing power that start-ups, industry and researchers need to develop their AI models and systems. For example, European large language models or specialised vertical AI models focusing on specific sectors or domains."
This week, the EU announced a second wave of AI Factories. The six new AI Factories will join the seven existing ones launched in December (see the map below for the countries involved). Here's the announcement:
"Austria, Bulgaria, France, Germany, Poland, and Slovenia will host the newly selected AI Factories, supported by a combined national and EU investment of around âŦ485 million.
The factories will offer privileged access to AI startups and small-and-medium sized enterprises (SMEs), fostering growth and more effective scaling up.AI Factories are a core pillar of the Commissionâs strategy for Europe to become a leader in AI, bringing together 17 Member States and two associated EuroHPC participating states.
The infrastructure and services provided by AI Factories are essential for unlocking the full potential of the sector in Europe.
Backed by the EU's world-class network of supercomputers, these factories will bring together the key ingredients for AI innovation: computing power, data, and talent.
This will enable AI companies, particularly SMEs and startups, as well as researchers, to enhance the training and development of large-scale, trustworthy and ethical AI models.
As announced by President von der Leyen at the AI Action Summit in Paris, the InvestAI initiative aims to mobilise up to €200 billion of European investments in AI.
This will include the deployment of several AI Gigafactories across Europe, which will be massive high-performance computing facilities designed to develop and train next-generation AI models and applications."
📚 AI Literacy Packages
As I wrote in a recent edition of this newsletter, AI literacy is both a legal obligation, as in the EU AI Act, and a professional necessity.
To help teams streamline their AI literacy efforts, we offer two packages focused on the legal and ethical challenges of AI:
Team Subscription: Your entire team will receive my weekly analyses on emerging challenges in AI, exclusive to paid subscribers;
AI Governance Training: Groups of 3 or more participants from the same company receive a discounted rate on my 15-hour training program. Request your discount here.
💼 Looking for a Job in AI Governance?
I've curated the list below with 40 job opportunities in AI governance, all posted within the last few days:
👉 Each week, we send job seekers an email alert with new job openings in AI governance and privacy. Increase your chances by exploring our global job board and subscribing to our free weekly alerts:
If you're finding this newsletter valuable, share it with colleagues, and consider subscribing if you haven't already. There are team subscriptions, group discounts, gift options, and referral bonuses available.
Thank you, and see you soon!
Luiza