👋 Hi, Luiza Jarovsky here. Welcome to our 166th edition, read by 50,000+ subscribers in 160+ countries. Not a subscriber yet? Join us.
🌎 We are a leading AI governance publication helping to shape the future of AI policy, compliance & regulation. It's great to have you here!
👉 A special thanks to NowSecure for sponsoring this week's free edition of the newsletter. Check them out:
The core of AI governance is knowing where and how AI is being used. While organizations remain liable for data security, they might not be aware that confidential data is being transmitted to AI models in the cloud due to the apps' opaque supply chains. NowSecure exposes the AI that's been lurking in the shadows of your organization: learn more here.
*Promote your AI governance or privacy solution to 50,000+ readers: sponsor this newsletter
🐋 The DeepSeek Effect
We were still getting used to the new hype—AI agents—when DeepSeek took over the headlines, and now it seems to be all we can talk about. Today, I want to discuss the “DeepSeek Effect,” or the three main ways in which it is transforming the global AI ecosystem.
For those who’ve been offline in the last few days: DeepSeek is a China-based AI company that recently released DeepSeek-R1, a model that openly rivals OpenAI's o1 but required far fewer resources and significantly less money to train. Another not-so-small detail: on Monday, Nvidia's and Broadcom's shares each dropped 17% (both companies produce AI chips), wiping out roughly $800 billion in market cap.
In this context, the first part of the DeepSeek Effect is that it defies the mainstream paradigm that better AI models must always be bigger, more resource-intensive, and more expensive. With far fewer resources, less infrastructure, and less money, DeepSeek trained a model that many say performs very similarly to OpenAI's o1.
This effect becomes even more prominent when we remember that just last week, OpenAI CEO Sam Altman, standing beside President Trump at the White House, announced a $500 billion investment in AI (read more about Project Stargate below). Is all that money truly necessary to secure American leadership in the AI race, or is it just for show?
The second part of the DeepSeek Effect is that the company made it even clearer that the AI race has nothing to do with models and parameters and everything to do with geopolitics and national defense. Why do I say that?
When DeepSeek started taking over the headlines, many paid attention to the company's privacy policy, which states that it collects, among other information, users' keystroke patterns. It also says this information may be stored on servers located in China. Many feel uneasy about that, especially in the U.S. (the TikTok saga shows why). The U.S. Navy, for example, issued a warning to its members to avoid using DeepSeek “in any capacity” due to “potential security and ethical concerns.”
Additionally, yesterday, the Italian Data Protection Authority asked DeepSeek to confirm what personal data it collects, from which sources, for what purposes, and, not so subtly, whether it is stored on servers located in China.
Lastly, the third aspect of the DeepSeek Effect is the extreme competitive pressure it has put on other AI companies, especially American ones. Competition is good, and maybe DeepSeek is the competitor OpenAI needed to increase its innovative potential.
In this context, today, OpenAI stated that there is evidence that DeepSeek distilled knowledge from OpenAI's models, breaching its terms of use and infringing on its intellectual property. If you have read this newsletter over the past two years, you know OpenAI faces a growing number of AI copyright lawsuits from creators and media companies. Intellectual property has never seemed to be its great passion, which makes this complaint look, at the very least, hypocritical.
For me, beyond the legal aspect, OpenAI seems bothered on three fronts:
ego (“you trained your model based on ours, and now everybody only talks about you”);
economic pressure (“you probably destroyed our market value when you trained a powerful model with a fraction of the cost”);
nationalism (“we don't want the best AI models to be Chinese”).
Will things calm down a bit in the coming weeks? 2025 has barely started, and the AI newsfeed is already out of control.
👉 On Sunday, paid subscribers will receive a must-read edition on emerging challenges in AI governance: don't miss it!
📈 AI Governance Careers Are Surging: Get Ahead
I invite you to join the 18th cohort of our AI Governance Training and gain the skills to meet the emerging challenges in the field. This 12-hour live online program goes beyond standard certifications and offers:
8 live sessions with me, curated self-study materials, and quizzes;
A training certificate and 13 CPE credits pre-approved by the IAPP;
A networking session with peers and office hours for career-related questions.
Don't miss it: join 1,100+ professionals who have advanced their careers with us:
*If cost is a concern, we offer discounts for students, NGO members, and individuals in career transition. To apply, fill out this form.
🥊 OpenAI vs. DeepSeek
OpenAI states that there is evidence that DeepSeek distilled knowledge from OpenAI's models, breaching its terms of use and infringing on its intellectual property. Here's what everyone in AI should know:
1. What is knowledge distillation in AI?
According to IBM: "Knowledge distillation is a machine learning technique that aims to transfer the learnings of a large pre-trained model, the 'teacher model,' to a smaller 'student model.' It’s used in deep learning as a form of model compression and knowledge transfer, particularly for massive deep neural networks.
The goal of knowledge distillation is to train a more compact model to mimic a larger, more complex model. Whereas the objective in conventional deep learning is to train an artificial neural network to bring its predictions closer to the output examples provided in a training data set, the primary objective in distilling knowledge is to train the student network to match the predictions made by the teacher network."
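➡️ To make the mechanics concrete, below is a minimal sketch of classic teacher-student distillation. This is my illustrative example, not DeepSeek's or OpenAI's actual pipeline: it uses PyTorch with toy networks and random placeholder data, and simply trains a small "student" to match the softened output probabilities of a larger, frozen "teacher."

```python
# Minimal, illustrative knowledge-distillation sketch (toy models and random
# data as placeholders; not any company's actual training pipeline).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Large, frozen "teacher" and small "student" networks (hypothetical sizes).
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens probabilities so the student sees relative likelihoods

for step in range(100):
    x = torch.randn(64, 32)  # stand-in for real training inputs

    with torch.no_grad():          # the teacher only provides targets
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions;
    # scaling by temperature**2 keeps gradient magnitudes comparable.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design choice is that the student learns from the teacher's full probability distribution rather than from original labels. If the allegation is accurate, something analogous would have been done at arm's length, using OpenAI's API Output as the teacher signal, which is exactly what the terms quoted below restrict.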
2. What do OpenAI's terms of use say?
"What you cannot do. You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not:
- Use our Services in a way that infringes, misappropriates or violates anyone’s rights.
- Modify, copy, lease, sell or distribute any of our Services.
→ Attempt to or assist anyone to reverse engineer, decompile or discover the source code or underlying components of our Services, including our models, algorithms, or systems (except to the extent this restriction is prohibited by applicable law).
- Automatically or programmatically extract data or Output (defined below).
- Represent that Output was human-generated when it was not.
- Interfere with or disrupt our Services, including circumvent any rate limits or restrictions or bypass any protective measures or safety mitigations we put on our Services.
→ Use Output to develop models that compete with OpenAI."
➡️ If OpenAI manages to prove the distillation, it could amount to a violation of its terms of use, and OpenAI might take legal action against DeepSeek. If that happens, expect a long and challenging litigation process, complicated by the fact that OpenAI is based in the U.S. and DeepSeek in China.
➡️ What wasn’t on your 2025 bingo card was OpenAI becoming an advocate for intellectual property rights, right?
🇮🇹 Italian DPA vs. DeepSeek
The Italian Data Protection Authority (DPA) has requested information from DeepSeek, citing potential risks to the data of millions of people in Italy. It seems DeepSeek might not last long in the EU. Full release in English:
"The Guarantor for the protection of personal data has sent a request for information to Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, the companies that provide the DeepSeek chatbot service, both on the web platform and on the App.
The Authority, considering the potential high risk for the data of millions of people in Italy, has asked the two companies and their affiliates to confirm which personal data are collected, from which sources, for which purposes, what is the legal basis of the processing, and whether they are stored on servers located in China.
The Guarantor also asked companies what kind of information is used to train the artificial intelligence system and, in the event that personal data is collected through web scraping activities, to clarify how users registered and those not registered to the service have been or are informed about the processing of their data.
Within 20 days, companies must provide the Authority with the requested information."
🚀 Daily AI Governance Resources
Thousands of people receive our daily emails with educational and professional resources on AI governance, along with updates on our live sessions and training programs. Join our learning center:
📜 Trump Signed an Executive Order on AI
After revoking Biden's Executive Order on AI, Trump signed a new one. Here's what it says:
1. Purpose
"The United States has long been at the forefront of AI innovation, driven by the strength of our free markets, world-class research institutions, and entrepreneurial spirit. To maintain this leadership, we must develop AI systems that are free from ideological bias or engineered social agendas. With the right Government policies, we can solidify our position as the global leader in AI and secure a brighter future for all Americans.
This order revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence."
2. Policy
"It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security. (...)"
3. Definition
"For the purposes of this order, 'artificial intelligence' or 'AI' has the meaning set forth in 15 U.S.C. 9401(3)."
4. Developing an Artificial Intelligence Action Plan
"(a) Within 180 days of this order, the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director), and the heads of such executive departments and agencies (agencies) as the APST and APNSA deem relevant, shall develop and submit to the President an action plan to achieve the policy set forth in section 2 of this order."
🎙️ Join my Live Talk with Anu Bradford
The year has barely begun, and the global AI race is at full speed. What will this mean for AI regulation and governance? Here's why you can't miss my next live talk:
Anu Bradford is a professor of law and international organizations at Columbia University and a leading scholar on global economy and digital regulation. She coined the term 'Brussels Effect'―often discussed in the context of AI regulation―and published a book with the same name.
More recently, Bradford published the book "Digital Empires: The Global Battle to Regulate Technology," where she explores the global battle among the three dominant digital powers―the U.S., China, and the EU―and the choices we face as societies and individuals.
In this live talk, we'll discuss:
How the U.S., the EU, and China are strategically positioning themselves in the global AI race;
How the three dominant digital powers are approaching AI regulation and the practical implications of each approach;
The Brussels effect in the context of AI regulation;
and more.
👉 To participate, register here.
🎬 Find all my previous live conversations with privacy and AI governance experts on my YouTube Channel.
📚 AI Book Club: What Are You Reading?
📖 More than 2,100 people have already joined our AI Book Club and receive our bi-weekly book recommendations.
📖 The 16th recommended book was “The Tech Coup: How to Save Democracy from Silicon Valley” by Marietje Schaake.
📖 Ready to discover your next favorite read? See our previous reads and join the AI book club:
⚖️ CharacterAI Denies Legal Claims
CharacterAI answered the lawsuit filed by the teenage victim's mother and denied the allegations. This is a major AI liability case, and everyone in AI should pay attention. CharacterAI's main arguments & my comments:
A. The First Amendment Bars All Plaintiff’s Claims
- Plaintiff’s claims challenge expressive speech
- Tort liability would violate the public’s right to receive speech
- No exceptions to the First Amendment apply
B. The Product Liability Claims Fail
- CharacterAI is a service, not a product
- The alleged harms flow from intangible content
C. The Negligence-Based Claims Fail
- No special relationship
- No physical custody and control
- The Court should decline to expand state tort liability for expressive content
D. Plaintiff’s Other Claims Fail Under Florida Law
- Negligence per se (must be dismissed)
- Unjust enrichment (must be dismissed)
- Wrongful and survivor action (must be dismissed)
Conclusion: "CharacterAI requests that Plaintiff’s claims be dismissed."
➡️ I find this part legally tricky:
"The alleged harms flow from intangible content. Courts have consistently held that product liability law does not extend to harms from intangible content—even if purveyed in a tangible medium, such as a book, cassette, or “electrical pulses through the internet.” (...). Plaintiff’s product liability claims fail this standard because they are explicitly based on the words exchanged in S.S.’s conversations with Characters. (...) Plaintiff may contend that her claims challenge tangible “design choices,” analogizing to cases like T.V. v. Grindr, LLC, (...), and Brookes v. Lyft, (...). The product liability claims in those cases were 'not based on expressions or ideas' transmitted to or from users of the platforms."
➡️ My comments:
→ CharacterAI denies that it's a product, but its interface design choices directly affect the level of transparency and protection available to users;
→ Having analyzed CharacterAI's user interface, I would argue it has clearly failed to provide enough transparency and guardrails through its service's interface design. In cases involving AI chatbots, interface design should be considered an integral part of the service;
→ CharacterAI should be fully responsible for its design choices and the lack of user protection. For me, some of its design practices resembled dark patterns (such as the low contrast between the small, red-colored warning text at the top and the dark background of the main interface);
→ User protection, not only through code-level guardrails but also through UX design, should be mandatory, especially when so many vulnerable people are using these AI chatbots.
📢 Spread AI Governance Awareness
Enjoying this edition? Share it with friends and colleagues:
🇺🇸 Global AI Race: Project Stargate
A group of American companies announced a $500 billion investment in AI to secure American leadership. Here's what you need to know:
→ The project is called "Stargate." It was announced as a new company that plans to invest $500 billion over the next four years to build new AI infrastructure in the United States.
→ Stargate will start by building data centers and other AI-related infrastructure in Texas; construction is already underway, with 10 data centers being built so far.
→ The initial investment is expected to be $100 billion and to grow in the coming years. SoftBank, OpenAI, Oracle, and MGX will be the initial equity funders in Stargate. Other project partners include Microsoft, Arm, and NVIDIA.
→ According to OpenAI CEO Sam Altman: "This will be the most important project of this era (...) for AGI [artificial general intelligence] to get built here, to create hundreds of thousands of jobs, to create a new industry centered here."
→ It was also announced that hundreds of thousands of jobs are expected to be created, though no details were provided on what types of jobs (?).
→ There seem to be high expectations for AI-powered advancements in medicine. Perhaps some inflated hype? Sam Altman said: "As this technology progresses, we will see diseases get cured at an unprecedented rate. We'll be amazed at how quickly we're curing this cancer, and that one, and heart disease, and what this will do to the ability to deliver very high-quality healthcare, the costs, but really the cure of the diseases at a rapid, rapid rate. I think it will be among the most important thing this technology does."
💬 Algorithmic Speech
The paper "Speech Certainty: Algorithmic Speech and the Limits of the First Amendment," by Mackenzie Austin and Max Levy, is a must-read for lawyers and legal scholars interested in AI. Key quotes:
"In this Article, we make two arguments. First, that the principle of speech certainty defines the limits of the First Amendment. And second, because machine learning algorithms run afoul of this principle, their output is not speech within the meaning of the First Amendment and thus falls outside its protection."
"The speech certainty principle is the simple idea that if you don’t know what you’re saying when you say it, then whatever you said isn’t your “speech” within the meaning of the First Amendment. At the Founding, when only oral, written, and printed speech was possible, all speech necessarily fit within that understanding. Since then, communications technology evolved, giving way to the telegraph, radio, television, and the internet. But although speech could now be transmitted across vast distances, instantaneously and en masse, the speech certainty principle held. In any medium, the speaker always knew what she said when she said it. Over centuries, this elemental feature of speech therefore revealed itself as a cornerstone of First Amendment jurisprudence, most recently in the doctrines of editorial discretion and expressive conduct."
"In Moody v. Netchoice, the Supreme Court tentatively ventured that 'some platforms, in at least some functions, are indeed engaged in expression.' But it expressly and repeatedly put an asterisk on that conclusion: it was based only on the existing, undeveloped record. Further development of that record will show in fact what this Article has explored in theory: that the machine learning models on which social-media platforms rely to rank, recommend, and remove content on their feeds does not match the definition of editorial discretion as the Court articulated in Moody. Instead, it will show that the platforms can never be certain that the content published by those models will align with what they intended to publish. In fact, because these probabilistic machine learning models will always be wrong at least some of the time, it is guaranteed that the platforms will publish precisely what they intended not to publish. In other words, the platforms’ algorithmic output lacks speech certainty, and thus doesn’t qualify as “speech” within the meaning of the First Amendment.”
🔥 Job Opportunities in AI Governance
Below are 10 new AI Governance roles posted in the last few days. This is a competitive field: if it's a relevant opportunity, apply today:
🇺🇸 Contentful: Senior Legal Counsel, AI Governance & IP - apply
🇺🇸 Sony Interactive Entertainment: Director, AI Governance - apply
🇬🇧 SBS: Data & AI Governance Lead - apply
🇮🇹 Generali: Data & AI Governance Specialist - apply
🇺🇸 Stryker: AI Governance Engineer - apply
🇮🇹 EY: AI Governance - apply
🇺🇸 Relyance AI: Software Engineer, AI Governance - apply
🇺🇸 Oxy: AI Ethics & Risk Manager - apply
🇳🇱 Deeploy: Responsible AI Strategy Lead - apply
🇬🇧 GSMA: Senior Manager, Responsible AI - apply
🔔 More job openings: Join thousands of professionals who subscribe to our AI governance & privacy job board and receive weekly job opportunities.
💡 Keep the Conversation Going
Thank you for reading! If you enjoyed this edition, here's how you can continue the conversation:
1️⃣ Help spread AI governance awareness
→ Start a discussion on social media about this edition's topic;
→ Share this edition with friends, adding your critical perspective;
2️⃣ Go beyond
→ Stay ahead in AI: upgrade your subscription and start receiving my exclusive weekly analyses on emerging challenges in AI governance;
→ Looking for an authentic gift? Surprise them with a paid subscription.
3️⃣ For organizations
→ Teams promoting AI literacy can purchase 3+ subscriptions here at a discount and 3+ seats in our AI Governance Training here;
→ Companies offering AI governance or privacy solutions can sponsor this newsletter and reach thousands of readers. Fill out this form to get started.
See you soon!
Luiza
I have to call out the piece on speech certainty: only lawyers and policy analysts would charge in, full of certainty, where epistemologists would hesitate.
Even if we fully understood how our own brains produce thought and speech, which we don't, the argument only holds if you treat the AI's output as the speech of its owners. If you do, then yes, the fact that it doesn't say what they want it to say might make it fail some definition of speech.
But that misses the point: the fact that it doesn't say what its owners want it to say, and can be unpredictable, is precisely why it's not its owners' speech but its own. An AI might not meet any definition of sentience (yet?), but this argument boils down to "we can't grant free speech to these machines, because they're acting too human to be considered extensions of their owners."