Luiza's Newsletter

The EU Template for AI Models · UK-OpenAI Partnership · And More
My weekly AI governance curation to help you stay ahead | Edition #221

Luiza Jarovsky, PhD
Jul 27, 2025

👋 Hi everyone, Luiza Jarovsky here. Welcome to our 221st edition, featuring my weekly curation of essential papers, reports, news, and ideas on AI governance, now reaching over 70,400 subscribers in 170 countries. It's great to have you on board! To upskill and advance your career:

  • AI Governance Training: Apply for a discount here

  • Learning Center: Receive free AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • AI Book Club: Discover your next read in AI and beyond


🎓 Join the 23rd cohort in September

If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, I invite you to join the 23rd cohort of my 15-hour live online AI Governance Training, starting in September.

Cohorts are limited to 30 people, and over 1,200 professionals have already participated. Many described the experience as transformative and an important step in their career growth. Apply for a discounted seat here.



The EU Template for AI Models · UK & OpenAI Partnership · And More

This week's essential news, papers, reports, and ideas on AI governance:

1. The news you can't miss:

  • The EU published the template for the mandatory summary of the content used for AI model training. The summary, which must be made publicly available, is meant to increase transparency and help ensure compliance with copyright, data protection, and other laws.

  • OpenAI and the UK have agreed to a voluntary, non-legally binding partnership on AI to support the UK's goal of 'building sovereign AI in the UK.' Pay attention to how it treats AI as an end, not as a means.

  • Singapore has developed Southeast Asian Languages in One Network (SEA-LION), a family of open-source LLMs designed to better capture Southeast Asia's languages and cultures. Multilingualism has been fueling the new AI nationalism.

  • Trump's AI Action Plan restricts the enforcement power of the Federal Trade Commission (FTC) against AI companies, and the FTC has already begun deleting articles that criticized abusive AI practices.

  • Elon Musk: "We’re going to make Baby Grok xAI, an app dedicated to kid-friendly content." After Grok praised Hitler and made sexualized anime bots available to kids, Musk wants to start a 'kid-friendly AI.' Children shouldn't use AI unsupervised. Here's what every parent should know.

  • The search engine DuckDuckGo has a new feature that lets people hide AI-generated images. The company says its philosophy about AI features is “private, useful, and optional” (and this is an interesting reaction to "AI-first," a corporate trend I have recently criticized).

  • Estonia has launched an extremely interesting and balanced nationwide initiative (called AI Leap 2025) to integrate AI into the education system. Other countries should take note.

  • French AI company Mistral released the report “Our contribution to a global environmental standard for AI,” taking the lead in environmental transparency by releasing what it calls the first comprehensive lifecycle analysis of an AI model.

  • Meta claims that its AI model Llama 4 is open-source. However, according to the latest EU guidelines, the company will likely not be able to benefit from the EU AI Act's open-source exemptions.

  • Anthropic joins OpenAI and Mistral in announcing it will sign the EU's voluntary Code of Practice for providers of general-purpose AI models. The company praised the Code and Europe's AI Continent Action Plan (it also commented on America's AI Action Plan; read my thoughts on section 4 below).


2. Must-read academic papers:

  • “From Turing to Tomorrow: The UK's Approach to AI Regulation” (link) by Oliver Ritchie et al. This paper provides a much-needed in-depth analysis of the UK's approach to AI regulation (which differs from the EU and the U.S.) and discusses regulatory options based on selected AI-related risks.

  • “A Primer on the Different Meanings of ‘Bias’ for Legal Practice” (link) by Tara Emory & Maura R. Grossman. The paper covers different uses of the term ‘bias’ in AI systems and proposes a practical taxonomy to support legal, ethical, and technical discussions on the topic.

  • “A Taxonomy of AI Opacity in the EU: Rethinking Transparency, Traceability, Interpretability, and Explainability” (link) by Carlotta Buttaboni & Luciano Floridi. This paper critically examines the concept of AI opacity and proposes a taxonomy that distinguishes its four attributes (transparency, traceability, interpretability, and explainability) from a legal perspective.

  • “The Impossibility of Fair LLMs” (link) by Jacy Reese Anthis et al. This paper discusses the challenges involved in evaluating and implementing fairness in the context of LLMs from a technical perspective. This is especially important given the legal and ethical importance of fairness considerations.

3. New reports and relevant documents:

  • This week, the White House published the long-awaited America's AI Action Plan. To learn more, read my essay on the topic, which covers my highlights from the 23-page plan, Trump's live announcement, and the U.S. approach to AI governance.

This post is for paid subscribers

© 2025 Luiza Jarovsky