OpenAI Said Yes, Meta Said No · ChatGPT Agent Risks · And More

My weekly AI governance curation to help you stay ahead | Edition #219

Luiza Jarovsky, PhD
Jul 20, 2025


👋 Hi everyone, Luiza Jarovsky here. Welcome to our 219th edition, with my weekly curation of essential papers, reports, news, and ideas on AI governance, now reaching over 69,300 subscribers in 168 countries. It's great to have you on board! To upskill and advance your career:

  • AI Governance Training: Apply for a discount here

  • Learning Center: Receive free AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • AI Book Club: Discover your next read in AI and beyond


👉 Before we start, a special thanks to hoggo, this edition's sponsor:

Finally, a compliance tool designed by former DPOs. As a General Counsel or DPO, you know it first-hand: endless vendor reviews, spreadsheet tracking, and missed critical vendor changes. hoggo’s AI cuts assessment time by 80%, provides real-time vendor monitoring, and generates audit-ready docs. Join teams that have reclaimed 400+ hours annually and try it now.



Below is my weekly AI governance curation to help you stay ahead. It includes essential news, papers, reports, and ideas:

1. The news you can't miss:

  • The EU recently published the final draft of the EU AI Act's code of practice for general-purpose AI models and invited companies to sign it. OpenAI said it plans to sign and urged the EU to prioritize competitiveness. Meta said it does not plan to sign and accused the EU of overregulation and overreach.

  • The EU Commission has also published practical guidelines on the scope of obligations for providers of general-purpose AI models under the EU AI Act. Pay special attention to pages 6-9, which discuss when an AI model qualifies as a general-purpose AI model.

  • OpenAI launched ChatGPT Agent, which complements Operator and Deep Research. Agentic AI applications increase privacy and security risks exponentially (read my summary here), and unfortunately, AI development is advancing much faster than AI literacy.

  • The agentic AI company Manus AI (read my article about it) relocated its headquarters from China to Singapore to access Nvidia’s advanced AI chips and bypass U.S. restrictions. Manus AI has recently raised $75 million, and the U.S. Treasury Department is reviewing this investment.

  • The EU launched a call for applications to join the Advisory Forum under the EU AI Act. This is an important EU-level governance body that will provide technical expertise and advice to the EU Commission. Professionals and interested parties from civil society, academia, and industry can apply.

  • Denmark is among the countries pushing the EU to include the AI Act in the December simplification package. Revisiting a law that has only recently entered into force signals internal instability and increases legal uncertainty.

  • Germany is not satisfied with its current position and wants to lead in AI, competing directly against the U.S. and China. This is another example of the new AI nationalism I have been writing about.

  • The State of Virginia launched the first agentic AI regulatory review in the U.S. The announced goal is to reduce regulatory burdens and keep rules and guidance documents up to date. I am curious to watch this development, especially given the risk of AI hallucinations and bias.


2. Must-read academic papers:

  • “The Promise and the Peril of the Use of Generative AI in Litigation” (link) - This article explores the use of generative AI in litigation in light of the risks of AI hallucination and cognitive biases, and discusses solutions including education, certification, sanctions, and prohibitions.

  • “Should AI Write Your Constitution?” (link) - This is a provocative paper that challenges AI skepticism and critically discusses how to potentially use AI in making and reforming constitutions, as well as in other democratic processes.

  • “Generative Misinterpretation” (link) - This paper argues that generative AI is not suitable for judicial interpretation and discusses the reliability and epistemic gaps that make ‘generative interpretation’ methodologically and socially illegitimate.

  • “Copyright Exceptions and Fair Use Defences for AI Training Done for 'Research' and 'Learning', or the Inescapable Licensing Horizon” (link) - This paper offers an in-depth legal defense of licensing for AI training, especially in light of existing exceptions such as for research and learning.

3. New reports and relevant documents:

  • The Future of Life Institute published its AI Safety Index and warned that AI capabilities are accelerating faster than AI risk management practices. It found, for example, that “only 3 of the 7 surveyed companies reported substantive testing for dangerous capabilities linked to large-scale risks such as bio- or cyber-terrorism.”

This post is for paid subscribers
