Luiza's Newsletter

Switzerland's National AI Model · Albania's AI Minister · And More

My weekly curation of news, papers, and ideas that will help you understand AI's legal and ethical challenges, emerging trends, and potential paths forward | Edition #234

Luiza Jarovsky, PhD
Sep 16, 2025

👋 Hi everyone, Luiza Jarovsky here.

Welcome to the 234th edition of my newsletter, trusted by over 77,900 subscribers interested in AI governance, AI literacy, the future of work, and more.

It is great to have you here!


🎓 Expand your learning and upskilling journey with these resources:

  • Join my AI Governance Training (yearly subscribers save $145)

  • Register for our job alerts for open roles in AI governance and privacy

  • Sign up for weekly educational resources in our Learning Center

  • Discover your next read in AI and beyond with our AI Book Club


🔥 Join the last cohort of 2025

If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, join the 25th cohort of my 16-hour live online AI Governance Training in November (the final cohort of the year).

Each cohort is limited to 30 people, and more than 1,300 professionals have taken part. Many described the experience as transformative and an important step in their career growth. *Yearly subscribers save $145.


1. The news you cannot miss:

  • Switzerland's national AI model “Apertus” (Latin for “open”) is now available. According to the official release, it is fully open, transparent, and multilingual (40% of the data is non-English), built in compliance with Swiss data protection and copyright laws, as well as the EU AI Act. As I have written a few times in this newsletter, the new AI nationalism is growing, and Switzerland's prioritization of transparency and legal compliance may set a new standard, especially for EU countries aiming to protect fundamental rights. Read more about the Swiss model here.

  • Albania became the first country to appoint an AI system as a government minister. It's called “Diella,” and it will be in charge of all public procurement. According to the country's Prime Minister Edi Rama, all decisions on tenders will be taken out of the ministries and given to Diella. The goal of this measure is to reduce corruption in Albania. Will this AI use case be successful? Read more about it and see the official avatar here.

  • AI chatbots are dangerous, and the U.S. is finally taking action. The FTC issued 6(b) orders to Google, OpenAI, Meta, xAI, CharacterAI, Snap, and Instagram, seeking to understand what steps these seven companies have taken to prevent the negative impacts AI chatbots can have on children. Depending on how these inquiries go, we might see more targeted enforcement actions soon. Learn more about the FTC orders here.

  • The 257-year-old Encyclopaedia Britannica is suing Perplexity for copyright infringement, showing what happens when traditional publishers, legal uncertainty, and aggressive AI players clash. Read a few selected quotes from the lawsuit.

  • The 2024 National Assessment of Educational Progress's reading evaluation of a nationally representative sample of U.S. students concluded that in 2024, the average reading score at grade 12 was 3 points lower than in 2019. Compared to the first reading assessment in 1992, the average score was 10 points lower. These statistics are worrying, especially given the rise of AI chatbot deployment in the educational system and the uncertain impact on reading skills (which likely adds to the negative impact social media has had over the past two decades).

  • Real Simple Licensing (RSL) is a new collective rights non-profit that helps online publishers and creators protect their rights and negotiate compensation from AI companies. The platform enables publishers and creators to receive compensation when their content is used to generate an AI result. Read more here.

  • In a recent interview, OpenAI CEO and co-founder Sam Altman said: "I actually don't worry about us getting the big moral decisions wrong... Maybe we'll get those wrong, too." To understand the current state of AI, watch this strange exchange between Sam Altman and Tucker Carlson on deciding the future of the world and believing in a higher power.

  • Mira Murati is one of the few leading women in the AI industry. Many of us instinctively root for her and expect her to drive change in AI. After leaving her position as the CTO of OpenAI, she founded Thinking Machines, which has recently raised $2 billion. Although the company has not launched any products yet, it recently published an interesting blog post titled "Defeating Nondeterminism in LLM Inference." Read my first impressions.

  • "Supremacy: AI, ChatGPT, and the Race that Will Change the World," by Parmy Olson, is a great read for everyone interested in AI, and it is the 28th recommended book of my AI Book Club. Read about the book here and join the club here (it is free).

  • I launched a new three-part series on “Becoming Future-Proof,” available in full to paid subscribers. Read the first essay here or upgrade.

*If you would like to share a specific ethical or legal development in AI or your thoughts on a specific event, reply to this email or use this form.


2. Interesting papers to download and read:

I. “AI Openness: A Primer for Policymakers” by the OECD (link):

“Decisions to release model weights should carefully consider potential benefits and risks. Falling compute costs and more accessible fine-tuning methods lower the barriers to both use and misuse, enhancing the potential advantages of open-weight models while also increasing the risk of harmful applications.”

II. “The Impact of LLM Adoption on User Behavior” by Nicolas Padilla et al. (link):

“Our primary results suggest that concerns about LLMs substituting for web browsing may be well-founded, at least for a subset of online content provider. In particular, we find that after adopting LLMs, users make fewer searches in traditional search engines, including for question searches and both short and longer queries.”

III. “How People Use ChatGPT” by Aaron Chatterji et al. (link):

“(…) the three most common ChatGPT conversation topics are Practical Guidance, Writing, and Seeking Information, collectively accounting for nearly 78% of all messages. Computer Programming and Relationships and Personal Reflection account for only 4.2% and 1.9% of messages respectively.”

*If you are a researcher in AI ethics or AI law and would like to have your recently published paper featured here, reply to this email or use this form.


3. Ideas to think about and act on:

AI chatbots require a radically different approach to AI policy (and this will not be easy).

I wrote my first article warning against the dangers of AI chatbots in February 2023, covering ‘AI companions’ with a specific focus on Replika.

Those were the early months of the generative AI wave, yet even then it was already clear that:

  • AI anthropomorphism is dangerous, leading to potentially harmful emotional dependence and attachment (in 2023, the company behind Replika had to send users information about suicide prevention when the Italian Data Protection Authority ordered them to restrict personal data processing; read more about it in this paper by Daniella DiPaola and Ryan Calo).

  • Companies would deploy all sorts of unethical practices to make people attached to chatbots, as emotional AI manipulation is lucrative (you can read my 2023 article about CharacterAI and my recent article on unethical AI marketing).

What was not clear yet in 2023 and is much clearer now is that:

Keep reading with a 7-day free trial

© 2025 Luiza Jarovsky