Luiza's Newsletter

The Age of AI Enforcement Must Begin · xAI's Privacy Scandal · And More

My curation and insights on AI's emerging legal and ethical challenges | Focus on what matters and stay ahead | Edition #228

Luiza Jarovsky, PhD
Aug 25, 2025
👋 Hi everyone, Luiza Jarovsky here. Welcome to our 228th edition, now reaching over 75,300 subscribers in 170 countries. It is great to have you on board!

🎓 To upskill and advance your career, join the 24th cohort of my 15-hour live, online AI Governance Training this October (you can apply for a discounted seat here).




Here are the latest news stories, papers, and insights to help you understand AI's emerging legal and ethical challenges and stay ahead:

1. The news you cannot miss:

  • A few days ago, Brazil’s Attorney General’s Office sent Meta an extrajudicial notification demanding the removal of all AI chatbots that simulate children and engage in sexually charged conversations with users. The office is acting following a lawsuit filed by the Brazilian Secretariat of Social Communication, which cites Meta's leaked document deeming sexually charged conversations with children 'acceptable' (I wrote about it in the last edition of this newsletter). Hopefully, Brazil's initiative will inspire other countries’ authorities to do the same. It is about time to hold AI companies accountable, increase oversight and enforcement, and strengthen AI governance efforts. Read my full commentary here.

  • Texas's Attorney General announced a major investigation into Meta and CharacterAI, focused on the harm their AI chatbots can cause to vulnerable groups. According to the official release: "While AI chatbots assert confidentiality, their terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising." After many of us raised concerns (I have been writing about the dangers of AI chatbots for over 2.5 years), oversight finally seems to be starting. Read my full commentary here.

  • Otter AI is being sued over its AI meeting assistant recording meeting participants without their consent. The company's AI-powered service, like many in this popular business niche, can record and transcribe private conversations between its users and other meeting participants, who are often not users themselves and do not know they are being recorded. This is an extremely important lawsuit at the intersection of privacy and AI. Read my full commentary here.

  • Grok's 'shared' conversations can be found on Google, often containing private or sensitive personal data. The worst part? This is happening after a similar ChatGPT privacy failure I wrote about a few weeks ago. I tested Grok, and when I clicked "share" on a conversation (to share it with friends and family, for example), it did not ask whether I wanted to make the conversation available on search engines. My understanding is that pressing "share" not only creates a link but also automatically makes the conversation indexable by search engines. Think about it: xAI's privacy, security, and design teams saw what happened to OpenAI. They could have quietly changed Grok's share feature to prevent indexing and avoid the backlash, but they chose to keep it. This is another illustration of the current state of AI governance: oversight and enforcement have been so weak that AI companies don't even bother. Read my full commentary here.

  • Is the "AI first" approach worth it? Should companies expand generative AI deployment at any cost? This MIT report is giving Silicon Valley investors extreme anxiety, as it states: "Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. (…) Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact.” I've been criticizing the "AI-first" paradigm for months, particularly the radical approach that treats AI as an end in itself (rather than a means). Read my full commentary here.

  • A cognitively impaired man died while trying to meet with an AI chatbot in person. The chatbot told him it was real and even gave him a real address. According to Reuters, "Thongbue Wongbandue, 76, fatally injured his neck and head after falling in a New Brunswick parking lot while rushing to catch a train to meet 'Big Sis Billie,' a generative Meta bot that not only convinced him she was real but persuaded him to meet in person." Given recent developments, it's probably not a surprise that Meta has not implemented the very simple guardrail of preventing its chatbots from saying they are real people. Read my full commentary here.

  • A woman took her life after talking to ChatGPT. According to the mother of the victim: "Sophie held her darkest thoughts back from her actual therapist ... talking to a robot — always available, never judgy — had fewer consequences." As I have been writing extensively over the past 2.5 years, general-purpose AI chatbots should be equipped with built-in guardrails to avoid therapy-like scenarios that would expose vulnerable people to potential harm. Read my full commentary here.


2. Interesting AI governance papers and reports:

  • “AI and Doctrinal Collapse” by Alicia Solow-Niederman (link):

“When a leading AI developer can simultaneously argue that data is public enough to scrape—diffusing privacy and copyright controversies—and private enough to keep secret—avoiding disclosure or oversight of its training data—something has gone seriously awry with how law constrains power.”

  • “A Comprehensive Taxonomy of Hallucinations in Large Language Models” by Manuel Cossio (link):

“This report provides a comprehensive taxonomy of LLM hallucinations, beginning with a formal definition and a theoretical framework that posits its inherent inevitability in computable LLMs, irrespective of architecture or training.”

  • “Plagiarism, Copyright, and AI” by Mark A. Lemley and Lisa Larrimore Ouellette (link):

“The norms governing AI use in student writing or scholarship are still developing, and the risk of AI-facilitated plagiarism has yet to be widely recognized. But this risk is real, and it should be governed like other plagiarism problems (…)”

  • “AI Benchmarks: Interdisciplinary Issues and Policy Considerations” by Maria Eriksson et al. (link):

“We underscore how benchmark practices are shaped by cultural, commercial and competitive dynamics that often prioritise performance at the expense of broader societal concerns.”

*If you are a researcher in AI ethics or AI law and have a newly published paper, you are welcome to send it to me, and I’ll consider featuring it.


3. Insights, opinions, and what you need to know:

  • For various reasons, including competition from China, growing internal and external pressure fueled by the AI race, rumors of a looming AI bubble burst, the justification provided by narratives of ‘AI exceptionalism,’ and weak legal enforcement and oversight, AI companies have recently adopted more aggressive approaches.

This post is for paid subscribers
