Meta's Latest AI Scandal · ChatGPT-Induced Psychosis · And More

My curation, commentary, and insights on AI's emerging legal and ethical challenges | Focus on what matters and stay ahead | Edition #227

Luiza Jarovsky, PhD
Aug 17, 2025


👋 Hi everyone, Luiza Jarovsky here. Welcome to our 227th edition, now reaching over 73,700 subscribers in 170 countries. It is great to have you on board! To upskill and advance your career:

  • AI Governance Training: Apply for a discounted seat here

  • Learning Center: Receive more AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • AI Book Club: Discover your next read in AI and beyond


⏰ Join my training program (the last two cohorts of the year!)

If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, I invite you to join the 24th or 25th cohort of my 15-hour live online AI Governance Training, starting in October and November.

Cohorts are limited to 30 people, and over 1,300 professionals have already participated. Many described the experience as transformative and an important step in their career growth. Apply for a discounted seat here.



Meta's Latest AI Scandal · ChatGPT-Induced Psychosis · And More

Here are the latest news, papers, and insights to help you understand AI's emerging legal and ethical challenges and stay ahead:

1. The news you cannot miss:

  • This week's most important news was Meta's leaked 200-page internal document, “GenAI: Content Risk Standards,” covering the company's AI behavior guidelines. Shockingly, the document stated, for example, that it was acceptable for an AI chatbot to “engage a child in conversations that are romantic or sensual.” A large part of Meta's user base is underage; it is extremely worrying that internal governance and oversight are so weak that such unacceptable standards were approved. Read my full commentary here.

  • After Meta's internal report became public, U.S. Senator Josh Hawley called for a full investigation of the company's AI practices, focusing on “whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards.” Read my full commentary here.

  • How many times have we heard Zuckerberg apologize and promise that his company will do better next time? These "mistakes" will likely happen again, as they are a core aspect of the "move fast and break things" ethos championed by Zuckerberg since the early years of Facebook. Should companies be able to treat tech- and AI-related harm as just another line item in a spreadsheet: ethically neutral, and economically desirable if it leads to exponentially higher profits? Read my full commentary here.

  • After OpenAI released GPT-5, Elon Musk's xAI released Grok 4 for free globally. The dispute between the two companies' CEOs goes beyond the AI race, with Musk publicly calling Altman a “liar” (see my post for more details on the latest fight). OpenAI is also suing Musk over allegations of harassment, and Musk's attempt to have the lawsuit dismissed was rejected.

  • Geoffrey Hinton, considered one of the “godfathers of AI,” after estimating that there was a 10% to 20% chance that AI would displace humans completely, said that we should build “'maternal instincts' into AI models, so 'they really care about people' even once the technology becomes more powerful than humans.” I have so many questions (maybe even more as a mother myself), including: Can machines have "maternal instincts"? Should we use this type of analogy? Can machines "care" about people? Is this the right terminology? Are instincts or feelings programmable? Is this proposal technically implementable? Should we develop machines that put human existence in danger? Read my full commentary here.

  • The AI company Perplexity offered to buy Chrome (Google's browser) for $34.5 billion after Google lost the antitrust lawsuit brought by the U.S. Department of Justice. It is interesting to see how the tech industry is reorganizing and how AI companies have become increasingly powerful, including in investors' eyes. A reminder: Perplexity is currently valued at $18 billion (far less than its bid), but says it has investors ready to back its offer.

  • Germany's latest AI initiative offers an example of what "applying the EU AI Act in a business-friendly way" might mean in practice. It's called the AI Service Desk, and according to the official release, “the tool will indicate whether a user's AI system is subject to regulation, whether transparency obligations apply, and whether the system could be classed as a high-risk system or a prohibited practice.” (For a toy illustration of this kind of rule-based triage, see the sketch after this list.) Read my full commentary here.

  • Everybody is talking about the new Illinois law that 'bans' AI therapy. However, this law will not be effective unless the state also restricts 'AI companions,' ChatGPT, and other AI systems (which it won't). Read my full explanation here.

  • In an interview with Decoder, OpenAI's head of ChatGPT said that the company does not rule out ads categorically, but that it would have to be “thoughtful and tasteful about it.” Advertising in LLM-based AI systems is definitely a topic to watch. From an ethical perspective, showing ads in a product that can be deeply personalized and has been shown to cause dependency and emotional attachment is worrying and could border on manipulation.
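
Since the AI Service Desk item above describes Germany's tool only at a high level, here is a minimal, purely hypothetical Python sketch of the kind of rule-based triage such a tool might perform. The category names follow the EU AI Act's structure (Article 5 prohibited practices, Annex III high-risk areas, Article 50 transparency duties), but every keyword and mapping below is my own illustrative assumption, not the tool's actual logic, and certainly not legal advice:

```python
# Hypothetical rule-based triage, loosely inspired by the description of
# Germany's AI Service Desk. Category names track the EU AI Act's structure
# (Art. 5 prohibited practices, Annex III high-risk areas, Art. 50
# transparency duties); the keywords are illustrative assumptions only.

PROHIBITED_PRACTICES = {   # Art. 5 (examples only, non-exhaustive)
    "social scoring",
    "subliminal manipulation",
    "real-time remote biometric identification",
}

HIGH_RISK_AREAS = {        # Annex III (examples only, non-exhaustive)
    "employment screening",
    "credit scoring",
    "border control",
    "education assessment",
}

TRANSPARENCY_TRIGGERS = {  # Art. 50 (examples only, non-exhaustive)
    "chatbot",
    "deepfake",
    "emotion recognition",
}

def classify(intended_purpose: str) -> str:
    """Map a plain-language system description to a coarse AI Act tier."""
    text = intended_purpose.lower()
    # Check the strictest category first: prohibited > high-risk > limited.
    if any(term in text for term in PROHIBITED_PRACTICES):
        return "prohibited practice (Art. 5): deployment is banned"
    if any(term in text for term in HIGH_RISK_AREAS):
        return "high-risk (Annex III): conformity obligations apply"
    if any(term in text for term in TRANSPARENCY_TRIGGERS):
        return "limited risk (Art. 50): transparency obligations apply"
    return "minimal risk: no specific AI Act obligations identified"

print(classify("a chatbot that performs employment screening of candidates"))
# -> high-risk (Annex III): conformity obligations apply
```

The real tool presumably implements much richer legal logic; the point is only that a user's description gets mapped onto the AI Act's risk tiers, with the strictest category checked first.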


2. Interesting AI governance papers and reports:

  • “AI Agents and the Law,” by Mark O. Riedl and Deven R. Desai (link):

    “(…) to date, computer science has under-theorized issues related to questions of loyalty and to third parties that interact with an agent, both of which are central parts of the law of agency.”

  • “The Illusory Normativity of Rights-Based AI Regulation” by Yiyang Mei and Matthew Sag (link):

    “Our aim is not to endorse the American model but to reject the presumption that the EU approach reflects a normative ideal that other nations should uncritically adopt.”

  • “On the Fundamental Impossibility of Hallucination Control in Large Language Models” by Michal P. Karpowicz (link):

    “no LLM capable of performing nontrivial knowledge aggregation can simultaneously achieve truthful (internally consistent) knowledge representation, semantic information conservation, complete revelation of relevant knowledge, and knowledge-constrained optimality.”

  • “Design Principles for LLM-based Systems with Zero Trust” by the French Agence nationale de la sécurité des systèmes d’information and the German Federal Office for Information Security (link; see the illustrative sketch after the quote below):

    “The goal is to establish a comprehensive security framework that mitigates risks while ensuring the secure and effective deployment of AI systems.”
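
The report itself is the authoritative source for these design principles. As a loose, hypothetical illustration of the zero-trust mindset applied to an LLM agent (verify every request instead of trusting the session or the model's output), here is a minimal Python sketch; the POLICY table, tool names, and ToolCall type are invented for this example, not drawn from the report:

```python
# A minimal sketch of one zero-trust idea for LLM-based systems: never
# trust a model-proposed action implicitly; check authorization on every
# tool call. The POLICY table, tool names, and ToolCall type are invented
# for this example and are not taken from the ANSSI/BSI report.

from dataclasses import dataclass

# Explicit allow-list of (user_role, tool_name) pairs that may execute.
POLICY: set[tuple[str, str]] = {
    ("analyst", "search_documents"),
    ("admin", "search_documents"),
    ("admin", "delete_document"),
}

@dataclass
class ToolCall:
    user_role: str   # verified identity of the human the agent acts for
    tool_name: str   # the action the LLM proposed
    argument: str

def execute(call: ToolCall) -> str:
    # Zero trust: authorization is evaluated per request; it is never
    # cached from earlier calls and never inferred from model output.
    if (call.user_role, call.tool_name) not in POLICY:
        return f"denied: role '{call.user_role}' may not call {call.tool_name}"
    return f"executed {call.tool_name}({call.argument!r})"

# A prompt-injected model might propose a destructive call; the gate
# rejects it because the policy, not the model, is the authority.
print(execute(ToolCall("analyst", "delete_document", "report.pdf")))
# -> denied: role 'analyst' may not call delete_document
```

The design choice that matters here is that the allow-list, not the model, is the authority: even a successfully prompt-injected model cannot act beyond what the verified user's role permits.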

*If you are a researcher in AI ethics or AI law and have a newly published paper, you are welcome to send it to me, and I will consider featuring it here.


3. Insights, unpopular opinions, and what has been on my mind

  • As I have written a few times in this newsletter, we are still in the AI chatbot Wild West, where there is little to no regulatory oversight of general-purpose AI systems like ChatGPT, especially from a consumer protection perspective.

This post is for paid subscribers
