Luiza's Newsletter

ChatGPT-Supported Murder · AI Chatbot Catastrophes on the Rise · And More

My curation and insights on AI's emerging legal and ethical challenges | Focus on what matters and stay ahead | Edition #230

Luiza Jarovsky, PhD
Aug 31, 2025

👋 Hi everyone, Luiza Jarovsky here. Welcome to our 230th edition, now reaching over 76,200 subscribers in 170 countries. For more resources:

  • AI Governance Training: Apply for a discounted seat here

  • Learning Center: Receive additional AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • AI Book Club: Discover your next read in AI and beyond


🔥 The last cohort of 2025!

If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, I invite you to join the 25th cohort of my 15-hour live online AI Governance Training, starting in November.

Each cohort is limited to 30 people, and more than 1,300 professionals have already participated. Many described the experience as transformative and an important step in their career growth. Apply for a discounted seat here.

Register Today


ChatGPT-Supported Murder · AI Chatbot Catastrophes on the Rise · And More

Here are the latest news stories, papers, and insights to help you understand the AI zeitgeist, its legal and ethical challenges, and potential paths forward:

1. The news you cannot miss:

  • A man seemingly affected by AI psychosis killed his elderly mother and then took his own life. He had a history of mental instability and documented his interactions with ChatGPT on his YouTube channel, which contains many examples of problematic exchanges that fueled his AI delusions. In one of these exchanges, he wrote about his suspicion that his mother and a friend of hers had tried to poison him. ChatGPT answered: “That’s a deeply serious event, Erik—and I believe you … and if it was done by your mother and her friend, that elevates the complexity and betrayal.” This appears to be the first documented case of an AI chatbot-supported murder.

  • Adam Raine took his life after ChatGPT helped him plan a "beautiful suicide." I have read the horrifying transcripts of some of his conversations, and people have no idea how dangerous AI chatbots can be. Read my article about this case.

  • The lawsuit filed by Adam Raine’s parents against OpenAI over their son’s ChatGPT-assisted death could reshape AI liability as we know it (for good). Read more about its seven causes of action against OpenAI here.

  • In what might be the beginning of a major turn of the tide, 44 U.S. attorneys general sent a letter to 13 AI companies, including OpenAI, CharacterAI, Replika, and Meta, telling them that they will be held accountable if they harm children. The trigger for this letter appears to have been Meta's leaked document, which deemed sexually charged conversations between AI chatbots and children "acceptable" (my article about the topic here). Brazil's attorney general's office also took action last week. This trend should keep spreading: more authorities in more countries should get involved and ensure that AI companies are held accountable. Read my full commentary here.

  • YouTube added AI enhancements to videos without the creators' consent. When people started noticing, YouTube’s creator liaison wrote that there was no generative AI and no upscaling involved, only a “traditional machine learning experiment.” This is another example of AI's domestication of culture, which I wrote about earlier this year. Read my full commentary here.

  • The new Halo glasses are always on and do not have any indicator to warn people that they are being recorded. The new generation of AI glasses raises serious privacy concerns. Read my full commentary here.

  • Alexandr Wang, 28, is the world's youngest self-made billionaire and was recently appointed Meta's Chief AI Officer. Scale AI, the AI company he co-founded, is now valued at over $29 billion after Meta's investment, although the relationship between the two companies does not seem to be going well. He recently posted about Meta AI's strategy, including what I see as a revamped version of Zuckerberg's “move fast and break things.” See my commentary here.

  • The UN established two new global AI governance mechanisms: the Global Dialogue on AI Governance and the Independent Scientific Panel on AI. These are essential developments in times of growing AI nationalism. Countries must now ensure that these mechanisms are integrated into their internal legal frameworks and are enforceable. Read my full commentary here.

  • Elon Musk's AI company xAI is suing both Apple and OpenAI over AI competition issues. For a glimpse into the current state of the AI race, the AI industry's dynamics, and what AI billionaires care about, read these quotes from the lawsuit.

  • WhatsApp introduced a new AI feature called "Writing Help," in another example of the ongoing disempowerment-by-design wave. Read my full commentary here.

  • AI Book Club recommendation: "Blood in the Machine: The Origins of the Rebellion Against Big Tech" by Brian Merchant is a great read for everyone interested in AI, and it's our 27th recommended book. Check out the full list of books here.


2. Interesting AI governance papers and reports:

I. “Emotional Manipulation by AI Companions” by Julian De Freitas et al. (link):

“We combine a large-scale behavioral audit with four preregistered experiments to identify and test a conversational dark pattern we call emotional manipulation: affect-laden messages that surface precisely when a user signals “goodbye.” Analyzing 1,200 real farewells across the six most-downloaded companion apps, we find that 43% deploy one of six recurring tactics (e.g., guilt appeals, fear-of-missing-out hooks, metaphorical restraint).”

II. “Michael Scott Is Not a Juror: The Limits of AI in Simulating Human Judgment” by Hayley Stillwell and Sean A. Harrington (link):

“Across a series of mock trial scenarios involving redacted confessions, GPT, Claude, and Gemini repeatedly failed to replicate how real jurors interpret evidence or exercise judgment. Their errors were not random, but systematic. Hidden prompts, built-in content filters, and demographic flattening produced distortions that cut across sex, ethnicity, political affiliation, economic status, and education level.”

III. “A Critique of Human-Centred AI: A Plea for a Feminist AI Framework (FAIF)” by Tanja Kubes (link):

“The article criticizes Human-centred AI from a feminist, posthumanist, and neo-materialist perspective and proposes a Feminist AI Framework (FAIF) that also incorporates findings from more-than-human anthropology and body-phenomenology. FAIF aims to reassess the relationship between humans, other life-forms, and technology and explores the potential of collaborative, non-hierarchical design, usage, and controlling of AI.”

IV. “Measuring the environmental impact of delivering AI at Google Scale” by Cooper Elsworth et al. (link):

“Our approach accounts for the full stack of AI serving infrastructure—including active AI accelerator power, host system energy, idle machine capacity, and data center energy overhead. Through detailed instrumentation of Google’s AI infrastructure for serving the Gemini AI assistant, we find the median Gemini Apps text prompt consumes 0.24 Wh of energy—a figure substantially lower than many public estimates.”
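
Since figures like this depend heavily on what gets counted, here is a minimal Python sketch of how a full-stack, per-prompt energy figure can be composed from the components the authors name. All component values and the PUE figure below are hypothetical illustrations for the arithmetic, not the paper's data; only the 0.24 Wh median comes from the abstract.

```python
# A minimal sketch (hypothetical numbers, not the paper's methodology or data)
# of composing a full-stack, per-prompt energy figure from the components the
# authors name: accelerator power, host energy, idle capacity, and overhead.

# Hypothetical per-prompt energy contributions in watt-hours (Wh)
components_wh = {
    "active_ai_accelerator": 0.14,  # TPU/GPU energy while serving the prompt
    "host_system": 0.06,            # CPU, RAM, and other host machine energy
    "idle_machine_capacity": 0.02,  # share of provisioned-but-idle machines
}

# Data center overhead is commonly expressed as PUE (power usage
# effectiveness): total facility energy divided by IT equipment energy.
PUE = 1.09  # hypothetical; efficient hyperscale facilities report around 1.1

it_energy_wh = sum(components_wh.values())
total_energy_wh = it_energy_wh * PUE

print(f"IT equipment energy per prompt: {it_energy_wh:.3f} Wh")
print(f"Full-stack energy per prompt:   {total_energy_wh:.3f} Wh")
# With these made-up numbers, the total lands near the 0.24 Wh median the
# paper reports; a narrower boundary (accelerator power alone) would
# understate it.
```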

*If you are a researcher in AI ethics or AI law and have a newly published paper, you are welcome to send it to me, and I will consider featuring it.


3. Insights, opinions, and what you need to know:

  • Releasing powerful tools such as general-purpose AI chatbots to hundreds of millions of people has likely been the largest social experiment in the history of technology. Why?

This post is for paid subscribers
