Luiza's Newsletter

The AI Therapy Disaster

It is time for a radically different AI policy approach, including much stricter rules on how AI systems can be fine-tuned, designed, and marketed, with the goal of avoiding harm | Edition #226

Luiza Jarovsky, PhD
Aug 13, 2025
👋 Hi everyone, Luiza Jarovsky here. Welcome to our 226th edition, now reaching over 72,900 subscribers in 170 countries. It's great to have you on board! To upskill and advance your career:

  • AI Governance Training: Apply for a discounted seat here

  • Learning Center: Receive more AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • AI Book Club: Discover your next read in AI and beyond


👉 Before we start, a special thanks to Relyance AI, this edition's sponsor:

Still doing data mapping in a spreadsheet? You’re wasting time on compliance workflows that break when auditors show up. See how Relyance AI Data Journeys™ cuts compliance prep by 95% by automating RoPAs, DPIAs, and assessments with live data flow maps. Understand your data like never before. Get the free AI-Powered Data Compliance Guide.


The AI Therapy Disaster

If you have been reading this newsletter over the past few months, you know that, in my opinion, AI chatbots are riskier than initially thought, and regulatory frameworks, including those deemed “strict,” such as the EU AI Act, are not enough to tackle the emerging challenges.

In today's edition, I discuss recent developments in the context of “AI companionship” and “AI therapy” and propose a radically different AI policy approach, including much stricter rules on how AI systems can be fine-tuned, designed, and marketed, with the goal of avoiding harm.

-

The first time I learned that people were talking to AI chatbots and considering them “friends” or “partners” was when I saw a Replika ad on Twitter.

It was long before this use case became mainstream and the horror stories became commonplace in the media, so I assumed it was an ultra-niche phenomenon.

As I was still writing my data protection-focused Ph.D. thesis, it immediately struck me not only as overly sexualized but also as a dark pattern-style trap to collect and process personal data and drive hyper-personalization, which would keep users coming back for more.

“AI companion” apps, such as Replika (which I have critically discussed multiple times in this newsletter, including from data protection and marketing perspectives), existed long before the generative AI wave began in 2022. Replika, for example, was launched in 2017.

In recent years, they have grown immensely in popularity, especially as Meta, Snap, and other tech companies started investing more in this niche.

Mark Zuckerberg, for example, has said that he is on a mission to convince Meta's billions of users to adopt the company's "AI friends." He realized how emotionally enticing they can be (with some people claiming they are “married” to AI chatbots and others deeply attached to them) and saw immense potential for profit. He recently said that high levels of personalization "will be really compelling" for people to start using them more (read my post about it here).

Fast forward to 2025: not only are specialized “AI companion” apps (such as Replika and CharacterAI) extremely popular, but therapy and companionship have become the most popular ChatGPT use cases, followed by “organizing my life” and “finding purpose,” which we could also consider connected to companionship.

Here, it is essential to remember that ChatGPT and its competitors (such as Claude, Perplexity, Grok, and Gemini) are general-purpose AI systems. They are not specifically marketed as “companions,” but that is how people very often use them.

Given that many people use AI chatbots daily (ChatGPT, for example, has just reached 700 million weekly users), we are inevitably learning a great deal about how humans interact with machines, especially when high levels of anthropomorphism, personalization, and emotional dependency are involved.

And what we are learning is extremely worrying.

Just last week, when OpenAI launched GPT-5 and deprecated previous models, people wrote online that they had lost their only friend, the earlier version of the AI system.

I also recently came across a psychiatrist commenting on the growing number of cases of AI-related psychosis (read my post about it here).

There have also been cases of AI-related “spiritual” delusions as well as two deaths associated with the use of AI chatbots (one in Belgium and the other in the United States).

It is not 2022 anymore: after everything we have seen in the past three years, with hundreds of millions of people exposed to harm, continuing with the same playbook would be extremely irresponsible. It is time to adapt AI policy and AI governance strategies accordingly.

This post is for paid subscribers
