Luiza's Newsletter

⛔ The Hidden Dangers of AI Chatbots

Lawmakers have vastly underestimated how risky AI chatbots can be. It's time to take action before it's too late | Edition #202

Luiza Jarovsky
May 07, 2025

👋 Hi, Luiza Jarovsky here. Welcome to our 202nd edition, now reaching 60,400+ subscribers. To upskill and advance your career:

  • AI Governance Training: Join my 4-week program in July

  • Learning Center: Receive free AI governance resources

  • Job Board: Explore job opportunities in AI governance & privacy

  • Subscriber Forum: Participate in our daily discussions on AI


👉 A special thanks to hoggo, this edition's sponsor:

hoggo empowers legal and privacy teams with a complete third-party compliance solution. Their AI-powered platform handles everything, from quick vendor assessments to continuous monitoring and streamlined management. This automation replaces manual reviews and makes compliance 80% faster. Luiza's readers get 20% off annual plans. Discover the full suite.


⛔ The Hidden Dangers of AI Chatbots

It is becoming clearer every day that lawmakers, policymakers, and advocates have vastly underestimated how risky AI chatbots can be.

Take the EU AI Act, for example, which is considered one of the world's strictest AI laws (hint: it's not as strict as most people think).

Under the Act, AI chatbots like ChatGPT or Replika are not, per se, considered high-risk. Their providers will have to comply with residual obligations, including transparency measures (which are subject to poorly written exceptions).

Currently, most countries either do not regulate AI chatbots at all or mandate only soft transparency obligations focused on metadata and provenance signals, deepfake labeling, and AI disclosure.

This is not enough, and countries must realize that before it's too late.

Two and a half years have passed since November 2022, when ChatGPT, the most popular AI chatbot in history, was launched.

The initial public fascination has mostly faded, but AI chatbots, the flagship application in the ongoing generative AI wave, are still everywhere, used by hundreds of millions of people worldwide.

However, a lot has changed, and the past 30 months have made it clear that AI chatbots' risk profile is higher than initially thought.

Currently, we are living in the AI chatbot Wild West, where AI companies do as they please, move fast, break things, and optimize for profit, whatever the human and social costs involved.

This must change. AI chatbots deserve immediate attention from lawmakers and policymakers, who should set clear legal and ethical boundaries on how they are developed and deployed.

Companies deploying AI chatbots should also pay attention, as their practices might lead to liability and other legal risks.

Below, I explain why factors such as:

  1. How they're being developed and optimized

  2. Rising use cases and manipulative patterns

  3. Recent incidents that have led to harm (even death)

lead to new ethical and legal implications, which must be urgently addressed.

1. How they're being developed and optimized

In the absence of rules prohibiting them, AI companies will use all the tools at their disposal to increase profits. That's how capitalism works.

For example, in a recent video (don't miss it), Mark Zuckerberg said that the average American has fewer than 3 friends but demand for 15 or more. He proposed AI friends as a solution and said:

"As the personalization loop kicks in and the AI just starts to get to know you better and better, I think that will be really compelling."

Notice how he seems to be referring to a drug ("kicks in") when he talks about personalization.

As Meta knows very well from its practices on Facebook and Instagram, personalization can actually be addictive: the dopamine hits that it creates feel like a drug for many people, who keep coming back for more.

My guess is that he let that slip by mistake (that's not how the PR or legal team instructed him), but it is probably how he and other people at Meta describe the psychological process behind manipulation, which they apply consciously and consistently across all their products.

My questions:

  • Should Meta be allowed to openly apply this intrusive level of personalization and exploit personal data to push its 1 billion+ users to “befriend” its AI chatbots?

  • Should Meta be allowed to manipulate users through AI chatbots, ignoring the psychological and social consequences, in order to keep engagement high and increase its profits?

Is it fair? Is it ethical? Is it legal? Should it be allowed?

My answer to these questions is:

This post is for paid subscribers
