The Profound Ways AI Is Harming Us

Many of today's AI systems can harm their users in systemic and unpredictable ways. Everyone should be aware of it | Edition #204

Luiza Jarovsky
May 18, 2025


👋 Hi, Luiza Jarovsky here. Welcome to our 204th edition, now reaching 61,700+ subscribers in 168 countries. To upskill and advance your career:

  • AI Governance Training: Join my 4-week, 15-hour program in July

  • Learning Center: Receive free AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • Subscriber Forum: Participate in our daily discussions on AI


👉 A special thanks to ComplyDog, this edition's sponsor:

ComplyDog provides a fully customizable, GDPR-compliant cookie consent banner, letting businesses protect users' privacy while keeping flexibility in design. This free tool helps meet the essential cookie compliance requirements. Start using it today at ComplyDog.


  • Promote your brand to 61,700+ readers: Sponsor us (Next spot: July 10)


The Profound Ways AI Is Harming Us

More than 2.5 years into the generative AI wave, and after hundreds of millions of people worldwide have started using LLM-powered AI applications, it's becoming clear that their evolving negative implications for individuals and society are more challenging than initially thought.

In this edition, I discuss how many of today's AI systems directly impact critical thinking, autonomy, relationships, and even our conception of what it means to be human.

The impact is systemic, unpredictable, and often negative. Yet AI companies promote it as a desired outcome or as the natural evolution of life and society.

People are using AI systems without considering that they might be harmed in unexpected and unannounced ways.

From a regulatory perspective, lawmakers and policymakers seem to ignore these implications: recent reports and laws covering AI stop short of trying to understand or tackle the intricate and invasive ways in which AI companies are attempting to dictate what it means to be human.

[On Tuesday, I'll publish the second part of this essay, discussing why risk-based legal frameworks, including the EU AI Act, might not be capable of dealing with most of these negative implications]

1. Critical thinking

From the beginning of the generative AI wave, it was clear that it would directly affect how people learn, think, and interact with information.

As I wrote in 2023, people often interact with ChatGPT and similar tools as if they were oracles: all-knowing systems that can provide ‘the right answer’ whenever needed.

It's no coincidence that, last week, an Apple executive announced that, for the first time in 22 years, searches on Safari declined because people are using AI instead of Google.

From a human perspective, interacting with an AI chatbot is very different from interacting with a search engine, and the difference has significant implications:

  • In search engines, the user has to skim a list of blue links and choose which one to read further. After clicking, the information is presented within the content provider's interface, and the user must decide whether the website is a reputable source and whether the content is legitimate.

  • In AI chatbots, the user is presented with the ‘correct’ answer, whose tone, language, ‘personality,’ length, etc. are defined by the company's model specs (more on that soon). The output is provided in conversational and persuasive human language, in the format requested by the user. Without leaving the ‘AI chatbot-oracle’ interface, the user can simply copy and paste the output (many don't even read it). No additional action is needed.

The transition from search engines to AI chatbots is already happening, and it removes layers of critical thinking from the user's interaction with the ‘information indexer.’

(Google knows that its search business model is threatened by AI chatbots and is working hard to expand its AI Overviews; read my recent post here.)

AI chatbots are powered by large language models that generate fluent natural language. They rely on persuasive language and a ‘friendly personality,’ which strengthen automation bias and often make the user over-reliant on, or even dependent on, the content of the outputs. This is the absolute opposite of stimulating critical thinking.

It's certainly no surprise that we have been hearing reports of psychotic behavior among frequent AI chatbot users, with one saying ChatGPT “gives him the answers to the universe,” and another being called “spiral starchild” and “river walker” by it.

Model specs are another factor in the total disruption of critical thinking. As I wrote yesterday, AI companies define the objectives, rules, and default behaviors that shape not only the interaction between users and AI systems, but also users' worldview: how they understand AI, how they understand science, how they understand themselves, and so on.

If AI companies so choose, their systems will use the full anthropomorphic toolbox, including flattery and sycophancy, to endorse pseudoscience, misinformation, and even harmful beliefs, which might lead to real-world harm.

2. Autonomy

Tech companies have been openly pushing AI functionalities onto their users.

Take Google, for example: it realized that AI is a threat to its business model, so it's going all in. If you are a Gmail user, you know that every new message window now comes with a “Help me write” prompt.

If you click it (which sometimes happens when you're just trying to position the cursor to write the message yourself), an AI chatbot window opens and encourages you to generate the email with AI instead.

There are over 1.8 billion Gmail users worldwide, and every one of them is now constantly being encouraged, by default, to ask AI for help with writing.

It's not only Google. Many other companies wanted to jump on the AI bandwagon and have added generative AI features to existing products and services.

It might seem harmless, but it's profoundly disempowering. Why?

This post is for paid subscribers
