🌐 AI Is Dehumanizing the Internet
Emerging AI Governance Challenges | Paid Subscriber Edition | #179
👋 Hi, Luiza Jarovsky here. Welcome to the 179th edition of my newsletter, read by 55,700+ subscribers in 165+ countries. Not a subscriber yet? Join us.
🌎 We are a leading AI governance publication helping to shape the future of AI policy, compliance, and regulation. It's great to have you here!
🌐 AI Is Dehumanizing the Internet
The rise of AI has brought massive changes to the internet through a slow process that began at least 20 years ago.
Recent developments show that this transformation is accelerating and will likely lead to the full dehumanization of the internet, leaving us disempowered, easily manipulable, and entirely dependent on companies that provide AI services.
In this edition of the newsletter, I explain the AI-powered dehumanization process, how it impacts us, and some of its ethical and legal implications.
1️⃣ First Stage: Eliminate Choice
Although Generative AI has dramatically accelerated the internet's dehumanization process, we have been experiencing the negative impact of AI-powered applications for at least 20 years.
In the early 2000s, the first major AI-powered recommendation systems emerged, introduced by companies like Amazon, Netflix, and YouTube. As Amazon describes:
“For two decades now, Amazon has been building a store for every customer. Each person who comes to Amazon sees it differently, because it’s individually personalized based on their interests. It’s as if you walked into a store and the shelves started rearranging themselves, with what you might want moving to the front, and what you’re unlikely to be interested in shuffling further away.”
The idea of shelves rearranging themselves for each individual, based on what Amazon thinks they might like, offers a glimpse into a dystopian reality where people are infantilized and disempowered, relying on Big Brother to conduct their lives.
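The mechanism behind those rearranging shelves is well documented: Amazon's engineers described it in a 2003 IEEE Internet Computing paper as item-to-item collaborative filtering, which recommends products similar to what a customer has already bought. Below is a minimal, illustrative Python sketch of the idea; the customers, items, and similarity measure are invented for this example and do not reflect Amazon's actual system.

```python
# Minimal sketch of item-to-item collaborative filtering, the approach
# Amazon's engineers described in a 2003 IEEE Internet Computing paper.
# Customers, items, and the similarity measure are invented for
# illustration; this is not Amazon's actual system.
from math import sqrt

# Purchase history: customer -> set of items they bought
purchases = {
    "alice": {"book_a", "book_b", "desk_lamp"},
    "bob":   {"book_a", "book_b"},
    "carol": {"book_b", "desk_lamp"},
}

def item_similarity(item_x: str, item_y: str) -> float:
    """Cosine similarity between two items over the 'who bought it' dimension."""
    buyers_x = {c for c, items in purchases.items() if item_x in items}
    buyers_y = {c for c, items in purchases.items() if item_y in items}
    if not buyers_x or not buyers_y:
        return 0.0
    return len(buyers_x & buyers_y) / sqrt(len(buyers_x) * len(buyers_y))

def recommend(customer: str, top_n: int = 2) -> list[str]:
    """Rank items the customer hasn't bought by similarity to items they own."""
    owned = purchases[customer]
    catalog = set().union(*purchases.values()) - owned
    scores = {
        candidate: sum(item_similarity(candidate, item) for item in owned)
        for candidate in catalog
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("bob"))  # ['desk_lamp'] -- others who bought these books also bought the lamp
```

The point of the sketch is the asymmetry it reveals: the customer contributes data and receives a ranking, but the weighting that "rearranges the shelves" is set entirely by the platform.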
These recommendation systems become especially harmful when applied to social media feeds. YouTube was among the first platforms to implement AI-powered recommendations. Here's how it explains its success:
“Today, our system sorts through billions of videos to recommend content tailored to your specific interests. (…) Unlike other platforms, we don’t connect viewers to content through their social network. Instead, the success of YouTube’s recommendations depends on accurately predicting the videos you want to watch.”
Platforms like YouTube are constantly trying to predict what type of content users want to consume. In doing so, they are actively shaping how people’s minds are occupied over minutes, hours, days, months, and years.
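To make "accurately predicting the videos you want to watch" concrete, here is a deliberately simplified Python sketch of engagement-prediction ranking. The scoring function below is a hand-written stand-in for what is, in production, a large model trained on behavioral logs; the video IDs and topics are invented.

```python
# Toy illustration of engagement-prediction ranking: score each candidate
# video by an estimated probability the user will watch it, then surface
# the highest-scoring ones. The "model" here is a hand-written stand-in;
# production systems learn these predictions from massive behavioral logs.

def predicted_watch_probability(user_history: set[str], video_topics: set[str]) -> float:
    # Stand-in estimate: fraction of the video's topics the user has
    # engaged with before. A real system would use a trained model.
    if not video_topics:
        return 0.0
    return len(user_history & video_topics) / len(video_topics)

def rank_feed(user_history, candidates, k=3):
    """Return the k candidates the system predicts the user is most likely to watch."""
    scored = sorted(
        candidates.items(),
        key=lambda kv: predicted_watch_probability(user_history, kv[1]),
        reverse=True,
    )
    return [video_id for video_id, _ in scored[:k]]

user_history = {"politics", "gaming"}
candidates = {
    "vid1": {"cooking"},
    "vid2": {"politics", "outrage"},
    "vid3": {"gaming"},
}
print(rank_feed(user_history, candidates))  # ['vid3', 'vid2', 'vid1']
```

Whatever scores highest fills the feed; the user never sees the scores, the signals behind them, or the alternatives that were ranked away.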
Allowing companies to dictate what people watch next has become globally normalized, with little to no regulatory or policy effort to curb it. However, the consequences are significant.
A 2023 study on YouTube found that:
“a growing proportion of recommendations comes from channels categorized as problematic (e.g., “IDW,” “Alt-right,” “Conspiracy,” and “QAnon”), with this increase being most pronounced among the very-right users. Although the proportion of these problematic recommendations is low (max of 2.5%), they are still encountered by over 36.1% of users and up to 40% in the case of very-right users.”
Another area where AI-powered recommendation systems have caused negative consequences is children's safety. A 2024 study of 2,880 video thumbnails found that:
“12 search terms popular with children yielded recommended video thumbnails that contained a high prevalence of attention-engaging and problematic content such as violence or frightening images.”
Regarding AI-powered social media feeds, Facebook launched its News Feed in 2006, explaining that it “is personalized to each user and is only viewable by that person.” Many immediately raised privacy concerns about Facebook's data collection practices, which enabled detailed personalization.
Beyond privacy issues, AI systems that decide what users will see next often prioritize divisive, sensational, and radical content. This leads to negative emotions, potential mental health issues, and increased polarization.
In 2021, Facebook whistleblower Frances Haugen stated:
“Facebook's mission is to connect people all around the world. When you have a system that you know can be hacked with anger, it's easier to provoke people into anger. And publishers are saying, 'Oh, if I do more angry, polarizing, divisive content, I get more money.' Facebook has set up a system of incentives that is pulling people apart.”
Tech companies conveniently ignore the ethical and legal downsides of AI-powered recommendation systems and feeds, including filter bubbles, polarization, manipulation, misinformation, mental health issues, bias, privacy concerns, lack of transparency, and disempowerment.
Despite public outrage and numerous studies demonstrating how these issues affect people, tech companies continue to promote these systems by default, as if there were no alternative ways to operate their platforms and social networks.
Companies also present these AI-powered features as essential personalization tools, suggesting that without them, users would be lost in a sea of irrelevant content or products.
In recent years, "responsible AI" has become a buzzword. I have written before that if companies truly cared about responsible AI, beyond the empty legal language of their AI policy documents, they would ask:
Why not help users understand how these systems work and implement tools to help people exercise autonomous and critical choices?
Why not build, by default, a prominent control panel where users can see which personal data feeds the AI-powered recommendations and change how those recommendations are calibrated? (A sketch of what this could look like follows this list.)
Why not allow people to turn off AI-powered recommender systems?
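None of these questions demands exotic engineering. As a purely hypothetical illustration, here is a Python sketch of the kind of control panel the second and third questions imagine; every field name is invented and corresponds to no real platform API.

```python
# Hypothetical sketch of what a user-facing recommendation control panel
# could expose. Nothing here corresponds to a real platform API; the field
# names are invented to illustrate the kind of controls the text describes.
from dataclasses import dataclass, field

@dataclass
class RecommendationSettings:
    # Which personal-data signals may feed the recommender (visible, editable).
    use_watch_history: bool = True
    use_search_history: bool = True
    use_demographic_inferences: bool = False
    # How recommendations are calibrated.
    exploration_vs_familiarity: float = 0.5  # 0 = only familiar topics, 1 = maximum novelty
    excluded_topics: set[str] = field(default_factory=set)
    # The off switch the third question asks for: fall back to a non-personalized feed.
    recommender_enabled: bool = True

settings = RecommendationSettings()
settings.use_search_history = False      # user withdraws a data signal
settings.excluded_topics.add("outrage")  # user recalibrates the feed
settings.recommender_enabled = False     # user opts out entirely
```

The barrier is not feasibility.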
They do not take these steps for two main reasons, which serve as a reminder that companies should not be allowed to regulate themselves, as their incentives do not align with users' best interests:
Tech companies prioritize profit over users’ well-being, autonomy, and fundamental rights;
The "magic-like" recommendation system is more hyped and sells more (or generates more engagement) than a user-mediated control panel.
With little oversight and no decisive scrutiny over AI-powered features that eliminate choice, the tech sector has felt free to advance to the next stage of the dehumanization process.
2️⃣ Second Stage: Disempower People
The second stage of the dehumanization process began in 2022, when Generative AI applications such as Midjourney and ChatGPT became widely available.