Luiza's Newsletter

AI as a Manipulative Informational Filter
AI adds unsolicited noise, bias, distortion, censorship, and sponsored interests to raw information, altering how people understand the world and exposing society to new risks | Edition #215

Luiza Jarovsky, PhD
Jul 02, 2025

👋 Hi, Luiza Jarovsky here. Welcome to our 215th edition, now reaching 65,800+ subscribers in 168 countries. To upskill and advance your career:

  • AI Governance Training: Apply for a discount here

  • Learning Center: Receive free AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • AI Book Club: Discover your next read in AI and beyond

  • Become a Subscriber: Read my full analyses and stay ahead


🎓 Join the 23rd cohort in September

If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, I invite you to join the 23rd cohort of my 15-hour live online AI Governance Training, starting in September.

Cohorts are limited to 30 people, and over 1,200 professionals have already participated. Many have described the experience as transformative and an important step in their career growth. [Apply for a discounted seat here].

Join the Next Cohort


AI as a Manipulative Informational Filter

As the generative AI wave advances and we see more examples of how AI can negatively impact people and society, it becomes clearer that many have vastly underestimated its risks.

In today's edition, I argue that due to the way AI is being integrated into existing systems, platforms, and institutions, it is becoming a manipulative informational filter.

As such, it alters how people understand the world and exposes society to new systemic risks that were initially ignored by policymakers and lawmakers, including in the EU.

-

Two weeks ago, Marc Andreessen said on Jack Altman's podcast:

“AI is going to be the control layer for everything (…) how you interface with the education system, with the healthcare system, with transportation, employment, with the government, with the law, right? It's going to be AI lawyers, AI doctors, AI teachers (…)”

and also

“the analogy is not with cloud or the internet, but with the invention of the microprocessor. This is a new kind of computer; everything that computers do can get rebuilt. Everything is going to be rebuilt”

This is, of course, the hopeful view of a tech entrepreneur and venture capitalist who is interested in this cycle of destruction and reconstruction, high valuation, hype, and profit.

At the same time, recent developments in tech show that Andreessen's view is already unfolding. Additionally, the ‘control layer’ he refers to is also, from a cognitive perspective, a manipulative informational filter whose medium- and long-term consequences have been ignored from ethical and legal perspectives.

Powered by the AI-first and AI fluency trends, companies everywhere are racing to adopt AI as an end, not a means, and forcing employees to uncritically embrace AI or lose their jobs (well, many will likely be fired anyway).

This creates a new set of incentives and dramatically changes corporate culture: companies in every sector are now primarily focused on both profit and AI-powered automation, multiplying AI's ubiquity, as well as its informational influence on people.

There is also the hardware aspect, which is essential when discussing informational filters. AI is already embedded in the systems that power computers and smartphones, and each new operating system update adds more AI features, ensuring that people will only interact with information through AI lenses.

Meta's glasses and OpenAI's mysterious companion-surveillance gadget are designed to reinforce the informational filter, potentially adding AI intermediation to a person's every single daily interaction.

It reminds me of what Shoshana Zuboff once said: “It is no longer enough to automate information flows about us; the goal now is to automate us.”

AI is a manipulative informational filter because it adds unsolicited noise, bias, distortion, censorship, and sponsored interests to raw human content, data, and information, significantly altering people's understanding of the world. How?

This post is for paid subscribers

© 2025 Luiza Jarovsky