Against Dystopia: The Post-AI Humanism

AI policy and regulation must be grounded in the protection and empowerment of the people affected by it. I call this approach Post-AI Humanism | Edition #206

Luiza Jarovsky, PhD
May 25, 2025

👋 Hi, Luiza Jarovsky here. Welcome to our 206th edition, now reaching 62,100+ subscribers in 168 countries. To upskill and advance your career:

  • AI Governance Training: Join my 4-week, 15-hour program in July

  • Learning Center: Receive free AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • Subscriber Forum: Participate in our daily discussions on AI


☀️ Ready for Summer Training?

If you are looking to upskill and explore AI's legal and ethical challenges in an in-depth and interactive way, I invite you to join the 22nd cohort of my 4-week, 15-hour live online AI Governance Training, starting in mid-July.

Cohorts are limited to 30 participants, and over 1,200 professionals have already participated. Many have described the experience as transformative (testimonials here); this could be the career boost you need. I hope to see you there!

Join the Next Cohort


Against Dystopia: The Post-AI Humanism

In the previous edition, I discussed the shortcomings of risk-based AI regulations, including the EU AI Act, especially in light of emerging AI risks and the profound ways AI is already harming people.

Today, I focus on potential paths forward for policy and regulatory frameworks on AI, grounded in the protection and empowerment of the people affected by it. I call this approach Post-AI Humanism, and split the discussion into three parts:

  1. Don't focus on what AI can do: focus on how it affects people instead

  2. AI is spreading fast: the legal approach must be explicitly protective, especially with vulnerable populations

  3. AI is evolving fast: the legal approach must be future-proof by default

If we want to avoid dystopian scenarios, we must focus on why AI is different from a technical, legal, and social perspective, and protect people accordingly.

A reminder that here I focus on the legal aspects, which can compel companies to comply. However, Post-AI Humanism can also inspire cultural and social changes that show tech companies that the public demands higher ethical standards.

1. Don't focus on what AI can do: focus on how it affects people instead

Regulating technology is challenging. When regulating AI, there are additional challenges.

First, “AI” encompasses an extremely heterogeneous group of applications. Personal voice assistants, self-driving cars, computer vision devices, AI chatbots, and many more can be grouped as “AI,” despite having little in common regarding how they work.

Despite this heterogeneity, AI laws, including the EU AI Act, frame legal obligations and exceptions based on capabilities and intended use, following a typical product safety template. There are two main issues here:

  • In the case of general-purpose AI systems like ChatGPT, the capabilities and ‘intended use’ are extremely broad, leaving obligations weak and uncertain, and allowing malicious actors to exploit them;

  • Given AI's inherent adaptability, the line between ‘intended use’ and ‘misuse’ will often be gray, delaying enforcement and allowing harm to spread.

From a post-AI humanistic perspective, AI laws and policies would focus instead on how AI systems affect people. Take the example of a multimodal general-purpose AI chatbot; regardless of its announced capabilities or ‘intended use’:

  • If it leads children and mentally vulnerable people to become obsessed or dependent, possibly resulting in death, it should be considered high-risk or prohibited and regulated accordingly;

  • If it lets people create realistic deepfake videos that could fuel widespread misinformation campaigns, it should be considered high-risk or prohibited and regulated accordingly;

  • If it has a potent memory that can store all of a user's preferences and prompt history, and relies on anthropomorphic language to manipulate and persuade, it should be considered high-risk or prohibited and regulated accordingly.

2. AI is spreading fast: the legal approach must be explicitly protective, especially with vulnerable populations

Unlike many other technologies, AI is advancing fast, and there is a global AI race underway. Countries, including the EU, are shifting priorities and loosening their legal approaches to “win” this race.

With that in mind, from a post-AI humanistic perspective, AI laws must:

  • Be explicitly protective, recognizing the AI race's incentives and the immense information, economic, and power asymmetry between AI companies and their users. Laws and policies should be strict and straightforward, and avoid exploitable exceptions;

  • Consider the AI literacy divide (read my article about it). Most people are not tech-savvy, don't know how AI works or how to use it, and will likely be less competitive in the age of AI;

  • Take into account different populations and how their vulnerabilities can be exploited through AI. Children, older people, and people with mental health issues are examples of groups that should receive special attention.

3. AI is evolving fast: the legal approach must be future-proof by default

If we want to focus on protecting and empowering the people affected by AI, legal future-proofing mechanisms should be the rule and designed by default. How?

This post is for paid subscribers
