We Must Fix AI Chatbots' Design
We got it wrong with social media. Let's not do the same with AI
Hi, Luiza Jarovsky here. Welcome to our 200th edition, now reaching 59,600+ subscribers! If you're looking to upskill and advance your career, check these out:
AI Governance Training: Join my 4-week live online program
Learning Center: Receive free additional AI governance resources
Job Board: Explore job opportunities in AI governance and privacy
Subscriber Chat: Participate in daily discussions on AI
A special thanks to TrustWorks, this edition's sponsor:
Just for Luiza's subscribers, last week available! Get free access to TrustWorks' award-winning AI Governance platform. Instantly identify AI use in your organization (including shadow AI), streamline risk classification under the EU AI Act, and access detailed insights for AI risk management. Don't miss out: start your free trial today!
We Must Fix AI Chatbots' Design
Technology is not neutral: it's always trying to shape user behavior in a way that benefits the company.
As a consequence, design is not neutral either. Every time a company designs a new interface for a website or app, it optimizes every detail to steer the user's behavior toward outcomes that benefit the company, not the user.
People often forget this. Nothing is left to chance: colors, sizes, fonts, language, buttons, notifications, layers, menus, settings, and every other detail of a website or app are carefully thought out by dozens, sometimes hundreds, of professionals to shape the user's behavior in a specific way.
The Social Networks Disaster
Let's take social networks as an example. Monetization usually happens through ads, which means companies will optimize every single element of the social network's interface to ensure that:
More people become users
Users stay as long as possible each time they use it
Users actively interact with various interface elements, including liking, commenting, sharing, sending messages, joining groups, clicking on ads, and more
That's why:
There is a never-ending feed that keeps adults and children awake at night, doomscrolling
The first posts or videos that appear when users open the social network are often extremely attention-grabbing, carefully picked by the AI algorithm to be irresistible to that specific user
The notification badges, usually bright red with a number inside, appear on almost every social network, reminding users that their activity is measured by the number of people who like, comment on, or share their posts and comments
Notifications are essential for design optimization, as they power the dopamine cycle that wires users' brains and keeps them coming back. Every new like, comment, or share is a new dopamine shot, encouraging them to post more and to crave more shots.
In my opinion, social networks are a complete disaster from a design perspective.
Regulation and enforcement took too long. Companies benefited from a regulatory wild west for years, aggressively optimizing every design aspect for profit while ignoring well-being, mental health, and fundamental rights.
It's 2025, and many elements of social networks' design are still poorly translated into policies, laws, and enforcement actions.
Unfortunately, in terms of how they design their products and the practical implications for people, social media companies have an almost free pass.
AI: The Clock is Ticking
We cannot repeat the same mistakes with AI, especially after seeing the damage that regulatory inertia caused with social networks.
AI chatbots' design should be optimized to respect fundamental rights
If we do nothing, AI companies will optimize AI chatbots' design for profit, regardless of the consequences.
We must act now. Professionals from various fields, including design, user experience, human-computer interaction, law, compliance, policymaking, and others, must get involved.
We are, unfortunately, already quite late. Here are examples of popular AI chatbots whose design doesn't respect fundamental rights: