Rerouting AI Governance
AI governance has become significantly stiffer and overly focused on standardized frameworks, which has made it more distant from AI's real-world implications. We must change that | Edition #214

Luiza Jarovsky, PhD
Jun 27, 2025
👋 Hi, Luiza Jarovsky here. Welcome to our 214th edition, now reaching 65,000+ subscribers in 168 countries. To upskill and advance your career:

  • AI Governance Training: Apply for a discount here

  • Learning Center: Receive free AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • AI Book Club: Discover your next read and expand your knowledge

  • Become a Subscriber: Read all my analyses


👉 A special thanks to ComplyDog, this edition's sponsor:

ComplyDog helps online businesses stay compliant with GDPR through its comprehensive portal, customizable cookie banner, and full compliance solution. Streamline data protection with a simple, all-in-one service. Get started with a free trial.


Rerouting AI Governance

Recent AI trends, such as the corporate obsession with AI-first strategies and AI fluency, “AI companionship” becoming mainstream, the EU's narrative shift (which I have called the Washington Effect), tech executives' quest for a third AI-powered device (in addition to smartphones and computers), and this week's two AI copyright decisions (1, 2) on fair use, have made me reflect deeply on the current state of AI governance.

In today's edition, I argue that over the past two years, AI governance has become significantly stiffer and overly focused on standardized frameworks, which has made it more distant from AI's real-world implications, allowing interested parties to exploit it.

I propose a swift rerouting to help it achieve its mission of protecting humans from the negative implications of AI.

-

If you are an AI governance ‘insider,’ you probably noticed that the field has changed in the past two years. It has become more formalistic, standardized, and stiff.

There are frameworks, standards, infographics, checklists, and cheatsheets everywhere. At the same time, more and more tech companies, AI developers and deployers, and other stakeholders in the AI value chain seem simply not to care.

They move fast and break things (with AI), without fear of public scrutiny, backlash, or accountability, as if we were back in 2004.

It is as if the field had been “domesticated.” As if publicly announcing a responsible AI strategy and filling out a compliance form were all it took for a company to get a free pass and continue with whatever AI practice is most profitable.

It should not be this way, especially if we want to govern AI.

It is time to take the next step.

To protect humans, AI governance must take a firmer stand, establish clear boundaries, and fiercely protect the resources and tools (technical, legal, and ethical) that enable humans to live, create, grow, and thrive.

Human well-being, development, and dignity should be absolute priorities. Ignoring the early signs that AI is invading and degrading the human space is the very definition of ungovernance.

A few examples of how this ‘rerouting’ of AI governance works in practice:

I have been writing extensively about the dangers of anthropomorphic AI chatbots, especially as we have seen more and more cases of people becoming dependent on and emotionally attached to them (some claim to have “married” chatbots), chatbot-related psychotic delusions, and particularly unsafe interactions, which have already led to two deaths.

AI governance as a field should focus not only on building a much more robust body of social science research on AI's real-world impact, as I wrote on Sunday, but also on setting boundaries and taking a stronger stand to protect what is human and the space humans need to thrive.

When people are giving up real-world relationships to propose to an AI chatbot, or when they are interacting so obsessively with their “AI friends” that their mental health deteriorates to the point that we already have two suicides, it might mean that the risk is too high.

It might be time to recognize that human well-being and mental health are incompatible with these types of human-AI interactions, and that making them freely available borders on negligence.

Examples of a tougher, evidence-based AI governance stance on anthropomorphic AI include banning these systems or restricting vulnerable audiences' access to them, pushing for greater transparency about their real-world impact (including through social science approaches), and demanding that the companies behind them improve their safety practices.

Copyright is another field where we can observe AI governance's current stiffness and its failure to shield the space humans need to thrive:
