The Case for AI Regulation
And why you should care | Edition #262

👋 Hi everyone, Luiza Jarovsky, PhD, here. Welcome to the 262nd edition of my newsletter, trusted by more than 88,600 subscribers worldwide.
🎓 Now is a great time to learn and upskill in AI. Here is how I can help:
Join my AI Governance Training [or apply for a discount]
Discover your next read in AI and beyond in my AI Book Club
Sign up for our Learning Center’s free educational resources
Watch my AI governance talks and learn from world experts
Subscribe to our job alerts for open roles in AI governance
The Case for AI Regulation
Humans are naturally ‘osmotic’; we absorb and mimic the beings, things, and ‘vibes’ around us.
We have been using social media for the past 20 years, and now we write, behave, and think in ways that are more likely to be rewarded by its algorithms.
Also, 20 years later, we are still dealing with the consequences of bad policies and poor regulation from the early days of social media.
Social media changed humans and human society, and some of these changes are irreversible.
-
Three years ago, millions began using AI.
It has been pushed onto people much more intensely than previous technologies. Many use it pervasively, both personally and professionally.
Like social media, AI will also influence how people write, behave, and think.
Many will behave in ways that are more likely to be rewarded by AI systems. Many will prefer interacting with AI systems rather than with humans. Many might be permanently transformed by their interactions with AI systems.
It will shape people both individually and collectively, and in ways that might be irreversible.
Unlike social media, however, AI can harm people in a systemic and ultra-personalized way, and it could more directly lead to catastrophic events.
Also, unlike social media, there is much more hype and pressure, both economic and political, to “remove the red tape,” deregulate, and allow AI to spread freely, at any cost.
-
And where am I going with all this?
These are still the early days of the ‘AI age,’ when AI becomes part of millions of people’s daily lives and pervasively influences them.
Whatever we decide to do now, however, will become systemic and potentially irreversible within 20 years.
If social media is any example, this is the time to set the tone, standards, values, rules, and rights that should be preserved and protected at any cost.
Now is the time to act rationally, collectively, and democratically to ensure that AI serves humans and humanity in the best possible way.
I sincerely think we can do it, starting at the local level and culminating in some form of global consensus on AI.
This is my main motivation for writing this newsletter, and I hope that we will see tangible gains in 2026.
Hopefully, this will be the year when pro-human policies, rights, and rules are at the forefront of global AI policy and regulation.



I do believe that we need AI-generated content to be tagged, like a watermark in the corner. As a human, I believe it's important to distinguish between the brain and the network so we're not all left wondering what's real and what's fake. It's this kind of misinformation that can start feuds, slander, and even wars if ingested by those who can't tell what's right or wrong.
"Whatever we decide to do now, however, will become systemic and potentially irreversible within 20 years."
Nothing like 20 years.
More like 2.
People are fast asleep.
I am a psychoanalyst. I talk to other individuals, that is my job.
The role of AI at the moment is debasement and violation of the individual on a huge scale.
For all of AI's potential for humanity, at the moment a huge amount of deliberate damage is being caused.
I call it the mind fuck of the first person singular.
It is in plain sight.