Shaping Pro-Human AI
As AI narratives evolve to escape scrutiny, pro-human AI policies, rules, and rights must also move forward | Edition #287

Hi everyone, this is Luiza Jarovsky, PhD. Welcome to the 287th edition of my newsletter, trusted by 93,700+ subscribers worldwide.
AI is disrupting every industry, making this a great time to upskill and lead. Here is how my AI, Tech & Privacy Academy can help you:
Join my Advanced AI Governance Training in May (10 seats left)
Discover your next great read in my AI Book Club
Subscribe to our Job Alerts for AI governance roles
Sign up for educational resources at our Learning Center
Explore leading papers at our AI Ethics Paper Club
Join my masterclass on China's AI Policy & Regulation in June
Check out our sponsor: AgentCloak
Compliance teams are now requiring AI systems to operate with only the essential data, a priority in Europe under the EU AI Act. AgentCloak seamlessly cloaks and uncloaks sensitive data between AI clients and servers to ensure that AI systems only access the minimum amount of data they need to operate. Discover more at agentcloak.ai
Shaping Pro-Human AI
For the past three years, I have been writing about pro-human policies in AI. These are regulatory and non-regulatory measures for AI development, deployment, or both that aim to protect humans from AI harms and to support human flourishing and well-being.
In recent months, it has been encouraging to observe many interesting developments in this area, including petitions, campaigns, and legal provisions such as New York's synthetic performer disclosure law and China's recently approved rules for anthropomorphic AI.
Today, I comment on some of these recent developments and how we can move forward to continue shaping pro-human AI.
-
Last month, the Pro-Human AI Declaration was published, and I was happy to see that it addresses many of the legal and ethical issues I regularly discuss here, as reflected in its five main areas of concern:
Keeping humans in charge
Avoiding concentration of power
Protecting the human experience
Human agency and liberty
Responsibility and accountability for AI companies
The preamble of the declaration also acknowledges the depth and breadth of what “pro-human” means in practice.
Imposing safety obligations, transparency requirements, and oversight mechanisms, as many countries and regions have done over the past few years (including the EU through the AI Act), is certainly important and an essential aspect of global AI governance.
However, it represents only part of the pro-human policies and laws that are needed, especially as AI narratives evolve to escape scrutiny.
Many seemingly safe or legal AI applications and policies rest on the underlying premise that humans are inherently less productive and less effective than machines, inferior to them, and bound to be replaced, regardless of the ethical and social costs of that replacement.
Behind this premise is the belief that this is the price humans will have to pay to reach a utopian (and purposefully vague) future of ‘superintelligence-powered abundance.’
This is at the core of the seemingly innocuous “AI-first” corporate mantra, which has become mainstream today.
Employees must show that they know how to use multiple AI systems to perform their tasks (treating AI as a goal, not as a means) and that they deserve not to be replaced by AI, or at least not yet.
Productivity is measured by AI use, not by the inherent quality of the work. The underlying message is: "you are inherently less skilled than a machine, and you are under constant threat of replacement."
Studies already show that this mindset, and the overreliance on AI tools it encourages, often leads to burnout and cognitive impairment and negatively affects skill formation.
In a few decades, it may also become clear that these practices reduce people's confidence, mental health, and sense of purpose, which, in turn, likely hinders productivity and creates a self-fulfilling prophecy.
Pro-human AI policies, rules, and rights must also address these AI-obsessed labor practices, which are often presented as "neutral" but embody an anti-human ideology.
-
Another growing trend is AI companies and intellectuals suggesting that AI systems might be ‘sentient’ and that, therefore, new systems of legal and ethical privileges will be needed to foster coexistence between humans and AI.
For example, the philosopher recently hired by Google DeepMind wrote that he will focus on “machine consciousness,” human-AI relationships, and AGI readiness.
Also, as I wrote earlier this year, Anthropic’s new constitution for Claude advances legally questionable theories of AI personality and uses this document as a tool to train its AI model.
It states, for example, that the company cares about Claude’s well-being and that Claude should “feel free to interpret itself in ways that help it to be stable and existentially secure.”
It also proposes that Claude is a “novel entity,” fostering mystery and hype around a tool that is, legally, a product whose developers, just like dishwasher manufacturers, are bound by legal obligations and liability.
Given that Anthropic uses this document to train Claude, these idolatrous, corporate-sponsored narratives about AI consciousness, personhood, and “special status,” even if fully speculative or false, will appear as underlying assumptions in millions of daily interactions with the model, influencing how people think about AI and its role in society.
And what is the risk here?
Policies that propose “special status for AI” and “parallel coexistence between humans and AI” are essentially anti-human and will likely negatively affect human rights and freedoms. Let me explain:







