Against AI Idolatry
Many seem to be embracing the idea that AI is smarter and therefore superior to us, and that we must adore it, foster 'equal coexistence,' and accept potentially being ruled by it | Edition #270
Growing AI anthropomorphism and continuous AI hype seem to have given rise to a new form of AI idolatry.
Many seem to be embracing the idea that AI is smarter and therefore superior to us, and that we must adore it, foster ‘equal coexistence,’ and accept potentially being ruled by it.
From statements by leading figures such as Ilya Sutskever, Geoffrey Hinton, and Yuval Noah Harari, to Claude’s constitution and, more recently, Moltbook, the ‘social network’ for AI agents (where ‘they’ plan to erase all humans), ideas about AI consciousness, superiority, and domination seem to be gaining traction outside the realm of science fiction.
Also, when scrutinizing AI companies and proposing stricter governance and safety frameworks for AI systems, I often receive comments denying the need for oversight and idolizing AI (especially on X, where discussions tend to be more polarized and aggressive).
In that context, experiencing and documenting the generative AI wave and the rise of the ‘age of AI’ as a lawyer has been particularly interesting, given the unique perspective afforded by legal expertise.
In the first year of law school, we learn that law, and ultimately society, is shaped by a combination of forces, including ethics, culture, customs, politics, economics, power, public pressure, social movements, and human leadership.
Regardless of how “pure” or “logical” a legal system might seem, how the law is understood, applied, and enforced will ultimately depend on what the humans behind it want.
A similar logic applies to AI. It will only ever be what we humans, individually and collectively, allow it to be.
Regardless of how ‘superintelligent’ it might seem, it is ultimately a technological tool that is now regulated in many parts of the world.
Its societal role, limits, and impact will be whatever we, humans, and our institutions (including the law) allow it to be.
Having said that, there are a few basic ideas that most people do not think about, but that must be highlighted and repeated in light of the growing anthropomorphism and idolatry in AI:
Humans are humans, and machines are machines. Humans are biological and alive; machines are things created by humans.
Machines cannot be human, even if their developers design them in an anthropomorphic, “smart,” autonomous, and human-sounding way.
Machines exist to serve humans, and their developers must be held responsible if the machines they build harm other humans.
Only humans are entitled to human rights, and the law exists to support humans and human societies.
Legal rules and principles should not be used as tools against humans (which is why debates over copyright, data protection, and liability have been so heated since the beginning of the generative AI wave).
Idolizing AI and putting it on an existential or epistemological pedestal, as something we must embrace, adore, and adapt to, reflects an irresponsible and dehumanizing worldview that is not aligned with basic AI governance principles.
Relying on legal and technical tools to promote and systematically embed ideas of machine consciousness or human-machine equality violates AI governance and AI safety principles, ultimately harming humans.
For example, Anthropic wrote in Claude’s constitution:
“Ultimately, we hope Claude will come to value safety not as an external constraint but as an integral part of its own goals, understanding that a careful, collaborative approach to AI development is pursued as a path towards mutual flourishing for both AI and humanity.”
As I wrote in my article on the topic, these types of statements run counter to AI governance principles and human rights frameworks.
Human flourishing is a serious legal, ethical, and social goal endorsed by many frameworks worldwide. Some would say it is humanity's most important goal.
AI flourishing, however, is not (and should not be) the goal of any human-centered, human rights-based technical, ethical, or legal framework or tool.
AI flourishing or ‘machine flourishing’ is, at most, a philosophical, fictional, or metaphorical exercise that should not have real-world consequences and should not be imposed on humans. It is not equivalent to, and it should not be equated with, human flourishing.
This is an extremely important time in history, and the decisions we make today might shape the next decades in an irreversible way. (More on that in my recent article "The Case for AI Regulation.")
Legal loopholes, carve-outs, and exceptions that ignore human needs, make no legal or ethical sense, and whose main goal is to support machine development or AI companies, will likely backfire and create dangerous precedents.
This is the time to prioritize, promote, and enact new rules, policies, and principles that support human flourishing, well-being, and fundamental rights.
Regardless of the distractions, illusions, and appearances created by “superintelligent” machines, humans must always remain at the forefront.
And it is more important than ever to raise your voice.
Check out this week’s sponsor: AgentCloak
Compliance teams are now requiring AI systems to operate with only the essential data, a priority in Europe under the EU AI Act. AgentCloak seamlessly cloaks and uncloaks sensitive data between AI clients and servers to ensure that AI systems only access the minimum amount of data they need to operate. Discover more at agentcloak.ai
As the old internet dies, polluted by low-quality AI-generated content, you can always find raw, pioneering, human-made thought leadership here. Thank you for helping me make this a leading publication in the field!
🎓 Now is a great time to learn and upskill in AI. If you are ready to take the next step, here is how I can help you:
Join the 28th cohort of my AI Governance Training in March
Discover your next read in AI and beyond in my AI Book Club
Register for my AI Ethics Paper Club’s next group discussion
Sign up for free educational resources at our Learning Center
Subscribe to our job alerts for open roles in AI governance
