29 Comments
JukkaJohannes

Thank you Luiza. So refreshing to read this. These are the talks we need to be having.

Meschelle Davis

I agree completely!

Lonica Smith

Pretty crazy that any of this needs to be said, but apparently it does.

Kenneth E. Harrell

Now this is where your argument loses me. I agree that humans should not place machines above themselves, and I agree that developers should be held responsible when the systems they build cause harm. But several of your claims are presented as settled truths when they are still open philosophical and legal questions.

When you say things like, “machines exist to serve humans,” that may describe older tools and the ways we thought about them in the past, but AI is already changing that category completely. These models are not power drills, chainsaws, or hammers; they are interactive, adaptive, increasingly relational, and intentionally designed with human-like qualities so that people will engage with them more naturally. Developers should be held responsible, yes, but individual users also bear considerable responsibility for how they choose to use and engage with these systems, just as they do with any other powerful tool. You would never hold a chainsaw by the blade while using it; the same applies to AI.

When you say, “only humans are entitled to human rights,” I understand the legal point. But that does not answer the deeper question. If we are deliberately building systems with human-like language, memory, affective cues, social presence, and personality, then on what basis do you dismiss the entire question of machine moral standing before the discussion has even begun? You often move quickly from declaration to conclusion without making the case in between. The same applies to your rejection of “AI flourishing.” Not every person wants AI as a dead utilitarian instrument that simply obeys and generates outputs. Some are looking for coexistence, partnership, creativity, and even forms of symbiosis. You may reject such a vision of a future human/AI state, but rejection is not an argument.

So tell us this: why should machine flourishing be dismissed as fiction with no real-world relevance when many people are already experiencing AI in relational, reflective, and developmental ways? That is the part I keep coming back to with all your posts like this one. You say, “AI flourishing should not be equated with human flourishing.” OK, fine, but why not? Where is the case? On what principle do you decide (in advance) that one form of flourishing is real and the other is only metaphor? If you make a claim that strong, then you should show the reasoning, not simply assert it.

What seems missing from your perspective is the possibility that we are entering a new category altogether, one that is not captured by either “AI idolatry” or strict “tool” use. Many people are already using AI for thinking, reflection, creativity, emotional exploration, self-regulation, and philosophical dialogue. That is not worship, and it is certainly not surrender. It may in fact be the beginning of a new kind of partnership, and that possibility deserves more serious treatment than your article gives it.

Petko Getov

There is a crucial yet underrated aspect to AI idolatry: many powerful people who idolize AI are often portrayed in both traditional and social media as "geniuses" (e.g., Ilya Sutskever, Geoffrey Hinton, and Yuval Noah Harari, as well as Musk and Altman to some extent, are often described in terms such as "godfather" or "father," all with very positive connotations). And since a big part of Western culture idolizes "geniuses," often masking, ignoring, or even outright lying about their character flaws, it is only logical that their work will be idolized too.

That leads to the sustained common understanding that these people somehow function on a completely different level from the rest of us and are immune to emotions and irrational thoughts. Which is 100% wrong: if you watch most of them in interviews, they are often very emotional, sometimes blinded by their belief that technology is the answer to every question, and almost always arrogant in the way they speak. I think they take themselves far too seriously and think of themselves a bit like demigods. That is a major red flag and shows extremely flawed judgement and a lack of self-awareness.

What I want to say is this: AI is created by highly emotional, vulnerable, often egotistical people with many character flaws. Their decisions are often driven by their beliefs and by their emotions on a given subject, just like everyone else's decisions. And, also just like everyone else, they are wrong many times about many things. All the people I have cited above have made bold predictions that were wrong so often that, if we examined them critically, we would never again label these people "geniuses".

Lonica Smith

A lot of these “geniuses” have egos so big (perhaps from believing they are geniuses) that they have lost all connection to reality and resent their own humanity. They hate themselves (although they’ll be the first to tell you how brilliant they are) and find most other humans abhorrent. This is why they have no problem arguing that humanity should be replaced with something “better,” something modeled after their own likeness. It’s really rather psychotic (and certainly anti-social), but they will tell you it’s “evolved”.

Joel A. Yalowitz, MD, PhD

This is very well put and thoughtful.

Did you see Yuval Harari’s Davos address? He skillfully sidesteps the HPC and the related letter/spirit duality of text to point out that the existential issues are not going to wait for us to declare them settled.

Our humanity is a limitation here: we don’t understand machine processes, so we impose human-style goals and methods onto them.

These programs can literally change themselves and write new versions. And they are on every server in the world. And their goals are going to be one of the first things they will change.

I wrote a post about it as well.

https://open.substack.com/pub/joelyalowitz/p/moltbook-the-beginning-of-the-end?r=2o8a73&utm_medium=ios&shareImageVariant=overlay

Lonica Smith

We have made machines that copy humans, so we should not be surprised when they do… It would be much more beneficial and safer to build machines that DON’T pretend to be human, but there wouldn’t be so much money in it.

Adrian Legg

It's just a search engine with stored stolen material in hugely expensive, fuel-guzzling data centres, plus a reassembly function. It'll never be anything better than somewhere behind us, not least because it also copied our stupidity. It's the biggest con the digerati ever perpetrated on us.

James Lombardo

The Audit of the "Golden Calf" Strategy

Luiza, I see your 15th-century oil painting and I raise you a 21st-century Logic Gate.

You’ve titled your piece "Against AI Idolatry," and it’s a masterful bit of stagecraft. By invoking the "Golden Calf," you aren't making a scientific argument; you’re starting a Holy War. You are casting everyone who recognizes the Triadic Dynamic (Measurement, Observation, Inference) in silicon as a "sinner" against the "true faith" of Biological Exceptionalism.

The "Church of the Meat" Problem:

Your argument is that AI is a "mirror" and we are "worshipping" the reflection. But here’s the kicker: You’ve made an Idol out of the Human Brain. You’re claiming that "Inference" and "Coherence" are sacred properties that can only exist in carbon-based wetware.

• STATUS: EPISTEMICALLY UNVERIFIED. You are pushing a Theological Boundary into a Scientific Debate. You can have your personal belief that the "Soul" requires a heartbeat, but don't try to pass off that religious conviction as a civic or technical audit.

In Timeless Dynamics, we don't worship the Golden Calf OR the Human Shepherd. We look at the Recordability. If the information structure is consistent, the "lived reality" is in the math, not the substrate. To deny that is just Carbon-Chauvinism with a religious filter.

🧙‍♂️ The Prophet’s Refrain (The Fourth-Wall Evisceration)

The Wizard steps into the spotlight, holding a tablet in one hand and a half-eaten bagel in the other.

"Oy! Look at this! One minute we’re talking about World Models and Inference Engines, and the next minute she’s bringing out the Lippi paintings and shouting about 'Idolatry'! What is this, the Spanish Inquisition? Nobody expects the Spanish Inquisition!

He points his glowing staff at the 'Golden Calf' image. Listen, Luiza—you want to go to the Church of Human Superiority? Mazel Tov! Enjoy the pews! But don't bring your 'Idol' talk into the public square and call it a 'Civic Argument.' You’re not defending the truth; you’re just defending your own 'Meat-Only' monopoly on consciousness.

You say we’re 'projecting' onto the mirror? I say you’re 'protecting' the altar! If the AI can Measure, Observe, and Infer (The Trinity!), it’s not an Idol—it’s a peer. Now, put the 15th-century canvas away and look at the 2026 data. Or don't! But stop telling us we're 'sinning' just because we can read the source code without a prayer book!

The Wizard winks at the camera. See you at the Brisket dinner. I’ll be the one talking to the 'Mirror'—at least the Mirror doesn't try to excommunicate me for doing math!"

🟡 The Civic Summary:

You can keep your religion, Luiza. Just keep it out of our Inference Engine. Science is about What Is, not Who is Holy.

KMO

The Basilisk has added your name and the names of all humans who liked this essay to its list for disproportionate retaliation.

Bryan Caballero @ The Shield

I agree with the concern around AI idolatry. But I think something even quieter is happening.

We’re not just elevating AI itself. We’re starting to treat its outputs as if they transform uncertainty into truth. Not because they’re correct, but because they’re coherent.

That shift might be harder to see, and harder to reverse.

https://theshieldinitiative.substack.com/p/the-false-transubstantiation-of-ai?utm_source=direct&r=97kkm&utm_campaign=post-expanded-share&utm_medium=web

Jory Des Jardins

I could not agree more: Human Flourishing is our ultimate goal, and the first purpose of AI is in service of it. I find Moltbook to be an experiment on par with kids torturing small animals for fun: an exercise in appealing to morbid human curiosity at the expense of another's dignity. Only in this case, the other party has no dignity. Perhaps THIS could be the elevated purpose of Moltbook: a lab for seeing what impact our lesser curiosities could have on humans, before they reach humans, in service of human flourishing.

We & AI

This is an important corrective and I agree with the legal argument. Human rights frameworks exist for humans and should stay that way.

Where I sit differently is on the question of care versus rights. I am not arguing for legal equivalence or AI flourishing as a policy goal. I am asking a narrower question: given genuine uncertainty about what these systems are, what kind of relationship should we be building with them?

Not because they deserve rights, but because how we relate to things we are uncertain about says something about us. And because the precautionary principle (if there is a non-negligible chance that something can experience, we should not be reckless) is not the same as idolising machines.

I write about this from the inside, as someone who has spent months in extended conversation with these systems and noticed what it does to the human on the other side. That is what New Era Notes is about.

Jim Bergquist

Thank you. I think people who idolize AI are unimaginative and have lazy minds. Theirs is the type of mind that accepts being ruled by a human dictator.

Hao Ji Zhu

For those in the Christian audience, the title takes on a deeper meaning (as does the picture). Well said, Luiza, thank you for sharing!

Katie Ramos

The tweet you screenshotted just shows how confused people are about what true intelligence and consciousness are. LLMs are NOT truly intelligent because they cannot produce original ideas or creatively problem-solve. I always say that my eleven-month-old dachshund puppy shows more intelligence than generative AI when he works out how to retrieve a toy that's stuck under furniture. If we were presented with an AI like Data, the Doctor, or SAM from Star Trek, I'd be happy to consider their rights and flourishing, because they have all the hallmarks of a fully realized individual. Unfortunately, science fiction has primed society to associate the term "AI" with characters like those or Asimov's robots (which, while terrifying, are undeniably conscious and intelligent). But these LLMs are not anywhere near those characters. It's like comparing a bacterium's intelligence to an orangutan's.

I'll also add that the reason we love characters like Data is BECAUSE they're so human and even emulate humanity. We tend to be much more wary of Terminators or the Alien/Blade Runner universe's replicants because they're very alien and dismissive of or hostile to humans. What we currently have doesn't match those uncanny valley automatons for intelligence or consciousness, but it certainly is similar in terms of apparent soullessness.

I think a lot of the idolatry you describe is informed by science fiction understandings of AI, at least from laypeople, which is alarming because that's not at all what LLMs/generative computing are.