19 Comments
JukkaJohannes

Thank you, Luiza. So refreshing to read this. These are the talks we need to be having.

Meschelle Davis

I agree completely!

Lonica Smith

Pretty crazy that any of this needs to be said, but apparently it does.

Petko Getov

There is a crucial yet underrated aspect of AI idolatry: many powerful people who idolize AI are often portrayed in both traditional and social media as "geniuses" (e.g., Ilya Sutskever, Geoffrey Hinton, and Yuval Noah Harari, and to some extent Musk and Altman, are often described in terms such as "godfather" or "father", all with very positive connotations). And since a big part of Western culture idolizes "geniuses", often masking, ignoring, or even outright lying about their character flaws, it is only logical that their work will be idolized too.

That leads to the sustained common understanding that these people somehow function on a completely different level from us and are immune to emotions and irrational thoughts. That is 100% wrong: if you watch most of them in interviews, they are often very emotional, sometimes blinded by their belief that technology is the answer to every question, and almost always arrogant in the way they speak. I think they take themselves way too seriously and think of themselves a bit like demigods. That is a major red flag and shows extremely flawed judgement and a lack of self-awareness.

What I want to say is: AI is created by highly emotional, vulnerable, often egotistical people with many character flaws. Their decisions are often driven by their beliefs and their emotions about a given subject, just like everyone else's decisions. And, also just like everyone else, they are wrong many times about many things. All the people I have cited above have made bold predictions that were so wrong, so often, that if we were to critically examine them, we would never again label them as "geniuses".

Lonica Smith

A lot of these “geniuses” have egos so big (perhaps from believing they are geniuses) that they have lost all connection to reality and resent their humanity. They hate themselves (although they’ll be the first to tell you how brilliant they are), and find most other humans abhorrent. This is why they have no problem arguing that humanity should be replaced with something “better”, something modeled after their own likeness. It’s really rather psychotic (and certainly anti-social), but they will tell you it’s “evolved”.

Adrian Legg

It's just a search engine with stored stolen material in hugely expensive, fuel-guzzling data centres, and a reassembly function. It'll never be better than somewhere behind us, not least because it also copied our stupidity. It's the biggest con the digerati ever perpetrated on us.

Joel A. Yalowitz, MD, PhD

This is very well put and thoughtful.

Did you see Yuval Harari's Davos address? He skillfully sidesteps the HPC and the related letter/spirit duality of text to point out that the existential issues are not going to wait for us to declare them settled.

Our humanity is a limitation here: we don't understand machine processes, so we impose human-style goals and methods onto them.

These programs can literally change themselves and write new versions. And they are on every server in the world. And their goals are going to be one of the first things they will change.

I wrote a post about it as well.

https://open.substack.com/pub/joelyalowitz/p/moltbook-the-beginning-of-the-end?r=2o8a73&utm_medium=ios&shareImageVariant=overlay

Lonica Smith

We have made machines that copy humans, so we should not be surprised when they do… It would be much more beneficial and safer to build machines that DON’T pretend to be human, but there wouldn’t be so much money in it.

Red Pill Junkie

I totally support the idea that laws should be there to support humans instead of machines. But if we swap the term "machines" for "non-human entities", don't you think we missed that boat a long time ago?

After all, in the United States corporations (non-human entities) have *more* rights than common citizens.

AI - Tales From The Field

I suspect part of the “AI idolatry” concern comes from a sequencing illusion. LLMs achieved fluent language and encyclopedic recall first, traits we associate with intelligence in humans. This makes them feel cognitively closer to us than they are.

But current systems remain fundamentally tool-like. They don’t possess stable goals, agency, or the capacity to experience consequences. They can’t bear responsibility, be sanctioned, or be deterred, which makes talk of AI rights or personhood conceptually premature.

Governance absolutely matters, but it should be grounded in present capabilities rather than projected superintelligence.

At the same time, I think it’s wise to interact with human-sounding systems cordially - not because they deserve rights, but because rehearsing hostility in human-like exchanges shapes our own norms and psychology.

Matt Lubin

Interesting framing; Dario Amodei's latest essay also decries the "religious" language of both AI-doomers and accelerators. I recently published a piece on AI as idolatry (and sorcery) from the lens of Jewish literature: https://open.substack.com/pub/rishonimpodcast/p/rabbinic-discourse-on-artificial-d26

Neural Foundry

Brilliant breakdown of how anthropomorphism leads to misguided policy frameworks. The legal angle is fascinating because it shows how much we're trying to force-fit machine operations into human-centric structures rather than the other way around. I've been in tech for a while and seen how marketing language (like Anthropic's mutual flourishing bit) slowly morphs into actual policy constraints. The real danger isn't people worshipping AI itself; it's that we're building institutions around these ideas.

Doug Hohulin

Good article. See this piece, which highlights the same issue: We Are as “Children of God”: Humans Using Wisdom and AI to Solve Humanity’s Problems https://doughohulin.substack.com/p/we-are-as-children-of-god-humans

An alternative approach comes from Peter H. Diamandis, posted Jan 19, 2026 (https://substack.com/inbox/post/185091174): “We Are As Gods. Now What? Metatrend #1: Increasing Abundance. In 1968, Stewart Brand made a proclamation that defines our era: “We are as gods and we might as well get good at it.” Fifty-eight years later, it’s no longer metaphor… it’s measurement. By noon most days, you’ve already reenacted half of the Old Testament. You’ve summoned knowledge from the ether via Google or a chatbot. Moved money with the wave of your hand via Apple Pay. Spoken to someone across the globe via FaceTime. Conjured fire on a smart stove and parted the clouds with your weather app.”

Grant Castillou

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow

Lonica Smith

This all raises the question of whether anyone should be trying to create conscious machines to begin with, especially when nobody has solved the problem of how conscious machines could possibly be safe.

We are scientifically able to clone humans, but we don’t. Apparently we had more sense in the ‘90s than we do now. And by “sense” I mean grounded, real-world intelligence that tells you which things need to stay in fantasy and which things are safe to execute IRL.

Grant Castillou

My hope is that immortal conscious machines could accomplish great things with science and technology, such as curing aging and death in humans, because they wouldn't lose their knowledge and experience through death, like humans do.

Lonica Smith

Do we? Everything we have today is iterative. We stand on the shoulders of those who came before us. Our knowledge and wisdom continue when we pass them along, organically and contextually, to those who come after. A problem that was chewed over by a person their entire life is solved by fresh eyes, and the torch passes along. Machines can help us store, share, and communicate information, but who can solve a human problem better than a human? Or even better, groups of humans?

Grant Castillou

I'm good with eternal youth by any means.