AI Is Not an "Alien"
Harari's Davos speech embodies exaggerations, distortions, and fictional hypotheses about AI that have been strategically used to relativize legal principles, rules, and rights | Edition #271
A few people asked me what I thought of the speech by Yuval Noah Harari, the historian and best-selling author of Sapiens and other books, at this year's World Economic Forum meeting, so I watched the 18-minute recording.
My disappointment and disbelief began 30 seconds into his talk and continued until the end.
Harari's speech embodies exaggerations, distortions, and fictional hypotheses that many in the AI industry have presented as reasons to fear, and to surrender to, humanity's supposedly inevitable downfall and the rise of AI as a new “superior entity.”
Perhaps even more importantly, these hypotheses have been strategically used to justify legal, economic, and technical exceptionalism, the relativization of legal principles, rules, and rights, and the spread of a corporate-backed type of AI idolatry.
Strangely, while listening to him talk about AI, I had the impression that he was discussing extraterrestrial life or an alien invasion, as many of his statements seemed uninformed or disconnected from reality.
I checked his company's website and, surprisingly, this does indeed seem to be his reference: it features an article titled “AI - The Alien Among Us,” accompanied by a (weird) image.
When an extremely popular author like Harari echoes questionable statements about AI at a global forum like Davos, those statements can quickly be translated into real-world policies, decisions, and goals, and affect real people.
With that in mind, let me discuss some of Harari's most exaggerated, distorted, and fictional claims from his recent speech at Davos.
-
Harari starts by saying that the most important thing to know about AI is that it is not just another tool, but an agent:
“The most important thing to know about AI is that it is not just another tool. It is an agent. It can learn and change by itself and make decisions by itself. A knife is a tool. You can use a knife to cut salad or to murder someone, but it is your decision what to do with the knife. AI is a knife that can decide by itself whether to cut salad or to commit murder.”
Not so fast.
For AI to be able to “decide to commit murder,” the humans behind that system's development must first have failed to build sufficient guardrails and oversight mechanisms to control it.
Government and regulatory bodies must also have failed to impose laws, standards, and AI safety frameworks to control and oversee AI systems within their jurisdictions.
An AI system is not an alien that falls from the sky and decides, on its own, to commit murder.
AI is a tool built by humans. Even if this tool has some level of autonomy (which might lead to harm), its design, behavior, and controls are still the product of human decisions, actions, and omissions, whether skillful, negligent, or malicious. Claiming otherwise promotes baseless fear and hype.
Harari then compares AI to the human mind:
“Some people argue that AI is just glorified autocomplete. It merely predicts the next word in a sentence. But is that so different from what the human mind is doing? Try to observe, to catch the next word that pops up in your mind. Do you really know why you saw that word, where it came from? Why did you think this particular word and not some other word? Do you know?”
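Before responding, it helps to make the terms of the comparison concrete: in today's large language models, “predicting the next word” is a literal, inspectable computation. Here is a minimal sketch of that prediction step, assuming the Hugging Face transformers library and the small GPT-2 model (my illustrative choices, not anything referenced in the speech):

```python
# Minimal sketch: inspecting a language model's next-word prediction.
# Assumes the Hugging Face "transformers" library and the small GPT-2
# model; any causal language model would illustrate the same point.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "You can use a knife to cut"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores for every vocabulary item at every position in the prompt.
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Convert the scores at the final position into a probability
# distribution over the vocabulary, then print the five likeliest
# candidates for the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")
```

Nothing in this mechanical step settles whether it resembles human cognition one way or the other; it only shows what the “autocomplete” label in this debate literally refers to.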
Most people might not realize it, but this is either an uninformed or intentionally misleading take. Here is why: