'AI Resurrection' Is Here
The rise of emotional human-machine interactions, including through 'AI resurrection,' reveals more about the future of AI than most people have realized | Edition #251
👋 Hi everyone, Luiza Jarovsky, PhD, here. Welcome to our 251st edition, trusted by more than 86,300 subscribers worldwide.
🔥 Paid subscribers have full access to all my essays here.
🎓 This is how I can support your learning and upskilling journey in AI:
Join my AI Governance Training [apply for a discounted seat here]
Strengthen your team’s AI expertise with a group subscription
Receive our job alerts for open roles in AI governance and privacy
Sign up for our Learning Center's weekly educational resources
Discover your next read in AI and beyond in our AI Book Club
👉 A special thanks to AgentCloak, this edition’s sponsor:
Compliance teams are increasingly mandating that AI systems use the absolute minimum amount of data, especially in Europe to comply with the EU AI Act. AgentCloak seamlessly cloaks and uncloaks sensitive data between AI clients and servers to ensure that AI systems only access the data they strictly need to operate. Learn more at agentcloak.ai
*To support us and reach over 86,300 subscribers, become a sponsor.
‘AI Resurrection’ Is Here
Over the past three years, emotional, often intimate, bond-forming human-machine interactions have grown immensely and have quickly moved from a niche, relatively stigmatized AI use case to the mainstream.
Their rising popularity is directly associated with the introduction of new model specs, system settings, and ‘AI personalities’ that foster over-agreeability, ‘friendliness,’ sycophancy, memory, personalization, and conversational continuity on general-purpose AI systems such as ChatGPT.
If, before, people had to visit specialized apps such as Replika and Character.AI to interact with AI as an intimate ‘friend’ or ‘partner,’ now, especially after the launch of GPT-4o (highly criticized for inadequate safety standards that led to a wave of psychological harm and teenage suicides, as well as to Ilya Sutskever's exit from OpenAI), hundreds of millions of people can do so using general-purpose AI chatbots like ChatGPT.
The rise of intimate human-machine interactions is also associated with the promotion of ‘AI friends’ by social media giants like Meta. The company saw the business opportunity (emotional dependence = increased usage time = more data = more money) and began promoting them to its billions of users as a solution to the loneliness epidemic.
Endorsement, promotion, and hype, combined with other complex individual and social challenges of our time that amplify existing mental health issues and create new ones, form the perfect environment for the spread, intensification, and diversification of these human-machine interactions.
In the past few days alone, we saw the case of the Japanese woman who ‘married’ her ChatGPT partner in a ceremony, people claiming to be ‘having children’ with their AI partners, and predictions of an upcoming boom in divorces linked to AI use.
Another area of human-machine interaction that seems to be gaining traction is ‘AI resurrection,’ fostered by so-called ‘griefbots.’
Last week, a post from the founder of an AI company offering these services went viral. He wrote: “What if the loved ones we’ve lost could be part of our future?” and posted the video below:
To create an AI avatar, a person records a three-minute video of themselves, following specific instructions. When they pass away, family and friends can continue communicating with their AI avatar.
In addition to the usual ethical and legal challenges associated with AI companions, this form of ‘AI resurrection’ raises additional concerns.
First, mortality is an essential part of existence, and everything that is alive will one day die. For thousands of years, the grieving process, in its individual and collective dimensions, has helped humans and societies make sense of life and what it means to be alive.
‘AI resurrection’ not only interferes with the grieving process but also attempts to erase it by creating the feeling that the deceased person is and will be ‘forever alive.’
This impression of resurrection is, of course, illusory: the app generates a digital avatar that looks similar to the deceased person and a personality that may sound similar to theirs, but both are AI-generated.
Should people build interactions based on lies about a person’s existence? Is it beneficial for individuals and society when companies purposefully blur the limits between what is alive and what is not?
Also, is it ethical to attribute to a deceased person statements, personality, and behavior that are not and were not theirs? Should the person’s consent, given while they are alive, legally cover these practices, including potential misuse of their likeness?
When an app like this creates the impression that the AI representation of a deceased person is the real person, we are allowing AI companies to reduce humans to digital avatars.
What about the grandmother’s hugs, smell, presence, and real-world actions? The hidden message here is that these do not really matter because the AI avatar and the LLM-powered communication suffice to represent her existence.
We can also consider the manipulative and exploitative potential of these apps: under the implicit threat that the individual might “lose access to their beloved grandmother,” or lose the memory of their recent interactions, the company has leverage to raise monthly payments indefinitely.
The emergence of this form of ‘AI resurrection’ also gives us an important glimpse into the future of AI, regardless of empty speculation about artificial general intelligence (AGI) or superintelligence: