Luiza's Newsletter

Possible AI Futures

Many associate the future of AI with sci-fi-like scenarios, but we should be talking more about possible AI futures whose vision and foundations are already affecting us today | Edition #246

Luiza Jarovsky, PhD
Nov 02, 2025
“The Zone (Outside the City Walls)” by Georges Seurat, 1882–83 (black Conté crayon on cream laid paper, modified)

👋 Hi everyone, Luiza Jarovsky, PhD here. Welcome to our 246th edition, trusted by over 83,500 subscribers worldwide.

🔥 Paid subscribers have full access to all my essays and curations on AI here and can ask me questions to cover in future editions.


🎓 This is how I can support your upskilling journey in AI:

  • Join the 26th cohort of my AI Governance Training

  • Subscribe your team (3+ people) and save 40%

  • Be notified of open roles in AI governance and privacy

  • Receive educational resources on AI governance

  • Discover your next read in our AI Book Club


👉 A special thanks to Privatemode, this edition’s sponsor:

AI workloads often include highly sensitive data, yet providers still expect you to simply trust them. Privatemode runs AI models entirely inside encrypted environments, keeping even the cloud provider blind to your data. It’s confidentiality that’s proven by cryptography – not by policy. Explore it with a free plan that includes 1M tokens per month.


*To support us and reach over 83,500 subscribers, become a sponsor.


Possible AI Futures

There is a lot of speculation about what a future with AI might look like.

Probably due to decades of AI-themed sci-fi movies, many people immediately associate the word “AI” with fantasy scenarios, such as self-flying cars, malevolent and destructive humanoid robots, or an omniscient and God-like holographic oracle.

The AI industry is deeply aware of this ingrained imagery, and its executives actively play with it to create a sense of fear, fascination, and deep curiosity among the public.

Listen to Ilya Sutskever or Sam Altman speak, and you will see this intentionality clearly at work. In a 2015 blog post, Altman highlighted how machine intelligence might wipe humanity out. Even old-guard AI researchers like Geoffrey Hinton have adopted this tone in recent interviews, sometimes awkwardly.

It is not by chance that ‘AGI’ (artificial general intelligence) and, more recently, ‘superintelligence’ (its mutated successor) have risen as profitable buzzwords in the AI race. They are superlative enough to evoke the fear and sense of catastrophe needed to keep the hype high and the investment money coming.

Tech companies have been selling the idea that the only way to save humanity from malevolent AI and promote disease-curing, climate change-solving, and “abundance-fostering” AI is… to fund them and continue scaling AI infrastructure without end.

It might be that one day the type of AI-powered technology available will look like a sci-fi movie, in the same way that a smartphone would have seemed magical and improbable to anyone living 100 years before it was invented.

That day might come 20, 50, or 100 years from now. It might also never come, and AI might take a different direction, affecting people in a much more visceral way (which, in retrospect, may look more like a historical drama than a sci-fi movie).

With that in mind, I want to talk about possible AI futures that are much closer to where we are now. Although they are not generally part of the collective unconscious about AI, they are already affecting us today and should directly influence AI policy, governance, and regulation decisions.

Let us take privacy, for example. Most general-purpose AI chatbots, such as ChatGPT, now include memory and AI-training features. AI companies often encourage users to leave them ‘on’ so that a) conversations can be more personalized, and b) “you can make the model more useful for everyone.”

Allowing an AI system to train on and memorize your personal information has direct privacy implications.

Beyond unintended leaks and reputational harm, high levels of personalization, coupled with over-agreeability and other anthropomorphic elements often set as the model’s default behavior, could enable emotional manipulation and contribute to mental health issues.

These developments are already unfolding today and have led to serious mental health crises, including tragic suicides directly associated with AI chatbots, such as the cases of Adam Raine and Juliana Peralta.

Now, let us extrapolate to a not-so-distant future, where perhaps a few billion people will be using AI daily, most of them leaving AI training and personalization ‘on’:

This post is for paid subscribers
