AI's Acceleration Paradox
The AI industry's acceleration narrative ignores basic facts about the human body, the human mind, human behavior, and human societies. It might drag us to a dystopian future | Edition #278

👋 Hi everyone, Luiza Jarovsky, PhD, here. Welcome to the 278th edition of my newsletter, trusted by 91,700+ subscribers worldwide.
As the old internet dies, polluted by low-quality AI-generated content, you can always find raw, pioneering, human-made thought leadership here. Thank you for helping me make this a leading publication in the field.
🎓 Now is a great time to learn, upskill, and get involved in AI governance. If you are ready to take the next step, here is how my Academy can help:
Join the 29th cohort of my AI Governance Training in May
Discover your next read in AI and beyond in my AI Book Club
Join our AI Ethics Paper Club’s group discussion on March 31
Sign up for free educational resources at our Learning Center
Subscribe to our Job Alerts for open roles in AI governance
Check out our sponsor: Didomi
Discover the 2026 Data Privacy Benchmark, Didomi’s annual study of global consent collection practices. Based on millions of consent interactions across 30,000+ websites and apps, it highlights key trends in consent rates, banner formats, user behavior, and regulatory impact, helping organizations strengthen their privacy and data strategies. Download the full report.
AI’s Acceleration Paradox
Among the main promises of the AI industry today are productivity and acceleration.
People would use AI to complete more tasks in less time, AI would create and complete more tasks autonomously, and AI would coordinate other AIs to create and complete more tasks autonomously.
As a consequence, the AI industry says, countries would economically benefit from this AI-driven increase in production.
We would enter an era of abundant intelligence in which even the most technically complex challenges would be solved in a short period of time, including, for example, curing all diseases.
The premises above are part of the AI industry's mainstream discourse today, and they help keep billions of dollars in investments flowing.
However, when you zoom in and look at how humans interact with AI and what AI-powered acceleration actually means, you realize that these promises are based on false premises.
They ignore basic facts about the human body, the human mind, human behavior, and human societies.
Worse, these false premises are dragging us toward a dystopian future in which we will be forced to constantly override biological limits, ignore psychological needs, and devalue human expression in order to thrive in a world that prioritizes machines.
-
Regardless of how shiny, advanced, and impressive our technological tools might be, we, humans, are living entities.
As such, there are biologically coded boundaries, shaped over millions of years of evolution, that we cannot really escape.
The average human lifespan is 73 years, and we need around seven hours of sleep, three liters of water, and 2,000 calories a day to stay healthy.
Beyond our physiological needs, we have emotional and psychological needs, including safety, love, belonging, esteem, and self-fulfillment.
Failing to meet any of these needs can lead to illness or even death.
These are hard limits that cannot be increased with more computing power, larger datasets, new training methods, or more efficient chips.
Even though some of us use a series of digital tools in our daily routines, we are essentially analog. Our biological systems operate in a continuously variable physical way.
We are born, and one day we die. Death is the only certain thing in life.
As such, we consciously or unconsciously spend much of our time dealing with the challenges of our own finitude:
Why am I here?
What is my purpose?
How can I make my life more meaningful?
Being alive is finding meaning in every day of our existence, discovering our own journey, and making sense of it.
I am sorry to break the news to the AI industry, but it has nothing to do with acceleration or productivity.
I have never heard of anyone on their deathbed lamenting not being more productive or accelerated.
In fact, the opposite is true. According to Bronnie Ware's “Regrets of the Dying,” these were the five most common regrets people expressed in their last days:
I wish I’d had the courage to live a life true to myself, not the life others expected of me.
I wish I hadn’t worked so hard.
I wish I’d had the courage to express my feelings.
I wish I had stayed in touch with my friends.
I wish that I had let myself be happier.
People regret having worked so much. People regret not having lived a more meaningful life, truer to themselves and their feelings. People regret not spending more time with friends.
As humans, we crave connection, belonging, meaning, and love. Our activities should, ideally, support that. Everything else is a form of distraction that we might regret later.
What does this have to do with AI?
A large part of the discourse around AI is focused on the idea of acceleration. In practice, for the individual, AI-powered acceleration will often involve:
Delegating tasks to AI systems and losing control over work processes and outputs;
Failing to develop essential skills due to ongoing AI use for creative and intellectual tasks;
Deskilling due to constant AI-powered automation;
Having to manage a large number of AI-powered tasks without understanding how each process and decision works;
More isolation and disconnection from human teams due to ubiquitous automation;
And similar dynamics.
AI-powered acceleration of this kind often runs counter to human physical and psychological needs: it does not support connection, belonging, or meaning.
A recent study showed, for example, that AI does not reduce work but intensifies it, in ways that are not necessarily net positive:
“That workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.”
Another study showed that AI negatively impacts skill formation, especially among junior employees:
“Together, our results suggest that the aggressive incorporation of AI into the workplace can have negative impacts on the professional development of workers if they do not remain cognitively engaged. Given time constraints and organizational pressures, junior developers or other professionals may rely on AI to complete tasks as fast as possible at the cost of real skill development.”
Moreover, it is unclear to me whether the proposed AI-powered acceleration will ever lead to the promised beneficial outcomes, such as curing all diseases and solving myriad other complex challenges.
One reason is the incompatibility between continuous acceleration and humans' ability to keep up with and maintain meaningful control over processes and decisions that could lead to these positive outcomes.
For AI capabilities to translate into meaningful human and societal benefits, they must make sense in a world of humans. Biological humans with hard-coded needs and boundaries.
Living entities who know they will die and try to find meaning, purpose, and connection in their daily lives.
If the AI industry continues to foster growing acceleration, but humans cannot maintain meaningful control, oversight, and scrutiny over AI-powered processes and outcomes, a possible consequence is uncontrolled chaos.
From that, there are a few potential futures, including:
We decide that AI systems are smarter, faster, and more intellectually capable than we are, so they should fully control the decision-making processes for the benefit of “progress and innovation”. We treat humans as limited entities that must delegate decisions to increasingly smart machines. We treat AI systems as “special entities” or “aliens” that we must respect. We end up not having full control of the outcomes, positive or negative.
We decide that AI is an extremely powerful technology that requires strict human oversight and control. We understand that AI is a tool that should foster human flourishing, so AI development and deployment should follow rules that focus on human-led innovation, human well-being, and the protection of fundamental rights. We might have to limit certain AI use cases so that vulnerable groups are protected and catastrophic risks are averted.
To be clear, the AI industry is currently fostering the first option above, as discussed in my recent articles about Claude’s new constitution, Yuval Noah Harari’s World Economic Forum speech, and AI idolatry.
I sincerely hope we can push for changes so that the future will be closer to the second option, where pro-human policies, rights, and rules are at the forefront, and where humans remain in control.
I hope you can join me in these efforts.




I agree Luiza. The acceleration paradox is real, and I'd name the mechanism more precisely: acceleration doesn't just outpace the body. It dysregulates it. A nervous system under constant demand without completion shifts into survival mode. Narrow focus. Shallow processing. Reduced tolerance for complexity. The very capacities AI claims to augment are the ones chronic acceleration destroys. The body doesn't speed up. It braces. And a braced system cannot learn, connect, or make meaning. The paradox isn't just that acceleration fails to deliver. It's that it degrades the organism it promises to serve.
Whilst I agree that AI is causing societal change and in some cases may negatively affect engagement and cognition, isn't this true of all new technologies? The invention of the printing press replaced skilled and thoughtful transcribers with machine operatives. The introduction of spreadsheets meant that fast mental calculation and concentration were automated with a few simple keystrokes. However, these technologies freed people up to carry out more demanding and productive tasks. I am sure AI will do the same; we just need to adjust.
What I am more concerned about is that technology invariably concentrates power and wealth, increasing inequality and increasing dependence on the owners of the technology. This is particularly true of AI, as there is a race to create superintelligent AGI that could establish a near monopoly over our lives.
For me, the real question is: how do we use technology and distribute the wealth that it brings, so that people can work less and lead more fulfilling lives?