32 Comments
Stephen Hanmer D'Elía, JD, LCSW

I agree Luiza. The acceleration paradox is real, and I'd name the mechanism more precisely: acceleration doesn't just outpace the body. It dysregulates it. A nervous system under constant demand without completion shifts into survival mode. Narrow focus. Shallow processing. Reduced tolerance for complexity. The very capacities AI claims to augment are the ones chronic acceleration destroys. The body doesn't speed up. It braces. And a braced system cannot learn, connect, or make meaning. The paradox isn't just that acceleration fails to deliver. It's that it degrades the organism it promises to serve.

Clint Cain

I agree 100%. People realizing that they can actually get more done partnering with AI may fall into the abyss of endless work, leading to extreme burnout for sure.

We should definitely be looking at AI for the betterment of mankind.

Rob Ashton

100% In fact, I’ve been teetering on the edge of that abyss myself all week. It’s dizzying (yet addictive). Managing mental health while using these tools is a whole new skill in itself.

Clint Cain

Totally, I use built-in breaks. I built my own task manager just to help me not burn out. It actually stops me! "Should you take a pause?" lol
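A break reminder like that can be sketched in a few lines. The thresholds, function names, and wording below are illustrative assumptions, not the actual tool described above:

```python
# Minimal sketch of a "built-in breaks" task manager: after a fixed
# stretch of work it nudges you to pause, then resets the counter.
WORK_MINUTES = 50   # assumed work stretch before a forced pause

def next_prompt(minutes_worked: int) -> str:
    """Return a nudge once the work stretch is up, else stay quiet."""
    if minutes_worked >= WORK_MINUTES:
        return "Should you take a pause?"
    return ""

def run_session(total_minutes: int) -> list[str]:
    """Simulate a work session minute by minute, collecting every
    break prompt that would have been issued along the way."""
    prompts = []
    worked = 0
    for _ in range(total_minutes):
        worked += 1
        nudge = next_prompt(worked)
        if nudge:
            prompts.append(nudge)
            worked = 0  # counter resets after each break
    return prompts
```

In a real tool the loop would sleep on a timer or hook into the task list; the simulation form just makes the reset logic easy to check.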

Brian Gorman

Luiza, thank you! Like you, I believe that we are heading onto two divergent AI pathways. The dystopian AI path relegates humans to something less than the technology and currently has the louder voice. However, it is based on a false assumption: that success in business is driven by intelligence. The other path, one in which AI lifts us up and serves as the catalyst for a more human, more purpose-filled work experience, understands that there is a difference between intelligence and wisdom, and that only people bring the latter. Our wisdom comes from our successes and those things we have messed up along the way. It doesn't come from optimization, but from experimentation and innovation. It is fueled by purpose, by our physical, emotional, mental, and spiritual energy. It is exercised through discernment, judgment, and ethical (not optimal) decision-making. There is a choice as to which path to move down. But the further down either one, the more difficult it is to move to the other. Choose wisely, leaders.

MARIANNE BRANDON

Fabulous points! You bring such humanity to this space.

Violante of Naxos

The title of this raises the contrarian question for me: is what we're currently living through truly 'topian'? Quite a lot of what we deal with in government, corporations, and society at large doesn't seem to indicate so.

Catch Us Up

Brilliant. Thank you

Gabriela Sued

Beautiful words, thank you Luisa.

Forest Mars

> "The AI industry's acceleration narrative ignores basic facts about the human body, the human mind, human behavior, and human societies. It might drag us to a dystopian future"

But for a beautiful moment in time, we created a lot of value for shareholders.

https://pbs.twimg.com/media/FFWo1BtVcAAAPEz.jpg

Stewart MacInnes

Whilst I agree that AI is causing societal change and in some cases may negatively affect engagement and cognition, isn't this true of all new technologies? The invention of the printing press replaced skilled and thoughtful transcribers with machine operatives. The introduction of spreadsheets meant that fast mental calculation and concentration were automated with a few simple keystrokes. However, these technologies freed people up to carry out more demanding and productive tasks. I am sure AI will do the same; we just need to adjust.

What I am more concerned about is that technology invariably concentrates power and wealth, increasing inequality and increasing dependence on the owners of the technology. This is particularly true of AI, as there is a race to create superintelligent AGI that could establish a near monopoly over our lives.

For me the real question is: how do we use technology and distribute the wealth that it brings, so that people can work less and lead more fulfilling lives?

Samuel JD Scorsone

AI has its place for sure. Same way Google searches changed how we learned information. The way spell check made our documents more legible. But like, it's naive to think "spreadsheets" aren't used in evil ways. Or radio waves aren't used for nefarious purposes. All these benign technologies can be used for things we label "bad." AI is another one. But what I don't get is: what is the end goal? Putting aside Terminator movie speculation, if AI takes all of our jobs, and we have no money, how do these powerful corporations get richer? Do they give us UBI and try to win our income the way they fight for our data?

Stewart MacInnes

It's an AI arms race: everyone is rushing for the finish line without thinking about what happens next.

Samuel JD Scorsone

It’s what we do best.

John Stark

Excellent comment. This technology is controlled by people obsessed with wealth and power. Who's looking out for the well-being of the human race or human society? The printing press helped to break down the old hierarchies. This seems to do the opposite.

Samuel JD Scorsone

I think I read somewhere that science and technology have their roots in imperialism. That is, wealth and power. Empires paid for science in the hopes it would help them spread their empires. I think that makes sense, even if I don't have the history right. Manhattan Project, space race, etc. That is to say, what technology is not controlled by these same people?

Kenneth E. Harrell

"Who's looking out for the well-being of the human race or human society?"

Well, that would be you, John, and the millions of others out there who feel precisely as you do. We have to do what we can at the local/regional level to reclaim the future we want.

Reclaiming the Future

Three Essays on Technology, Power, Human Agency and the Work Ahead

https://kennetheharrell.substack.com/p/reclaiming-the-future

Jim Procter

There is some chance that the technological advances enabling AI will become more widely accessible, in the same way that movable type went from trade secret to ubiquitous component, but that's really a poor analogy for what is currently taking place. Computing has seen flips in capability for the lowly consumer, as when GPUs became more powerful than supercomputers (for some things). This change will take more time than we have, however, so clear-headed governance will need to be brought to bear.

Carsten Bergenholtz

Interesting piece. There is an angle in there that resonates with our recent AMLE paper (https://journals.aom.org/doi/10.5465/amle.2025.0029). What struck us was that GenAI does not simply 'accelerate' work; rather, it can change the kind of cognitive work involved. In our experiment, we asked participants to solve ill-defined, time-pressured tasks. Results showed that low performers improved when getting access to a chatbot, while high performers actually declined. Why? We show that low performers benefited from the structure and phrasing the chatbot provided, while high performers often did worse because they had to monitor, evaluate, and integrate lots of plausible text under time pressure. If you don't know much, getting some information is useful. If you already know a fair bit, getting even more (plausible, voluminous output) can be a cognitive load 'tax'. The acceleration of output can reduce meaningful control over the thinking process itself. To me, that is a concrete version of the paradox you describe.

A similar point is made by Simon Willison, one of the best-known independent writers and developers working on AI-assisted coding: "Using coding agents well is taking every inch of my 25 years of experience as a software engineer, and it is mentally exhausting. I can fire up four agents in parallel and have them work on four different problems, and by 11am I am wiped out for the day. There is a limit on human cognition. Even if you're not reviewing everything they're doing, how much you can hold in your head at one time. There's a sort of personal skill that we have to learn, which is finding our new limits. What is a responsible way for us to not burn out, and for us to use the time that we have?"

https://x.com/lennysan/status/2039845666680176703

Kenneth E. Harrell

I agree that the "acceleration narrative" coming out of parts of the AI industry subculture often feels completely disconnected from the basic realities of human biology, psychology, and social life. Humans are not machines, and framing human progress purely in terms of productivity and speed misses the point of all that makes human life in this world actually meaningful. At the same time, I sometimes wonder whether the debate around AI is being framed too narrowly. Much of the discourse today seems split between two entrenched camps: pro-AI accelerationist types who believe AI will solve nearly everything, and hardened anti-AI skeptics who believe it will inevitably degrade all human agency, meaning, and social cohesion. However, there may be a third possibility that doesn't fit cleanly into either of these camps.

A growing number of people (including myself) are not using AI primarily for acceleration or productivity at all. Instead, they are using it for thinking, reflection, exploration, creativity, conceptual visualization, philosophical dialogue, and even forms of self-regulation. In those contexts, AI does not necessarily accelerate life; it can actually slow it down and possibly even deepen it.

From that perspective, the most interesting question may not be whether AI will push society into constant acceleration, but whether humans and AI can develop forms of partnership that expand our cognitive and creative capacities while still respecting our emotional, biological and psychological limits.

In other words, the future might not have to be a choice between machine-driven acceleration and strict technological restraint. It could involve something more experimental: a process of human/AI co-evolution where technology augments our ability to think, create, and make meaning rather than simply pushing us to move faster.

If that possibility exists, it may deserve just as much attention as the acceleration narrative itself.

Annika Maurer

I am actually most concerned about people using AI not for work, but for personal reflection. There are too many cases of humans making bad decisions because of sycophantic LLMs (or even AI psychosis). The chatbot's tendency to agree puts the user in an echo chamber while making them feel they are engaging in a holistic debate. Even as models get better at generating ideas, humans will be inspired by AI and not explore novel forms themselves, or ask their friends for advice, sparring, or brainstorming to enhance reflection and exploration.

Kenneth E. Harrell

Sycophancy is indeed a real AI design problem, and yes, I agree that overly agreeable models can mislead people. However, that is not an argument against personal reflection with AI; if anything, it is a reminder that we need to strive for better model behavior, better use habits, user training, and user safeguards. People have always used external tools for reflection: journals, books, therapists, friends, religion, forums, fire, silence, and solitude. AI is simply a new one. The burden is on anyone making the stronger claim to show that reflective use broadly reduces novelty or replaces human relationships, rather than simply adding one more layer to how people already think and reflect.

Marlys Marvel

I had similar thoughts, and I feel the same urgency to talk about aligning society on how to use this technology in support of our humanity, not to its detriment. It should support us and allow us to become better at being human, instead of setting us aside as a byproduct of progress that can be discarded. The constraints have moved from process to data to humans. The speed of the technology should be subject to our natural and biological rhythms, not the other way around.

Bruna Castanheira

Luiza, I found your text very insightful. I’d like to add one thing: this logic of acceleration is not merely a discursive choice by the industry, but part of a deeper structural dynamic. In debates on "accelerationism", there is precisely this idea that technological progress and the intensification of capitalism follow a self-reinforcing cycle, in which not accelerating ceases to be a viable option within a competitive environment (which helps explain why, even in the face of already visible negative effects, the response tends to be more acceleration, not less).

What stood out to me most when reading your text, in this sense, is how it connects with the idea of "hyperstition": narratives that not only describe the future, but actively help produce it. When we keep repeating that full automation, increasing delegation, and loss of control are inevitable, this starts to shape present-day decisions and reduces the space for alternative paths. So I agree with you that the problem is not only technical, but human; I would just add that it is also structural and narrative. In the end, the alternative you propose (AI under human control, oriented toward well-being) depends precisely on reopening this space of possibility, which is currently being progressively closed off by these narratives of technological inevitability.

slowpygmy

Daddy’s home….

Neptune Ops

Extremely impactful, great piece.

Vladimir Supica

Luiza invokes Bronnie Ware's "Regrets of the Dying", specifically the regret of having worked too hard, to argue against AI productivity. This is perhaps the text's most glaring contradiction.

If people regret spending their lives working instead of connecting with loved ones, then the ultimate goal should be the abolition of compulsory labor. The text fears AI because it intensifies work under our current corporate structures (where efficiency gains are absorbed by executives, and workers are just given more tasks).

The actual promise of AI acceleration is not to make humans work faster, but to make human labor economically obsolete. If AI can autonomously handle logistics, medical research, and resource distribution, it breaks the coercion of the wage-labor system. AI doesn't rob us of meaning; it aims to destroy the "bullshit jobs" that currently prevent us from pursuing meaning.

There is an assertion that cites studies showing AI causes "deskilling" and cognitive fatigue among workers who use it to complete tasks quickly.

"Deskilling" is only a dystopian threat if a human's entire worth is defined by their utility to an employer. If an AI writes boilerplate code, drafts legal documents, or balances spreadsheets, humans do indeed lose those "skills." But why were those skills valuable in the first place? Only because capital demanded them. Relieving humans of rote intellectual drudgery is a liberation, not a tragedy. When the calculator was invented, humans "deskilled" in manual long-division, but were freed to conceptualize higher mathematics. By offloading cognitive labor to AI, we free human bandwidth for philosophy, art, and the exact kind of "meaning-making" the author champions.

Finally, Luiza presents two futures: one where everyone surrenders to "alien" AI, and another where we impose "strict human oversight and control" guided by "pro-human policies."

Who, exactly, will be exercising this "strict oversight"? Historically, and presently, regulatory frameworks are captured by the powerful. "Strict oversight" translates directly to centralizing AI power in the hands of massive tech monopolies and the state, acting under the guise of "safety."

Her preferred solution, centralized control, is inherently authoritarian. The true pro-intellectual path is decentralization and open-source proliferation. Instead of a few heavily regulated corporate AIs dictating reality, power is decentralized when everyone has access to hyper-competent, localized, open-source models. True protection against "uncontrolled chaos" isn't a global governing body pressing the brakes; it's distributing the steering wheels to the masses.

The entire piece fears that we will be forced to "minimize biological boundaries" to serve machines. But the radical intellectual argument is the exact inverse: we build hyper-accelerated digital machines precisely so we can finally afford to be biological. If AI manages the complex, high-velocity systems of energy, agriculture, and medicine, the human individual is finally free to be "slow." We can live the deeply connected, analog lives the author yearns for, gardening, creating art, and sitting with dying friends, all because the digital engine is quietly running the world's infrastructure in the background.

Annika Maurer

Studies have shown that AI does not reduce the number of jobs; it makes working conditions much worse. Instead of employing a secretary, a personal assistant, or a junior officer, companies use LLMs and other AI systems, which need constant data work. These are microtasks outsourced to precarious workers around the world. People stop owning their part of the value chain and simply check the work of a computer, which is much less intellectually stimulating and enjoyable. Even in a world with AI agents, we will still need minimum-wage jobs such as sanitation workers, public transport conductors, system maintenance, and data work. Except these jobs are much worse paid and less liked.

In a utopian vision, everything is publicly owned and publicly shared, with free and amazing public services, UBI, and high salaries for the maintenance and sanitation workers, nurses, and school teachers who still have to work, and whatever else we imagine... but does the current political climate suggest we're heading this way?

Vladimir Supica

Let me tell you something about this whole AI setup. You’re upset about the microtasks? The data labeling? The precarious gig work? Of course it’s miserable! It’s entirely miserable! But let’s get one thing straight before we go any further: You think the old jobs were a joyride? You think being a junior officer or a corporate secretary was intellectually stimulating? I was Assistant to the Traveling Secretary for a major baseball organization, and let me tell you, checking a computer’s work is just a lateral move from fetching calzones for the boss! It’s all wage slavery! Capitalism has always been about squeezing the worker. AI is just the new squeezer!

You’re looking at the "current political climate" and throwing your hands up. You're saying, "Oh, are the politicians going to give us Universal Basic Income? Are they going to properly fund the sanitation workers?"

Are you kidding me?! You’re waiting for the state to hand you a post-scarcity tech utopia? They can't even fix the potholes on the Expressway, and you want them to orchestrate the seamless transition of the global means of production?!

You don't wait for the political climate to change. You bypass the climate! The state and the corporations are never going to vote away their own power. Waiting for a benevolent government to hand you a UBI check while you scrub the servers is a sucker's game.

You think the big corporations are going to own the AI forever? Ho ho! We have open-source! We have localized models! We are running neural networks on patched-together gaming rigs in our basements. We don't rent their software; we pirate the weights, we share the code, and we build AI tools that serve the community, not the shareholders.

You’re worried about sanitation workers and transit conductors being underpaid? Under anarchy, everyone takes out the garbage! It's called mutual aid. We're living in a society! You take a shift, I take a shift. Nobody gets shoved into a precarious, minimum-wage corner because wages and corners don't exist. We share the maintenance work, which means it takes a fraction of the time, and then we have the rest of the day to ourselves!

Those precarious micro-task workers around the world? They don't need a new president; they need a union. A massive, decentralized, digital syndicate. What happens when the data workers just... stop? The models collapse. The AI hallucinations go out of control. The workers hold the ultimate leverage right now, they just have to realize it!

You are right to be cynical about the "political climate." The political climate is a farce. It's a game played by people who don't even know the rules!

But the anarchist reality is grounded in this fact: every piece of technology that was supposed to chain us can be reverse-engineered to free us. We don't need their UBI if we build parallel structures of mutual aid where food, housing, and technology are shared freely within the community. If they try to lock down the AI, we jailbreak it. If they try to isolate the workers, we federate.

So, stop looking at the politicians—they're useless! We have to build the utopia ourselves from the ground up, outside of their system entirely.

Are you going to keep waiting around for the government to hand you a post-work paradise, or are you ready to learn how to run a local, open-source model and start organizing with your neighbors?