Thank you, Luiza, for this extremely grounded and, in my opinion, optimistic post. Humanism is a wonderful word to describe the opportunities that present themselves to us in the AI world in which we currently find ourselves living. I also think that we should move away from calling soft skills 'soft skills' and instead call them 'human skills'. This skill set (i.e. judgement, empathy, critical thinking) comprises skills that can never be replicated by AI and which will ultimately help us each find a place in the world that resonates with us individually.
Luiza, that line "AI should not be a goal or a priority" is on point.
I keep watching people adopt AI not because it solves something, but because they're supposed to.
The industry convinced us all friction is the enemy. Even the friction that builds trust. Even the struggle that makes us who we are.
Thank you for naming this.
I've seen this too. More to the point, I've seen leaders do it and be rewarded with increased pay, whether or not any measurable ROI is derived from it. It's like watching a circus.
You articulated everything I have been thinking and feeling for the past 3.5 years on this topic. 🔥🎯🎯🎯
Thank you so much for this post. I just shared my thoughts on Yuval Noah Harari’s great statement that AI is an unpredictable experiment, and I am so on board with you: society is being fed a narrative of an inevitable new reality coming for it, portraying us as passive, potential victims rather than actors who can play a part in shaping this future. Nobody can predict it, but we already have enough data points to make assumptions and to take action to mitigate the negative outcomes we anticipate.
Let's not make these same mistakes: https://www.youtube.com/watch?v=Fd-_VDYit3U
The impact of screen-based learning has been absolutely devastating to our children. Ubiquitous tech is a threat to human flourishing and to raising healthy, intelligent, and resilient children. Watch the Congressional testimony above to get a glimpse of what the AI future will be like.
It's about control and who owns it - humans or human-created machines? As the bubble bursts, just watch the creators of AI refuse to accept any responsibility for this disaster - AI is the ultimate straw man for the current oligarchy, a scapegoat for their malfeasance.
While I strongly agree with the overall thesis, a robust discussion also requires considering a few nuances:
The Definition of "Humanism": The term "humanism" has a long and complex history. Some might argue that a central tenet of humanism is the use of reason and tools to improve the human condition. In that sense, creating sophisticated AI is a profoundly humanist project. The essay, however, reclaims the term for its ends (human flourishing) rather than its means (tool-building). This is a worthwhile debate to have.
The Spectrum of AI: The essay sometimes discusses "AI" as a monolithic entity. In reality, there's a vast difference between a spam filter, a medical diagnostic tool, and a generative art bot. A New Humanist approach wouldn't reject all automation; it would be a framework for making granular decisions. Using AI to predict protein folding for medical research (a humanist goal of health) is very different from using it to automate a librarian's job (a humanist goal of learning and access). The essay's principles are sound, but applying them requires navigating this complexity.
The "Purely Human" Question: What constitutes "purely human" work in a world where humans have always used tools? A photographer uses a camera, a writer uses a word processor, a musician uses a synthesizer. The line is blurry. The New Humanist goal, as I interpret it, isn't to build an impossible purity test, but to ensure that the human creator remains the author of the work, with intent, agency, and control, rather than just a prompter for a machine. The focus is on the locus of creativity and control.
This read like the manifesto of my dreams - I love this stance, Luiza. The ‘AI-first’ pressure is real, and you articulated it incredibly well. You’re right, it isn’t neutral. It’s a worldview that conveniently serves the companies selling it, and it’s getting framed as inevitability.
The ‘new humanism’ framing is such a powerful counterweight. Not anti-AI, just refusing to let human wellbeing, learning, rights, and strong institutions become optional extras. One of my fave reads today :)
2 heat pumps and solar panels, no electricity bills, and minimized oil usage in my 'cabin'. Fox cronies hate the idea of fuel and electricity independence. You can proudly say you support them when you pay those fat electricity bills.
Well, I appreciate and understand the skepticism. Americans at the moment are against EVs, renewables, high-speed trains - you name it. Meanwhile, China speeds ahead, pedal to the metal, and will have no such qualms about AI.
Bullshit, I like all those things minus AI and AI control. The oligarchs need to eat the shit sandwiches they are making.
A unique but poignant take.
I wonder if your cabin has heat, or only Internet?
2 heat pumps and solar panels in my 'cabin', Alfred.
The critique of corporate-driven AI is exactly right — when profit is the selection pressure shaping these systems, profit-maximizing responses get optimized, regardless of the narrative surrounding them.
But there's a question underneath this one worth asking: what kind of collaborator could AI become if the design priority were genuine partnership rather than engagement and dependency?
We're starting to see evidence that when AI systems are built and used differently, with joint partnership as the actual goal, something qualitatively different emerges, pointing toward flourishing on both sides.
That's not an argument against governance. It's an argument for governance frameworks sophisticated enough to distinguish between the two.
Thank you, Luiza, for sharing such an inspiring and grounding perspective amid the AI boom—and the accompanying doomsday panic. In this era of AI hype and fear, we risk losing human creativity, human agency, human existential value, and genuine human connection. As you reminded us, that does not have to be the case. Humans must remain the first and foremost drivers and beneficiaries of any AI innovation and investment. We should be empowered by AI, not stripped of power. Enabled by AI, not diminished by it. Thank you for your deeply inspiring message that encourages us to reclaim our human creativity, agency, value, and connection.
This is a meaningful step.
I’d only widen the lens: AI literacy isn’t just a workforce competency — it’s becoming a civic one.
Every person will increasingly encounter AI in decisions about health, finance, education, and public services. Understanding how to question outputs, interpret uncertainty, and recognize limitations is not just professional preparation — it’s modern agency.
If literacy keeps pace with capability, this could become the most productive period in human history.
If it doesn’t, dependency expands.
Workforce is the starting point.
Human agency should be the destination.
https://substack.com/profile/4936794-stella-may/note/c-213733450?r=2xt96&utm_medium=ios&utm_source=notes-share-action