The New Humanism
AI companies are pushing an "AI-first" worldview that serves them, not society. We do not have to accept it | Edition #273
From AI-blamed layoffs to pretentious AI “constitutions,” from conspiring AI agents to AI safety departures, 2026 in AI has been frenetic. Maybe too frenetic.
The impression many people get from watching the latest AI developments is that doom is impending and that we have little to no control over what is about to happen.
That type of message has been spread by AI CEOs like Sam Altman and Dario Amodei, as well as authors like Yuval Noah Harari, who recently presented AI systems as if they were alien visitors to whom we owe immediate and absolute surrender.
Strangely, this narrative seems to carry the implicit conclusion that we must lower our heads and obey whatever AI companies say.
If they say it is time to prioritize AI and add it to every single task, we should do it, even if it makes no sense for our professional development or career goals, and even if we might not become more productive in the end.
If they say it is time to embrace AI-generated art for the sake of spreading AI-powered aesthetics, and that monotonous AI feeds are now the cool online entertainment, we should do it, even if it might just add more purposeless screen time to routines that are already unrewarding and draining.
If they say vibecoding and creating personalized AI agents with full access to our accounts are necessary to fully master AI, we should do it, even if the result is a useless, privacy-invasive tool that might create security vulnerabilities.
If they say governments worldwide should use AI at every decision-making level, we should support it, even if it might mean more biased decisions, less transparency, and the corruption of society's institutional fabric.
If they say schools and universities should embrace AI, we should accept it, even if it might lead to the further decline of reading and other test scores, poorer learning, and early-onset AI dependence.
And I could go on…
This worldview has been conveniently pushed by AI companies, which are in desperate need of profits and usage growth to justify AI bubble-sized investments and keep them coming.
If you pay close attention to this narrative, it always projects possible futures of “abundance” that involve more AI usage, automation, and replacement, regardless of the consequences.
To be part of “the future,” you always have to be fully on board with the AI mindset. Otherwise, you are out.
Suspiciously, their views about abundance and a better future are never about greater well-being, more personal growth, better mental health, more thriving communities, stronger institutions, more robust legal frameworks, more effective health systems, more education, or greater equality.
These are things that would objectively make people and society better.
However, the AI industry is somehow managing to convince millions of people (and a very vocal online mass of supporters) that AI is the goal itself. Developing and using it. Regardless of the human and social outcomes.
But it does not need to be that way.
Even if some of my articles and posts might sound pessimistic, I am personally a very optimistic person.
I believe in the power of people, human connection, human learning, and human excellence (which is, again, why I write this newsletter and run my interactive AI Governance Training).
A possible and extremely positive byproduct of the generative AI wave, of the apparent acceleration it has triggered, and of the ideological pressure from the AI industry described above is the rise of a new humanism.
To some extent, the new humanism is already underway on a small scale and is slowly spreading across specific communities and professional groups worldwide, as I gladly witness in every new cohort of my training when participants talk about their journey in the introductory meeting.
If you are an avid reader of this newsletter, chances are that you are also part of this movement, even though you do not see it as a “movement” yet.
So what is the new humanism, and in what sense is it a reaction to the generative AI wave?
Let me start with a quote from a recent Harvard Business Review article:
“[t]he changes brought about by enthusiastic AI adoption can be unsustainable, causing problems down the line. Once the excitement of experimenting fades, workers can find that their workload has quietly grown and feel stretched from juggling everything that’s suddenly on their plate. That workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.”
The pressure to use AI to produce more outputs in a shorter period of time often leads to a dead end, where the quality of the work is worse, and people feel disconnected from the process that produced it.
I see more and more people revisiting their career choices and seeking alternatives that allow them to develop professionally, align their work with their values and priorities, and remain in control of the work they produce.
The new humanism is not anti-AI. It just recognizes that AI should not be a goal or a priority.
The focus should always be on the humans behind the work and how they can grow, develop, learn, and thrive, regardless of the tools used.
The new humanism recognizes that AI-generated and AI-manipulated creative works differ in nature from purely human works.
Not only is it faster and cheaper to create art (or any type of content) with AI, but AI models are also trained on human works, and their output often competes against those works, as recent copyright lawsuits have made clear.
Both types of art have their value and can co-exist, but without the right incentives and protections in place, AI-generated work swallows and destroys purely human work.
Through the lens of the new humanism, purely human works deserve effective financial, legal, and cultural protections, because we understand that they matter and are necessary for both artists and society.
Any AI-powered innovation should be seen as one among various possibilities for achieving a specific human or social purpose.
For some tasks, automation might be the right answer; however, this is certainly not true for all tasks, especially if we think that human development and well-being, fundamental rights, and the robustness of social institutions matter.
Also, the legal system matters. AI companies must respect the law and be held accountable when they cause harm. AI is not a “species” or an “alien”; it is a product made by humans. There are human laws regulating product safety and various other aspects of AI development and deployment. The new humanism reminds companies that they must abide by them.
The new humanism is still in its early years, but it is growing as more people realize that treating AI as a priority and a goal leads to a dead end that does not support human flourishing. It only supports AI companies and their investors.
Humans must be the focus of any and all rules, policies, and rights. Human flourishing must be the focus of culture, education, and all social institutions.
I am all in, as should be clear from the past four years of this newsletter, and I will keep fighting the tides of AI hype for as long as necessary.
Will you join me?
This free edition is supported by Codacy:
AI helps your devs move fast, but also scales the risk. Codacy’s AI Risk Hub catches insecure AI coding patterns before they become issues. Block AI-specific risks like unapproved model calls, invisible unicode injections, and vulnerable dependencies before devs can merge them. Don’t leave a single line of AI code unchecked. Enforce one AI coding policy for all your projects.
As the old internet dies, polluted by low-quality AI-generated content, you can always find raw, pioneering, human-made thought leadership here. Thank you for helping me make this a leading publication in the field.
🎓 Now is a great time to learn and upskill in AI. If you are ready to take the next step, here is how I can help you:
Join the 28th cohort of my AI Governance Training in March
Discover your next read in AI and beyond in my AI Book Club
Join my AI Ethics Paper Club’s group discussion next week
Sign up for free educational resources at our Learning Center
Subscribe to our Job Alerts for open roles in AI governance

Luiza, that line "AI should not be a goal or a priority" is on point.
I keep watching people adopt AI not because it solves something, but because they're supposed to.
The industry convinced us that all friction is the enemy. Even the friction that builds trust. Even the struggle that makes us who we are.
Thank you for naming this.
Thank you, Luiza, for this extremely grounded and, in my opinion, optimistic post. Humanism is a wonderful word to describe the opportunities presenting themselves to us in the AI world in which we currently find ourselves living. I also think that we should move away from calling soft skills 'soft skills' and instead call them 'human skills'. This skill set (i.e. judgement, empathy, critical thinking) comprises skills that can never be replicated by AI and that will ultimately help us to recreate a place in the world with which we can all individually resonate.