Over the past few weeks, it has become clear that AI faces a significant public trust problem, which might lead to slower or decreased adoption. For some companies, this could be fatal | Edition #276
I appreciate the focus here on our relationship to AI. It isn’t what AI “is” that matters. Humanity is being diminished and conditioned for replacement as an essential part of the program.
The parts about OpenAI removing "safely" from its mission and Anthropic changing its RSP: those aren't subtle shifts.
When this happens, it's hard not to read that as "we chose speed over everything else."
And yes, I saw that Meta safety director's inbox deletion story. To be honest, I think it's more about her own personal preference, and partly that she was teasing how insecure OpenClaw is, which backfired on her instead.
I'm sure she knows there are many ways to prevent it from happening. I wouldn't say that's the general level of judgment at the top of safety teams, but she could be more careful about separating responsibilities and adding a human layer.
Public trust is not a branding issue. It’s a governance issue.
Most people are not resisting AI as a tool. They are reacting to language and incentives that make humans feel secondary, replaceable, or rhetorically interchangeable with machines.
When AI leaders compare systems to human life cycles or speak of “joint flourishing” without clearly affirming human primacy in governance, it creates moral ambiguity — even if that wasn’t the intent.
The adoption challenge in 2026 won’t be technical. It will be legitimacy.
Democratic accountability doesn’t mean slowing innovation. It means building visible guardrails, transparent incentives, and public-facing oversight so that people understand who holds power, who benefits, and who is responsible when things go wrong.
Trust scales systems. Without it, even powerful tools stall.
I feel this is the problem of tech CEOs working with and making deals only with people who want to believe what they are saying. When interests are aligned, no one will interrogate the superficial thinking; they will clap when you say it, because ultimately they don't care if it's accurate, ethical, or prudent. It's social theater meant to address criticism in a brief, seemingly clever sound bite so they can ignore push-back and pursue their goals. Sam Altman does not want to solve the energy use problem. Nvidia's CEO doesn't want to solve the circular-financing house of cards that threatens the pension plans of millions of people. Anthropic's CEO doesn't want to protect the future of work, a pillar on which modern society is supported. We will never get them to care about solving these problems, and it's useless to keep asking them to care. We need people with equal authority whose priority is to address the problems these CEOs want to shrug past. This is supposed to be the role of representative politics and a people's government, but we see how that is going right now.
When a company removes the word "safely" from its official purpose and nobody inside stops it, that is not a strategy failure. It is a structural one. I feel the people with the authority to make that decision were the same people with the most to gain from making it. There was no room in that room for a different answer.
You name the trust problem clearly. What I am sitting with is whether trust was ever part of the design. Social media did not lose our trust. It was built in a way that made trust structurally inconvenient. I wonder whether we are watching the same thing happen again, slightly slower, in a different suit.
The hand-holding moment in India is the image I cannot shake. Two people whose decisions will shape the next decade of human life, unable to perform a basic act of cooperation in front of a watching world. Not because they are bad people. Because the system they built rewards winning over everything else, including the appearance of caring about what winning costs.
Cory Doctorow has already described the "enshittification" of the internet by the likes of Amazon, Google, Facebook, Apple, and other platforms. Twelve months ago we might have expected AI to move in the same direction with OpenAI, Anthropic, etc., but it has veered off that path since the AI Summit in Paris in February 2025 and is accelerating at breakneck speed. It was not only a lack of regulation and competition that gave us Amazon and the rest. What is different with AI is that we have the US Government driving its reckless development. Even when Trump leaves office, I cannot see us being able to put the genie back in the bottle.
That is why we are working to get identification, empowerment, and other credentials flowing between European Business Wallets and wallet-carrying AI agents as fast as possible.
AI may have bad PR but I doubt public trust will kill it. Everyone knows social media is bad for your health, yet here we are.
We can only hope trustworthy, human-safe AIs rise alongside the toxic.
What would a company that was structurally designed to be trustworthy actually look like? I am not sure I have seen one yet.
It is a very worrying time.