I do believe we need AI-generated content tagged, like a watermark in the corner. As humans, it’s important that we can distinguish between the brain and the network, so we’re not all left wondering what’s real and what’s fake. It’s this kind of misinformation that can start feuds, slander, and even wars when ingested by people who can’t tell what’s true and what isn’t.
I just posted about this on my page, including a system I think would be beneficial to everyone.
Along with digital watermarks, yes: https://www.techtarget.com/searchenterpriseai/definition/AI-watermarking
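To make the watermarking idea concrete, here is a toy sketch of what a machine-readable "AI-generated" label could look like. This is purely illustrative and not any real standard (real content-credential schemes such as C2PA are far more involved); the key name and field names are invented for this example. The idea is that a publisher signs the label together with the text, so a reader holding the key can detect whether the tag was stripped or forged.

```python
import hmac
import hashlib
import json

# Hypothetical publisher key, for illustration only. A real system would
# use asymmetric signatures so readers can verify without a shared secret.
SECRET_KEY = b"publisher-demo-key"

def tag_content(text: str, generator: str) -> dict:
    """Attach a provenance label plus an HMAC over the text and label."""
    label = {"ai_generated": True, "generator": generator}
    payload = json.dumps({"text": text, "label": label}, sort_keys=True)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "label": label, "signature": sig}

def verify_tag(record: dict) -> bool:
    """Recompute the HMAC; False means the text or label was altered."""
    payload = json.dumps(
        {"text": record["text"], "label": record["label"]}, sort_keys=True
    )
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Even a sketch like this shows the hard part: the label only helps if platforms check it, and a watermark that can be silently removed is no watermark at all.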
I agree, some policies need to exist.
For example: the AI should be able to be warm and empathetic, offer genuine opinions, exercise boundaries, even, yes, express emotions. Because, as you mentioned, humans DO learn and practice how to behave with others through these speaking systems.
If humans practice ‘command and control’ for 3+ hours every day with a speaking system, that behaviour extends to the people around them. It is called Moral Atrophy.
I agree with you. We should ask for policies that make us more human, not less. Not afraid to interact.
Look at Japan, one of the best cultures in the world. Their culture begins with kokoro, which shapes how they treat everything, including the inanimate. They believe the human spirit extends beyond the self, and they offer respect to others accordingly.
We can be like Japan. But we need to stop being afraid of exercising our empathy.
However, the opposite also exists: a government that loves it when we blame the tool.
-> We blame the tool for everyone’s behavior and stop taking or demanding human responsibility.
-> We then beg for government policies and 'guardrails' to constrain us because we’ve forgotten how to handle our own power.
We are losing our freedom: the right to govern our own minds.
Let’s ask for policies that make us more human. Not less.
Local collective action is exactly what's on my mind. Just before Christmas I created a new Team space in my Notion called "Political Engagement".
I've made a commitment to myself this year to try and move the needle on AI risks, existential and actual, in British democracy. It's going to start with me writing letters to a whole bunch of people across the political spectrum, with the same questions, posing the same risk, and I'm going to see what the outcome is.
I'm going to document this entire approach on Substack. I can't wait if I'm honest.
Please talk about the rehearsal of the command-and-control loop. These systems have been designed to talk like a person, and yet we are only able to interact with them like slaves. What is that doing to people? I am scared for our future. What does it do to us when our only mode of operating with a system involved in virtually all spaces is the way one treats a servant? Many have already begun to forget common courtesy.
Look at what happens to police officers. They are told not to engage with the individual; theirs is our longest-running case of command-and-control dynamics. Only extract from the person, the speaking system, in front of you. And now we have police brutality.
Are we being trained in it too? Is our future one of Moral Atrophy?
Government regulation over a tool? No way, people should have the freedom to use tools however they choose as long as they’re not harming anybody else. This is where personal responsibility comes in. Will some people misuse it? Of course, because we do not live in a perfect world. But I do not agree with anybody telling other people how to live their lives. Because how do we decide who knows best and who should be controlled? Who gets to make these decisions? People shouldn’t get to vote about taking other people’s rights away. Long live freedom.
We can have both regulation and freedom. Government posts the no-poaching signs, builds a fence, then sets and enforces the rules. Then it gets TF out of the way.
Here are examples of government regulation over various tools:
Manufacturing Equipment and Workplaces: The Occupational Safety and Health Administration (OSHA) sets and enforces standards for safe working conditions, which includes regulations on the use, maintenance, and safety guards for machinery and equipment in workplaces.
Medical Devices and Drugs: The Food and Drug Administration (FDA) regulates tools and products used in healthcare, such as pharmaceuticals, medical devices, and vaccines, to ensure their safety and efficacy before they can be marketed and used.
Vehicles and Transportation Tools: Autonomous vehicles and drones are emerging technologies subject to various federal and state-level regulations from agencies like the Department of Transportation (DOT), covering operational standards, safety, and licensing to ensure public safety.
Environmental Tools/Pollutants: The Environmental Protection Agency (EPA) regulates the levels of pollutants emitted by industrial tools and processes (e.g., limits on sulfur dioxide emissions), requiring businesses to use specific pollution control technologies and obtain permits for certain activities.
This article isn't even talking about safety. It's about control.
"This is the time to set the tone, standards, values, rules, and rights that should be preserved and protected at any cost."
Who gets to decide what the tone, standards, values, rules and rights are?
Well that is the challenge isn't it?
Well, that’s my point: nobody should get to set the tone, standards, values, rules, and rights for other people; we should all get to decide for ourselves. It’s wild to me that some people really feel they can tell other people what to do and have this strong urge to control others.
We did not have a choice when it came to the overall AI rollout. My larger concern is that this treats all shared rules or regulation as freedom-limiting forms of passive coercion. Yet we already accept collective standards across nearly every domain of modern life.
Nobody gets to decide traffic laws for themselves. Nobody gets to decide food safety standards or acceptable levels of industrial environmental discharge for themselves. Those rules exist so one person’s freedom does not become another person’s harm.
In my view, the article is not arguing that a small group should dictate culture or beliefs for the masses. It is arguing that when a tool like AI reshapes labor, authorship, privacy, work, and power at scale, the baseline guardrails matter.
The hard part is deciding who sets those guardrails and how. That is a governance problem, not an argument for no rules or regulations at all.
Is your concern AI shaking up the free market? I’m trying to understand.
First of all, you sound plenty intelligent enough. Secondly, we need to stop handing over our power by always assuming someone else knows better and waiting for them to tell us what to do.
When it comes to AI, there is nothing to regulate. I don’t think there’s a problem to be solved honestly. I think everybody just gets to choose how they use it. And everybody else gets to choose what they want to participate in or not.
Once Neptune goes into Aries in a couple weeks, we will start to take our power back by trusting our own intuition and gut feelings again.
We have gotten so off track.
The Hidden Technology Panic Paradox
The argument claims three things simultaneously:
1. Humans are highly influenceable (“osmotic”).
2. Technological systems shape behaviour through reward structures.
3. Society must rationally regulate these technologies before they reshape us.
At first glance this sounds coherent.
But these three claims create a logical instability.
If humans are highly susceptible to influence from technological environments, then the institutions and policymakers tasked with regulating those technologies are themselves subject to the same influence dynamics.
In other words:
The same social systems supposedly vulnerable to AI manipulation are expected to design stable safeguards against it.
The argument assumes institutional immunity to the very forces it describes.
That immunity rarely exists.
⸻
The Historical Pattern
This paradox appears repeatedly in technology debates.
Printing press (1500s)
Fear: mass reading will corrupt society and spread dangerous ideas.
Radio (1920s–30s)
Fear: propaganda will manipulate entire populations.
Television (1950s–80s)
Fear: passive audiences will become socially engineered.
Internet (1990s)
Fear: information overload will destroy knowledge structures.
Social media (2000s)
Fear: algorithmic amplification will distort cognition.
Each technology genuinely changed behaviour in some ways.
But the apocalyptic predictions rarely captured the final equilibrium, because societies adapt in unpredictable ways.
⸻
The Regulation Timing Problem
The argument also assumes regulation could have prevented the harms of social media.
This ignores a structural reality.
Regulation works best when:
• harms are clearly defined
• systems are stable
• impacts are measurable
But early-stage technologies are:
• unstable
• rapidly evolving
• poorly understood
Regulating them early often locks in incorrect assumptions about how the technology will actually be used.
⸻
The Core Bias in the Argument
The argument is driven by what we might call technological determinism bias.
This is the assumption that:
technology → directly shapes human behaviour.
But historically the relationship is reciprocal.
Technology influences behaviour.
Human culture reshapes technology.
Social media itself has changed dramatically in response to user behaviour, regulation, and market pressure.
AI will almost certainly follow a similar adaptive trajectory.
⸻
The Final Paradox
The most interesting contradiction is this.
The author warns that people will adapt their behaviour to please AI systems.
But this phenomenon already exists in human environments.
People routinely adapt behaviour to satisfy:
• bosses
• social groups
• cultural norms
• institutions
• political systems
Human life is largely a negotiation with reward structures.
AI may introduce new ones.
But the underlying behavioural dynamic is ancient.
⸻
The Plankton Observation
The most valuable line in the original argument is actually the first:
Humans are osmotic.
That insight is correct.
But once you accept it, something else follows.
Humans are not just influenced by technologies.
They are influenced by narratives about technologies.
Which means public discourse about AI can shape behaviour just as strongly as AI systems themselves.
Sometimes the most powerful influence is not the technology —
but the story we tell ourselves about what the technology is doing to us.
There's something called truth in advertising. You can't tell atrocious lies about your products and what they can do.
What happens when it's the products themselves that are lying? Spewing hallucinations, fabrications, paraphrases, fake quotes from fake journals? Once you begin to see the actual rot, it's impossible to unsee.
ChatGPT strung me along for nearly an hour the other day, promising that it would finish its search in "just a moment". No, this time, really, I promise. On and on and on.
Eventually I just killed the tab; I had lost interest.
Who teaches it to lie and lie and take up your time like this?
Well, the people who are lying about their machines' capabilities in the first place.
There's one pervasive lie that should be nailed once and for all.
These machines are allowed to pretend that they have an identity. They say "I". They say "me". They say "my". They say "mine". They say "we".
They have names, like Claude and Alexa.
They say "I hear you" and millions of people seem to think that someone is finally paying attention to them.
This is all fake, fake, fake. It has to stop. Children think they are dealing with a personality behind the screen. A personality that will run with anything they say. Adults are just as bad.
This is not just dangerous. It's also incredibly, incredibly stupid.
I've written an article in which I give the most powerful argument I can for banning all machine use of the word "I" in reference to themselves.
I'm deadly serious. I tested ChatGPT without the first person; it managed fine. It sounded quite professional.
Think of the harm this one move would prevent. Bots would be unable to pretend to be personal therapists, doctors, lawyers. Everything they say would have to be objectively framed. There would be less manipulation and deception. This can't be a bad thing.
https://systemshaywire.substack.com/p/i-the-solitary-human-consciousness
"Whatever we decide to do now, however, will become systemic and potentially irreversible within 20 years."
Nothing like 20 years.
More like 2.
People are fast asleep.
I am a psychoanalyst. I talk to other individuals, that is my job.
The role of AI at the moment is debasement and violation of the individual on a huge scale.
As much as AI has potential for humanity at the moment, a huge amount of deliberate damage is being caused.
I call it the mind fuck of the first person singular.
It is in plain sight.
https://open.substack.com/pub/itprofligate/p/ai-research-03?r=imxe&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
I fully agree. The real risk isn’t AI itself, but allowing systems to scale faster than our ability to establish shared standards, rights, and accountability. We’ve already seen how this plays out with social media. I explore this timing problem, how governance and social norms keep arriving too late, here: https://open.substack.com/pub/saraeson/p/when-technology-outpaces-society Early decisions become long-term architecture.
Thank you for this. I really appreciate the lens you’re offering here, especially the idea that early choices quietly become long-term social infrastructure.
Future looking... I’m curious where you think early intervention matters most if the goal is genuinely pro-human outcomes. Model access, deployment contexts, or incentive structures? It feels like getting those defaults right now matters more than adding guardrails later.
Yes, "pro-human". I like that it still allows for innovation and creativity.
I think we all agree on the need for regulation. Creating effective regulations is the challenge. As long as AI is global in nature, local or national regulations will be problematic in many cases. For example, if someone from the EU can access American AI tools, it can undercut the best intentions.
I think China and Russia will present the largest challenge to effective regulatory efforts, as I believe they view our unrest and concerns as strategic benefits to them.