I do believe that we need to have AI content tagged, like a watermark in the corner. As a human, I believe it’s important to distinguish between the brain and the network so we’re not all left wondering what’s real and what’s fake. It’s this misinformation that can start feuds, slander, and even wars if ingested by those who can’t tell what’s right from wrong.
I just posted about this on my page, including a system I think would be beneficial to everyone.
Along with digital watermarks, yes: https://www.techtarget.com/searchenterpriseai/definition/AI-watermarking
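To make the "tag it like a watermark" idea concrete: real AI watermarking (as in the linked article) embeds statistical signals in the model's output itself, but the simpler labeling scheme being proposed here can be sketched as a verifiable provenance tag appended to generated text. This is an illustrative toy, not any provider's actual system; the key name and tag format are invented for the example.

```python
import hmac
import hashlib

# Hypothetical key held by the AI provider; real schemes would use
# proper key management, not a hard-coded secret.
SECRET = b"provider-signing-key"


def tag_output(text: str) -> str:
    """Append an AI-provenance footer whose tag is an HMAC of the text."""
    mac = hmac.new(SECRET, text.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{text}\n[AI-GENERATED tag={mac}]"


def verify_tag(tagged: str) -> bool:
    """Recompute the tag; a mismatch means the text or label was altered."""
    body, sep, footer = tagged.rpartition("\n[AI-GENERATED tag=")
    if not sep or not footer.endswith("]"):
        return False  # no footer present, or footer is malformed
    mac = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(mac, footer[:-1])
```

The point of the sketch is the asymmetry the thread is debating: anyone can strip or forge a visible label, but a cryptographic tag at least makes tampering detectable by anyone who can check it, which is why in-output watermarking (harder to remove) is the more serious proposal.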
"Whatever we decide to do now, however, will become systemic and potentially irreversible within 20 years."
Nothing like 20 years.
More like 2.
People are fast asleep.
I am a psychoanalyst. I talk to other individuals, that is my job.
The role of AI at the moment is debasement and violation of the individual on a huge scale.
For all the potential AI holds for humanity, at the moment a huge amount of deliberate damage is being caused.
I call it the mind fuck of the first person singular.
It is in plain sight.
https://open.substack.com/pub/itprofligate/p/ai-research-03?r=imxe&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
I fully agree. The real risk isn’t AI itself, but allowing systems to scale faster than our ability to establish shared standards, rights, and accountability. We’ve already seen how this plays out with social media. I explore this timing problem, how governance and social norms keep arriving too late, here: https://open.substack.com/pub/saraeson/p/when-technology-outpaces-society Early decisions become long-term architecture.
Thank you for this. I really appreciate the lens you’re offering here, especially the idea that early choices quietly become long-term social infrastructure.
Future looking... I’m curious where you think early intervention matters most if the goal is genuinely pro-human outcomes. Model access, deployment contexts, or incentive structures? It feels like getting those defaults right now matters more than adding guardrails later.
Yes "pro-human" I like that it still allows for innovation and creativity.
I think we all agree on the need for regulation. Creating effective regulations is the challenge. As long as AI is global in nature, local or national regulations will be problematic in many cases. For example, if someone from the EU can access American AI tools, it can undercut the best intentions.
I think China and Russia will present the largest challenge to effective regulatory efforts, as I believe they view our unrest and concerns as strategic benefits to them.
Government regulation over a tool? No way. People should have the freedom to use tools however they choose, as long as they’re not harming anybody else. This is where personal responsibility comes in. Will some people misuse it? Of course, because we do not live in a perfect world. But I do not agree with anybody telling other people how to live their lives. How do we decide who knows best and who should be controlled? Who gets to make these decisions? People shouldn’t get to vote about taking other people’s rights away. Long live freedom.
First of all, you sound plenty intelligent. Secondly, we need to stop handing over our power by always assuming someone else knows better and waiting for them to tell us what to do.
When it comes to AI, there is nothing to regulate. I don’t think there’s a problem to be solved honestly. I think everybody just gets to choose how they use it. And everybody else gets to choose what they want to participate in or not.
Once Neptune goes into Aries in a couple weeks, we will start to take our power back by trusting our own intuition and gut feelings again.
We have gotten so off track.
We can have both regulation and freedom. Government posts the no-poaching signs, builds a fence, then sets and enforces the rules. Then it gets TF out of the way.
Here are examples of government regulation over various tools:
Manufacturing Equipment and Workplaces: The Occupational Safety and Health Administration (OSHA) sets and enforces standards for safe working conditions, which includes regulations on the use, maintenance, and safety guards for machinery and equipment in workplaces.
Medical Devices and Drugs: The Food and Drug Administration (FDA) regulates tools and products used in healthcare, such as pharmaceuticals, medical devices, and vaccines, to ensure their safety and efficacy before they can be marketed and used.
Vehicles and Transportation Tools: Autonomous vehicles and drones are emerging technologies subject to various federal and state-level regulations from agencies like the Department of Transportation (DOT), covering operational standards, safety, and licensing to ensure public safety.
Environmental Tools/Pollutants: The Environmental Protection Agency (EPA) regulates the levels of pollutants emitted by industrial tools and processes (e.g., limits on sulfur dioxide emissions), requiring businesses to use specific pollution control technologies and obtain permits for certain activities.
This article isn't even talking about safety. It's about control.
"This is the time to set the tone, standards, values, rules, and rights that should be preserved and protected at any cost."
Who gets to decide what the tone, standards, values, rules and rights are?
Well that is the challenge isn't it?
Well, that’s my point: nobody should get to set the tone, standards, values, rules, and rights for other people; we should all get to decide for ourselves. It’s wild to me that some people really feel they can tell other people what to do and have this strong urge to control others.
We did not have a choice when it came to the overall AI rollout. My larger concern is that this framing treats all shared rules or regulation as freedom-limiting forms of passive coercion. Yet we already accept collective standards across nearly every domain of modern life.
Nobody gets to decide traffic laws for themselves. Nobody gets to decide food safety standards or acceptable levels of industrial environmental discharge for themselves. Those rules exist so one person’s freedom does not become another person’s harm.
In my view, the article is not arguing that a small group should dictate culture or beliefs for the masses. It is arguing that when a tool like AI reshapes labor, authorship, privacy, and power at scale, the baseline guardrails matter.
The hard part is deciding who sets those guardrails and how. That is a governance problem, not an argument for no rules or regulations at all.
Is your concern AI shaking up the free market? I’m trying to understand.