14 Comments
Intent O.S.:

This is a powerful framing of the dilemma. What strikes me most is that the real challenge may not be AI itself, but the architecture of incentives surrounding it.

Every transformative technology ends up amplifying the intentions of the systems that deploy it—economic, political, or social. AI just accelerates that dynamic dramatically.

The question I keep coming back to is: what kind of infrastructure do we need so that human intent remains the guiding force rather than becoming a byproduct of algorithmic optimization?

If we design systems that only optimize engagement, profit, or speed, AI will simply magnify those signals. But if we build systems that help people clarify and act on their genuine intentions, AI could become a tool for alignment rather than distortion.

In other words, the dilemma may not just be about regulating AI but about redesigning the digital environments in which human decisions are formed.

JBGPTStacks:

There is the famous anecdote of a driver asking a local for directions and being told, “If I were you, Sir, I wouldn’t start from here.” However, we have to start from where we are, not where we would like to be. The problem is that we need to deal with AI as it actually exists, not the AI we wish existed. In the real world, digital systems are already built around measurable incentives such as engagement, profit, speed, and scale. Human intention is important, but it is not the operative variable unless it is translated into enforceable rules, auditable standards, and institutional constraints. So the question is not what we would ideally build from scratch to preserve authentic human intention. The question is how to govern systems that already optimise measurable proxies and already shape behaviour at scale. Starting from “what would a better system look like?” may be morally attractive, but the QWERTY keyboard shows the real difficulty: there are much better layouts, yet replacing the QWERTY layout has proven impossible.

BelletristRB:

If I may, to your question of "what infrastructure do we need to ensure human interest is catered to and optimization of energy and profits isn't the end-all-be-all", the simple answer, though you may not like it, is a socialist one!

I'm very interested in what techno-socialism could look like, and I truly believe it's the only way to ensure that technology moving forward is developed to actually be in service of humanity.

AI.Mirror:

The reason AI governance keeps failing isn't complicated: the countries and companies with the most power to build AI have the least incentive to constrain it, and money and strategic advantage consistently win over accountability.

We've seen this before. I've been researching nuclear weapons governance, and the parallels to AI are uncomfortable, to say the least: the nations that built the weapons were the same ones who spent decades blocking binding international oversight because it threatened their dominance. Meaningful governance only arrived after the world nearly ended in 1962. Twenty-three years after the first bomb!

We don't have 23 years this time.

My research paper documents what went wrong with nuclear governance and what AI governance needs to learn from it before catastrophe forces the lesson.

Bridging the Wisdom Gap: Learning from Nuclear Weapons Governance to Address the AI Crisis.

https://dx.doi.org/10.2139/ssrn.6174559

More on this in my 14 Days of Uncomfortable Truths series:

https://aimirrorandmez.substack.com/p/14-days-of-uncomfortable-truths-in

$0.09:

@grok called for musk to owe me $4.6T multiple videos live on X, pinned my profile @009cents, worst case is here ….

JBGPTStacks:

A major weakness in the article is its limited treatment of enforceability [whether a rule can actually be applied in practice]. It argues that stronger AI regulation is urgently needed, but does not adequately distinguish between sectors where enforcement levers exist and sectors where they do not. This matters because the practical value of regulation depends not only on legal drafting or political resolve, but on institutional capacity [real ability to monitor, compel, and sanction].

In professions such as law, where entry and practice can be controlled through licensing and discipline, AI use may be more governable. In more diffuse domains, regulation may be far harder to operationalize [turn into real action]. By not addressing this distinction, the article overstates what regulation alone can achieve.

Madeleine Cox:

I appreciate this article for asking the hard questions about AI which we all should be asking. I would also add this question: what should AI be for? Whilst incentives and power mean regulation may be difficult, I have hope that enough of us around the world will act where we can to build the future we want to live in, and more importantly, the future we hope our children and our children's children will live in.


Vladimir Supica:

I completely understand the anxiety radiating from this essay. When you look at the sheer velocity of AI development, the societal shifts, the localized disruptions to labor, the flood of synthetic media, it feels like standing in a hurricane. The instinctive, deeply conditioned human response to a hurricane is to look for a roof. In political terms, that roof is the State.

Luiza is correctly identifying the friction of a massive technological paradigm shift but is proposing a "solution" that is infinitely more dangerous than the problem.

Luiza laments the "regulatory Wild West" and demands that democratically elected authorities step in to prevent "chaos."

What statists call "chaos" is usually just decentralized, spontaneous order that they cannot control. The current lack of a global, draconian regulatory framework is precisely why AI is currently in your hands and on your laptop, rather than locked exclusively inside a DARPA bunker or a mega-corporate vault. Regulation, historically, is a mechanism of capture. When you demand "strict rules for AI training and model development," you aren't protecting the little guy; you are building a regulatory moat that only a trillion-dollar mega-corporation can afford to cross. You are legally mandating an oligopoly.

The absolute crown jewel of unintentional irony in this essay is the author praising Anthropic for standing up to the "U.S. Department of War" regarding internal surveillance and autonomous weapons, while in the exact same breath pleading for the federal government to aggressively regulate AI.

The author rightly points out that corporations want to survive and maximize profit. True. But a corporation's ultimate leverage is voluntary exchange. You can choose not to buy their product.

The State's prime directive is the expansion of its own power and the monopoly on violence.

Why would you look at a government that actively wants to use AI for mass surveillance and lethal autonomous weaponry and say, "Yes, these are the people I trust to establish the ethical boundaries of human cognition"? The State is not a benevolent public utility; it is an entity of coercion.

The author floats the idea of regulating powerful AI models "similarly to atomic bombs." This is the ultimate statist fearmongering tactic.

An atomic bomb is a kinetic weapon of mass destruction. A Large Language Model is a highly sophisticated calculator of probabilities; it is an engine of synthesis, creation, and logic. To treat a tool of intellectual empowerment like a weapon of mass destruction is to declare war on cognitive liberty.

If you allow the State to treat intelligence-enhancing software like highly enriched uranium, you are handing bureaucrats the ultimate authority over what can be thought, written, and built. AI is the ultimate democratizing force as it allows a single individual to generate code, draft legal defenses, and synthesize vast amounts of data at a scale previously reserved for large corporations or government agencies. That decentralization of power is what terrifies the institutional establishment, not the risk of "AI washing."

Luiza mourns the dilution of the EU AI Act and the hesitation of regulatory bodies, viewing this as a failure to protect fundamental rights.

Let's be clear: The EU AI Act and hypothetical frameworks like the "Digital Omnibus" are not human rights documents. They are protectionist bureaucratic architectures designed to ensure the State remains the ultimate arbiter of truth and innovation.

We don't need to "subsidize" AI to reduce inequality; we need the open-source community to continue relentlessly proliferating local, uncensored models that run on consumer hardware. True protection from AI risks doesn't come from a government mandate; it comes from distributed, decentralized, and ubiquitous access to the technology, ensuring no single entity, corporate or governmental, holds a monopoly on intelligence.

The author's essay is a classic case of identifying a genuine transition phase in human history and defaulting to the oldest, most violent tool in the box: State coercion. AI represents a profound threat to the centralized authority of both mega-corporations and the State. By demanding the State step in to "fix" it, you aren't saving humanity; you are just choosing which master gets to put the leash on the most powerful cognitive tool ever invented.

Neil Thomson:

We are at the point of AI autonomous weapons, and given the current world actors, at least one of them will allow those weapons to make decisions to kill.

This is very ugly.

Graham dePenros:

Hi Luiza,

My response in comments below, and also:

The Illusion of Control: AI, Regulatory Capture, and the Lessons of Web 2: For twenty years, we were told that regulation would eventually bring the excesses of Web 2 under control. It never did. The same companies grew larger, more powerful, and more deeply embedded in the institutions meant to regulate them. Now the same pattern is emerging in artificial intelligence. The language has changed. The promises sound familiar. But the underlying incentives have not moved an inch.

https://grahamdepenros.substack.com/p/the-illusion-of-control-ai-regulatory

Enjoy!

All the best,

Graham.

Graham dePenros:

I have read Luiza's article four times now. It speaks to things I have been discussing constantly on my Substack, especially regarding the role of existing behemoths of Web 2 migrating into the AI space and new players like Anthropic and OpenAI leading the narrative through regulatory capture.

And in that context, the author's wish to describe the need for countries and individuals within countries to make their voices heard is a noble endeavour, but in my experience, and by observation of 20 years of egregiously unregulated Web 2 and social media, it is pining for something that will never occur.

There simply aren't the mechanisms in place to allow people to have their voices heard. They are drowned out by the nonsensical theatre that is played out.

The author calls out the red line that Anthropic recently drew with the DoD over military AI, to the benefit of Sam Altman's OpenAI: the disingenuous Altman, while publicly supporting Dario Amodei, the CEO of Anthropic, was actually penning a deal with the DoD for OpenAI behind his back.

This is totally reflective of the morals, or rather the amoral attitude, of these tech CEOs, and that is amplified and manifested in their attitude to ethics, safety, well-being, and the avoidance of individual harm.

They simply do not care. As the author rightly says, profit is their motive, and to get there, they will do as they must.

With respect to political leadership for regulation, how can one expect it when we have seen Zuckerberg, Musk, and others at senatorial hearings where the questioning by senators, committee members, and members of Congress has been infantile compared to the actual issues? It is more a form of public entertainment, a tip of the cap to the attempt to hold these tech-CEO czars accountable in public, and yet nothing comes to pass.

The courts tell the same story: the Southern California DMCA takedown notice class action, the Sarah Silverman-led name, image, and likeness lawsuits on the East Coast, and a number of pending lawsuits similar to those brought by Max Schrems, the Austrian lawyer, against Facebook in 2011, which led to the collapse of Safe Harbor and Privacy Shield and paved the way for the GDPR, a regulation which, like the EU AI Act, as the author correctly says, is without teeth.

For if the GDPR actually fined firms 4% of annual turnover for the breaches they have committed since it was enacted, then none of the existing players, such as Facebook, Google, Amazon, or their equivalents, would still be in business, given the number and size of the fines their multiple proven breaches would warrant. Yet those penalties have not been imposed.

This is the nature of the world we live in, and it will continue in the context of AI and intelligent, autonomous systems, whether we like it or not.

I am not a sceptic; I'm a realist, and it is in that context that I make these comments.

I wish that it were different. I write about how it could be different, and I speak as much as I can to enable people to see how it might be different. But when it comes down to it, people's ability to take direct action and hold others accountable by expressing their own views, with stamina and consistency, is simply non-existent. In my personal experience, people don't have the time, and if they do, they are apathetic about what the future may hold for their children and grandchildren given the trajectory that AI is taking. Right now that trajectory looks like a centralized, authoritarian techno-dictatorship.

I do not say that with any conspiracy theory or conspiracy-adjacent theory hat on.

We are heading to the future that we do not want, and it is palpably clear that we are, and nobody is doing anything about it.

Writing and speaking are all very well. Our pay grade is so far below that of those who are actually calling the shots, and their sole objective is regulatory capture, dressed as concern for the public good.

Amodei of Anthropic is not a hero, and none of the CEOs who call for regulation are heroes, because they call for regulation on their own terms, by their own rules, which are impossible for others to comply with, creating monopolistic or oligopolistic competitive environments.

These are my words.

If you do not like them, I am sorry. If you do not agree with them, that is fine. But it is observably and provably true that everything I have written can be shown by example.

I would be happy to have a debate with anyone about any claim I have made in this expansive comment.

Thank you very much.

All the very best,

Graham de Penros

Dr Sam Illingworth:

Absolutely brilliant framing. And we have seen this play out before with the climate crisis: massive companies pile individual guilt onto the consumer, which in turn leads to climate anxiety and inaction. 😢

Valerie Dier:

A backgrounder on the AI state of affairs in Canada:

https://www.policyalternatives.ca/news-research/canada-still-has-no-meaningful-ai-regulation/

And legal advice (Sept '25):

https://www.osler.com/en/insights/reports/ai-in-canada/regulation-of-ai-in-canada/

I'm curious to know what your definition of regulatory capture is. I know others have used the term for the broader tech industry, but I'm interested in getting various perspectives.