This is a powerful framing of the dilemma. What strikes me most is that the real challenge may not be AI itself, but the architecture of incentives surrounding it.
Every transformative technology ends up amplifying the intentions of the systems that deploy it—economic, political, or social. AI just accelerates that dynamic dramatically.
The question I keep coming back to is, what kind of infrastructure do we need so human intent remains the guiding force rather than becoming a byproduct of algorithmic optimization?
If we design systems that only optimize engagement, profit, or speed, AI will simply magnify those signals. But if we build systems that help people clarify and act on their genuine intentions, AI could become a tool for alignment rather than distortion.
In other words, the dilemma may not just be about regulating AI but about redesigning the digital environments in which human decisions are formed.
We are at the point of AI autonomous weapons, and given the current world actors, at least one of them will allow those weapons to make decisions to kill.
This is very ugly.
Hi Luiza,
My response in comments below, and also:
The Illusion of Control: AI, Regulatory Capture, and the Lessons of Web 2: For twenty years, we were told that regulation would eventually bring the excesses of Web 2 under control. It never did. The same companies grew larger, more powerful, and more deeply embedded in the institutions meant to regulate them. Now the same pattern is emerging in artificial intelligence. The language has changed. The promises sound familiar. But the underlying incentives have not moved an inch.
https://grahamdepenros.substack.com/p/the-illusion-of-control-ai-regulatory
Enjoy!
All the best,
Graham.
I have read Luiza's article four times now. It speaks to things I have been discussing constantly on my Substack, especially regarding the role of existing behemoths of Web 2 migrating into the AI space and new players like Anthropic and OpenAI leading the narrative through regulatory capture.
And in that context, the author's call for countries, and for individuals within them, to make their voices heard is a noble endeavour, but in my experience, and from observing 20 years of egregiously unregulated Web 2 and social media, it is pining for something that will never occur.
There simply aren't the mechanisms in place to allow people to have their voices heard. They are drowned out by the nonsensical theatre that is played out.
The author calls out the recent red line that Anthropic drew with the DoD over military AI, to the benefit of Sam Altman's OpenAI. Altman, disingenuously, supported Dario Amodei, the CEO of Anthropic, in public while penning the DoD deal for OpenAI behind his back.
This is totally reflective of the morals, or rather the amoral attitude, of these tech CEOs, and it is amplified and manifested in their attitude to ethics, safety, well-being, and the avoidance of individual harm.
They simply do not care. As the author rightly says, profit is their motive, and to get there, they will do as they must.
With respect to political leadership on regulation, how can one expect it when we have seen Zuckerberg, Musk, and others at senatorial hearings where the questioning by senators and committee members has been infantile compared to the actual issues? Those hearings are more a form of public entertainment, a tip of the cap to holding these tech czars accountable, and yet nothing comes to pass.
The courts offer little more hope: consider the Southern California DMCA takedown notice class action, the Sarah Silverman-led name, image, and likeness lawsuits on the East Coast, and a number of pending lawsuits similar to those brought by Max Schrems, the Austrian lawyer, against Facebook in 2011, which led to the collapse of Safe Harbor and Privacy Shield and paved the way for the GDPR, a regulation which, like the EU AI Act, as the author correctly says, is without teeth.
For if the GDPR had actually fined firms 4% of annual turnover for the breaches they have committed since it was enacted, then none of the existing players, such as Facebook, Google, Amazon, or their equivalents, would still be in business, given the number and size of the fines warranted by the multiplicity of GDPR breaches they have been found guilty of, yet for which the penalty was never imposed.
This is the nature of the world we live in, and it will continue in the context of AI and intelligent, autonomous systems, whether we like it or not.
I am not a sceptic; I'm a realist, and it is in that context that I make these comments.
I wish that it were different. I write about how it could be different, and I speak as much as I can to enable people to see how it might be different. But when it comes down to it, people's ability to take direct action and hold others accountable by expressing their own views, with stamina and consistency, is simply non-existent. In my personal experience, people don't have the time, and if they do, they are apathetic about what the future may hold for their children and grandchildren given the trajectory that AI is taking, which right now looks like a centralized, authoritarian techno-dictatorship.
I do not say that with any conspiracy theory or conspiracy-adjacent theory hat on.
We are heading to the future that we do not want, and it is palpably clear that we are, and nobody is doing anything about it.
Writing and speaking are all very well. Our pay grade is so far below that of those who are actually calling the shots, and their sole objective is regulatory capture, dressed as concern for the public good.
Amodei of Anthropic is not a hero, and nor are any of the CEOs who call for regulation, because they call for it on their own terms, by their own rules, which are impossible for others to comply with, creating monopolistic or oligarchic competitive environments.
These are my words.
If you do not like them, I am sorry. If you do not agree with them, that is fine. But everything I have written here is observable and can be demonstrated by example.
I would be happy to have a debate with anyone about any claim I have made in this expansive comment.
Thank you very much.
All the very best,
Graham de Penros
Absolutely brilliant framing. And we have seen this play out before with the climate crisis: massive companies pile individual guilt onto the consumer, which in turn leads to climate anxiety and inaction. 😢
A backgrounder on the AI state of affairs in Canada:
https://www.policyalternatives.ca/news-research/canada-still-has-no-meaningful-ai-regulation/
And legal advice (Sept '25):
https://www.osler.com/en/insights/reports/ai-in-canada/regulation-of-ai-in-canada/
I'm curious to know what your definition of regulatory capture is. I know others have used the term for the broader tech industry, but I'm interested in hearing various perspectives.