16 Comments
Ammon Haggerty:

Tornado is an apt metaphor. Disappointing to say the least. Thanks for the update.

Larry G Maguire | Psychologist:

EU leaders have no political balls. They are the lapdogs of US interests. Nothing new here... Skynet here we come

Gary @ AI Loops:

If the EU is simply following, who—or what—is truly leading in AI governance?

Larry G Maguire | Psychologist:

Well, events of recent days have perhaps shown the EU that it doesn't have an ally in Trump and Vance. Now they might actually be forced to grow some.

Gary @ AI Loops:

Beyond fortitude, what else would help AI governance serve the needs of the people?

Larry G Maguire | Psychologist:

Taking development out of the hands of the few corporations that currently control it.

Gary @ AI Loops:

And into the hands of?

Larry G Maguire | Psychologist:

You tell me...

Gary @ AI Loops:

Amazing work. This breakdown of the shifting AI governance landscape is incredibly insightful. The speed at which regulatory narratives are changing, particularly the EU's pivot from liability to competitiveness, raises deeper questions about who actually governs AI. Is it policymakers, or the incentives embedded in the AI systems themselves?

What struck me is how these same tensions play out not just between global powers, but across institutions. I couldn't help but think of a friend who had to adjust their grant-writing process using AI to navigate the approval filters of the U.S. National Institutes of Health (NIH). They weren't just optimizing language for clarity; they were aligning their proposals with what both AI and human reviewers were primed to recognize as fundable work in the current Administration's context. Much like geopolitical AI policies reshape business priorities, AI-driven oversight mechanisms across institutions quietly structure what is seen, valued, and approved.

If AI governance at the national level is shifting toward “business-friendly” oversight, do you think we’ll see a similar shift in AI’s role across institutions? Could AI oversight mechanisms become less about ensuring compliance and more about reinforcing productivity and strategic priorities? Would love to hear your take.

Bartosz Kowalski:

Let me be the devil's advocate here. Or the business advocate. I agree that these regulations are absolutely necessary. But, as you wrote yourself, the AI Act provides for many exclusions, and the definitions are (in my opinion) very vague and poorly worded. How many companies in the European market can afford the luxury of cutting a few hundred man-days out of their schedule in, say, the fourth quarter of the year to urgently review and adjust their processes? More than 99% of them are micro, small, and medium-sized enterprises. Naturally, they will have proportionally fewer processes and tools to analyze than the big players. But the obligation will not disappear just because they employ few people. And if they don't hire a lawyer, they'll have to pay an outside law firm, and then comply with the guidelines anyway. The same, by the way, applies to local governments. Only a small minority of local governments and territorial units will have the capacity to analyze for themselves whether the AI tools they use are prohibited or high-risk.

So it seems to me that even a temporary relaxation of restrictions will give companies and organizations time to take a breath and slowly get their bearings.

Gary @ AI Loops:

If temporary relaxation is the answer, what ensures it doesn’t become the norm?

Bartosz Kowalski:

Nothing, I'm afraid. But I don't want to be misunderstood either. Deregulation is not a way out per se, but handled wisely (and temporarily) it would give European entrepreneurs time to adapt to the new reality and build a competitive advantage. This may not have come across in my previous comment (I described it in my article), but I don't think the regulations themselves are the problem here, only their unfair application. Big tech from overseas and from China flouts European regulations (see the scam ads in Meta and Google products). Once in a while they will pay a token fine, imperceptible against their global turnover. For European businesses, meanwhile, complying with these regulations is a matter of survival. This is where I see room for improvement: tightening the screws on non-EU operators.

Founders Radar:

Great article, Luiza. What strikes me is how the EU is still focusing on protecting fundamental rights, safety, and democratic control in its AI legislation. Yet there's still a tricky line between creating a framework that enables innovation and stifling progress with too many rules.

Daniel Florian:

It has long been disputed what role the AI Liability Directive would play, given that the EU has just passed the Product Liability Directive. I would tend to agree that we should first and foremost see whether there are indeed any gaps that a dedicated AI Liability Directive needs to fill. I also don't think that the fact that AI systems are, to a certain degree, black boxes makes it hard to claim they are defective. There are other ways to assess a defect that don't require disassembling a product down to its nuts and bolts.
