Risk-Based AI Laws Are Broken

Legal frameworks such as the EU AI Act are not effective in dealing with emerging AI risks. We must acknowledge that and take a step further | Edition #205

Luiza Jarovsky, PhD
May 21, 2025

👋 Hi, Luiza Jarovsky here. Welcome to our 205th edition, now reaching 61,900+ subscribers in 168 countries. To upskill and advance your career:

  • AI Governance Training: Join my 4-week, 15-hour program in July

  • Learning Center: Receive free AI governance resources

  • Job Board: Find open roles in AI governance and privacy

  • Subscriber Forum: Participate in our daily discussions on AI


👉 A special thanks to TrustWorks, this edition's sponsor:

VTEX moved from a legacy privacy tool to a platform built for scale. As regulatory demands increased, they needed automation across data mapping, RoPA, DSR handling, and AI governance. TrustWorks delivered the agility and visibility their privacy team needed to stay connected and move faster. See the full transformation.


Risk-Based AI Laws Are Broken

A few days ago, I wrote about the profound ways AI is harming us. Its negative impact is often systemic and unpredictable, affecting our critical thinking, autonomy, relationships, and even our conception of what it means to be human.

After reading that essay, many of you likely asked yourselves, “How can we change that?” and “Can regulation fix these issues?” (while more pessimistic readers might have simply concluded that we are doomed).

Today, I explain why risk-based regulatory approaches such as the EU AI Act, which many incorrectly deem ‘too strict’ and an ‘innovation stifler,’ are in fact weak tools for addressing the problems I described.

[On Sunday, I will publish a third essay in this series, with my thoughts on potential legal and regulatory paths forward, especially with AI's emerging risks in mind.]

-

In risk-based regulatory approaches to AI, such as the EU AI Act, the South Korean AI law, and specific U.S. state laws, the lawmaker sets in advance the criteria for classifying AI systems and/or AI models as prohibited, high-risk, low-risk, or any other predefined risk category.

Once the law enters into force, AI providers and deployers will be subject to the obligations corresponding to the designated risk level.

Even without considering the specific provisions of these risk-based AI laws, three factors make them weak from the start: the timeframe, strategic legal planning, and the vulnerability spectrum.

1. The Timeframe

There is always a significant gap in time, ranging from several months to years depending on the jurisdiction, between the moment a law is drafted and the moment it becomes enforceable.

This is how the legislative process works, for reasons that include legal certainty and constitutional requirements.

At the same time, AI research, development, and deployment, along with the risks posed by emerging applications, have advanced at such an accelerated pace in recent years that it is difficult to predict what will be happening in AI even a few weeks from now.

As a consequence, any AI risk classification will already be outdated by the time the law becomes enforceable, simply because new applications, integrations, use cases, and capabilities emerge, and unforeseen risks become clearer. (The EU AI Act is not even fully enforceable yet, and we are already watching its inadequacy unfold.)

One could argue that future-proofing mechanisms built into risk-based AI laws could solve most of these timeframe-related challenges.

However, these mechanisms (e.g., amendments to the EU AI Act through delegated acts) are usually narrow and targeted, and they require prior approval. Emerging risks, in contrast, demand a systemic, flexible, and immediate legal response.

2. Strategic Legal Planning

Another weakness of risk-based AI frameworks is that they enable companies to engage in strategic legal planning to ‘escape’ prohibitions and high-risk categories.

This, in turn, reduces the overall effectiveness of the law and makes enforcement harder.

Many people outside the legal field do not realize it, but as soon as a new AI law's risk categories and their respective obligations are agreed upon, legal teams, including law firms and in-house legal departments, start strategizing about how to use them to their clients' benefit.

What does that mean?

This post is for paid subscribers