Luiza's Newsletter

High-Risk AI

Existing attempts to classify and regulate AI systems as risky or not risky have been incomplete and have overlooked essential aspects of the interaction between humans and machines | Edition #255

Luiza Jarovsky, PhD
Dec 03, 2025 ∙ Paid
“Tiger in a Tropical Storm” by Henri Rousseau, 1891 (oil on canvas, modified)

👋 Hi everyone, Luiza Jarovsky, PhD, here. Welcome to our 255th edition, trusted by more than 87,400 subscribers worldwide.


🎓 This is how I can support your learning and upskilling journey in AI:

  • Join my AI Governance Training [or apply for a discounted seat]

  • Strengthen your team’s AI literacy with a group subscription

  • Sign up for our Learning Center’s weekly educational resources

  • Receive our job alerts for open roles in AI governance and privacy

  • Discover your next read in AI and beyond in our AI Book Club


👉 A special thanks to AgentCloak, this edition’s sponsor:

Compliance teams are now requiring AI systems to operate with only the essential data, a priority in Europe under the EU AI Act. AgentCloak seamlessly cloaks and uncloaks sensitive data between AI clients and servers to ensure that AI systems only access the minimum amount of data they need to operate. Discover more at agentcloak.ai


*To support us and reach over 87,400 subscribers, become a sponsor.


High-Risk AI

How do we assess whether an AI system or model poses a risk to individuals, groups, or society as a whole? What types of AI models and systems should be under stricter legal scrutiny and oversight? What should be considered “high-risk AI”?

Any country or region aiming to establish a national AI strategy will have to navigate these risk-related questions, decide how to address them, and, implicitly or explicitly, justify its decision to the public.

Even if a country decides not to enact a comprehensive AI law at the national level (as is currently happening in the United States), or not to impose scrutiny or oversight on any AI system or model, the topic of AI risk cannot be ignored. Why?

Because a legally available product or service that poses a risk can lead to harm, causing individual and systemic social problems that may be difficult to correct later on, including potentially catastrophic events.

In AI, assessing risk is particularly important because “artificial intelligence” is an umbrella term that encompasses many types of products, services, and applied technologies.

A general-purpose chatbot, a self-driving car, a voice assistant, a home humanoid robot, a security camera, a toy, and a social media recommender system might all fall into the AI category.

However, these products and services differ substantially: they affect different people, are used in different contexts, and clearly present different risk profiles. Even though they might fit the same “AI” category, they probably should be regulated differently.

This is one of the challenges of AI regulation, as products that have almost nothing in common will be under the same legal framework and might have to follow the same rules simply because they have an AI component or they fit the law’s definition of AI.

This is also why understanding risk in AI, and what counts as a high-risk AI system or model, is so important. It will help lawmakers, policymakers, authorities, practitioners, and the public better understand AI and how it might affect them.

However, despite the high stakes, risk is currently a poorly understood and under-explored legal territory, and most people (including regulatory authorities) do not seem to have realized this yet.

Existing attempts to classify and regulate AI systems as risky or not risky have been incomplete and rigid, and have overlooked essential aspects of the interaction between humans and machines.

Even the EU AI Act, often seen as “strict” or a global reference in AI regulation, misses critical AI-related risks that have become evident over the past three years and have already led to multiple cases of harm, including death.

And this is where things get complicated:

This post is for paid subscribers
