👋 Hi, Luiza Jarovsky here. Welcome to our 170th edition, read by 52,200+ subscribers in 165+ countries. Not a subscriber yet? Join us.
🌎 We are a leading AI governance publication helping to shape the future of AI policy, compliance & regulation. It's great to have you here!
👉 A special thanks to NowSecure for sponsoring this week's free edition of the newsletter. Check them out:
Third-party apps used by your employees or third-party code in the supply chain of the apps you build could be sending your sensitive information to non-secure destinations without your knowledge. The worst way to discover your company's use of non-secure AI is through the news. Take the first step in AI governance with NowSecure. Learn more.
*Promote your AI governance solution to 52,200+ readers: sponsor us
🌪️ The AI Governance Tornado
Many haven't realized it yet, but the global AI governance landscape is shifting dramatically. It's still unclear where it's heading, but this week's AI Summit in Paris revealed two main narratives:
The EU wants to signal that it's ready for the AI race and is willing to do whatever it takes to stay competitive, particularly against China and the U.S. It even went so far as to withdraw the AI Liability Directive, as announced yesterday.
The U.S., now under the new administration, is advancing a deregulation strategy. It has revoked the previous Executive Order on AI and made it clear this week that it will not accept regulatory constraints from the EU.
➡️ The Narrative Shift
Quotes from world leaders at the AI Summit help illustrate this shift in narrative:
French President Emmanuel Macron said: "Europe is going to accelerate; France is going to accelerate - and so for us, France, we're announcing tomorrow at this summit €109 billion of investment in AI over the next few years."
Ursula von der Leyen, the President of the European Commission, said: "We want Europe to be one of the leading AI continents. And this means embracing a way of life where AI is everywhere. AI can help us boost our competitiveness, protect our security, shore up public health, and make access to knowledge and information more democratic. And this is what you – entrepreneurs and researchers, investors and business leaders – are showcasing here in Paris. This is a glimpse of the AI continent we want to become."
U.S. Vice President JD Vance stated: “The Trump administration is troubled by reports that some foreign governments are considering tightening the screws on U.S. tech companies with international footprints. Now America cannot and will not accept that.”
(*Watch an excerpt from JD Vance's speech here.)
➡️ AI Liability Withdrawal
From a legal perspective, a major milestone happened yesterday when the EU withdrew the AI Liability Directive, stating: "No foreseeable agreement - the Commission will assess whether another proposal should be tabled or another type of approach should be chosen."
This decision has nothing to do with legal arguments and everything to do with politics. Take a look at some additional statements from this week's AI Summit, particularly regarding regulation:
French President Macron said: "We will simplify ... It's very clear we have to resynchronise with the rest of the world."
Henna Virkkunen (the EU "digital chief") promised the audience that the EU would simplify its rules and apply them in a "business-friendly" way — whatever that means from a legal perspective, especially when it comes to reducing risk and protecting fundamental rights.
The AI Liability Directive had been "bothering" the tech sector since well before this AI Summit. A reminder that on January 29, the American Chamber of Commerce to the EU (AmCham EU) published a position paper urging EU policymakers to withdraw the Directive. Here's what they said:
"EU policymakers must withdraw the AI Liability Directive in order to avoid adding unnecessary complexity and uncertainty to Europe’s AI regulatory landscape. With the recently passed Product Liability Directive already expanding liability rules to AI, AmCham EU is joining the call with other industry associations in warning that additional, overlapping measures on AI would hinder innovation and disrupt established business practices. Policymakers should instead focus on regulatory simplification, in line with the Draghi Report on Competitiveness."
The withdrawal of the AI Liability Directive is yet another sign that the EU is shifting gears (read my deep dive on EU liability rules here). The political tide is turning, and the new priority seems to be "winning" the AI race, whatever it takes.
➡️ EU AI Act
For those with high expectations for the EU AI Act and its enforcement, particularly in protecting fundamental rights, here are a few reminders amid the ongoing global AI governance turmoil:
The EU AI Act has entered into force, and some of its provisions have been enforceable since February 2nd. However, as I've discussed in this newsletter and my AI Governance Training over the past few months, the AI Act is already heavily diluted and full of exceptions and loopholes. Its enforcement depends on political will, and priorities appear to have shifted.
It's no longer 2024. This year brought Trump and a major narrative shift that includes deregulation. The EU is openly changing its narrative to reposition itself as a more competitive player. Recent EU reports, such as this one by Mario Draghi, have intensified the internal pressure.
Given the EU AI Act’s numerous loopholes and exceptions—combined with Henna Virkkunen's declaration that the rules will be applied in a “business-friendly” way—it's unclear what will remain or how effective enforcement will be. I'm not particularly optimistic.
➡️ Paradigm Shifts
My final comment: it seems the global AI governance tornado is just starting. The political landscape has changed, and the EU seems interested in pivoting toward greater competitiveness and weaker enforcement. I’ll keep you posted.
On Sunday, paid subscribers will receive premiere access to the recording of my live conversation with Anu Bradford on “The Global AI Race: Regulation & Power.” Don't miss it!
💼 AI Governance Careers Are Surging: Upskill & Lead
I invite you to join the 18th cohort of my AI Governance Training and gain skills to tackle emerging legal and ethical challenges in AI. This 12-hour live online program goes beyond standard certifications and offers:
8 live sessions with me, curated self-study materials, and quizzes;
A training certificate and 13 CPE credits;
A 1-year subscription to the paid newsletter ($115 value);
A networking session with peers;
Office hours for career-related discussions.
Registration for the March cohort is open, and only 11 spots remain. Read alumni testimonials and join 1,100+ professionals who have advanced their careers with us:
*If cost is a concern, we offer discounts for students, NGO members, and individuals in career transition. To apply, fill out this form.
🚩 AI Literacy Fake News
I've seen many misleading posts about the EU AI Act's AI literacy obligation. Here's some of the fake information being spread—and what everyone should know:
Article 4 of the EU AI Act, which establishes an AI literacy obligation for providers and deployers of AI systems, took effect on February 2. Naturally, people have started writing about it and promoting their AI literacy services.
In recent days, I've seen many people—some with large followings—misinterpreting this obligation. They say that AI literacy involves becoming "AI first" or "increasing AI-led productivity." They also suggest that AI literacy training should focus on "accelerating AI adoption" within organizations.
Those sharing this narrative haven't understood Article 4. If you check Recital 20, which clarifies the AI literacy obligation, it establishes that AI literacy efforts should equip providers, deployers, and affected persons with knowledge about:
1. Protecting fundamental rights;
2. Protecting health and safety;
3. Enabling democratic control in the context of AI;
4. Helping everyone involved to make informed decisions regarding AI systems;
5. Understanding the correct application of technical elements during the AI system’s development;
6. Applying protective measures during AI systems' use;
7. Interpreting an AI system’s output;
8. Helping affected persons understand how decisions taken with the assistance of AI will have an impact on them;
9. Complying with the EU AI Act;
10. Understanding how the EU AI Act will be enforced;
11. Improving working conditions;
12. Consolidating trustworthy AI innovation in the EU;
13. Learning about the benefits, risks, safeguards, rights, and obligations in relation to the use of AI systems;
and more!
In summary, this has nothing to do with becoming "AI-first" and everything to do with protecting fundamental rights, complying with the EU AI Act, and ensuring AI is developed, deployed, and used in a way that aligns with the EU's approach to trustworthy AI.
Also, many people selling "EU AI Act AI literacy training" do not have enough background knowledge to teach what the AI Act actually requires. It has nothing to do with "AI-led innovation or productivity" or implementing AI across an organization. This is not what the EU AI Act establishes as AI literacy.
These misleading posts about AI literacy reflect a larger problem that many lawyers and legal professionals, including myself, recognize: people don't take legal expertise seriously. They forget that there is much beyond "just reading the Article" or a quick Google/ChatGPT answer. That's why we spend years in law school before we can practice law.
So next time you see someone claiming the EU AI Act mandates companies to train employees to become "AI first," let them know that this is not true.
🚫 Prohibited AI Practices in the EU
On February 2, the first provisions of the EU AI Act took effect, including Article 5, which deals with the AI Act's prohibited AI practices. Two days later, the EU Commission published the long-awaited guidelines clarifying these prohibitions.
Everyone working in AI should be familiar with these prohibited practices, especially since non-compliance can result in the highest possible fine under the EU AI Act: up to €35 million or 7% of the organization's total worldwide annual turnover, whichever is higher.
On Sunday, I published my highlights from the 140-page document, including a final section with my critical insights. Read my analysis here.
🚀 Daily AI Governance Resources
Thousands of people receive our daily emails with educational and professional resources on AI governance, along with updates on our free live sessions and training programs. Join our learning center:
🤖 EU AI Act: Definition of AI
The EU has published guidelines on the EU AI Act's definition of AI. Many may be surprised to discover that the following systems are not covered:
1. "machine learning-based models that approximate functions or parameters in optimization problems while maintaining performance. The systems aim to improve the efficiency of optimisation algorithms used in computational problems. For example, they help to speed up optimisation tasks by providing learned approximations, heuristics, or search strategies."
2. "satellite telecommunication system to optimize bandwidth allocation and resource management. In satellite communication, traditional optimization methods may struggle with real-time demands of network traffic, especially when adjusting for varying levels of user demand across different regions. Machine learning models, for instance, can be used to predict network traffic and optimize the allocation of resources like power and bandwidth to satellite transponders, having similar performance to established methods in the field."
3. "a chess program using a minimax algorithm with heuristic evaluation functions can assess board positions without requiring prior learning from data. While effective in many applications, heuristic methods may lack adaptability and generalization compared to AI systems that learn from experience."
4. "All machine-based systems whose performance can be achieved via a basic statistical learning rule, while technically may be classified as relying on machine learning approaches fall outside the scope of the AI system definition, due to its performance."
5. "Static estimation systems, such as customer support response time system that are based on static estimation to predict the mean resolution time from the past data and trivial predictors such as demand forecasting for a store to predict how many items of a product the store will sell each day are other examples, that help to establish a baseline or a benchmark, e.g. by predicting average or mean."
📢 Spread AI Governance Resources
Enjoying this edition? Share it with friends and colleagues:
📄 Generative AI's Impact on Critical Thinking
Studies are starting to show what many of us feared: AI use might lead to overreliance and human disempowerment.
In this context, the paper "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers" by Hao-Ping (Hank) Lee, Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, and Nicholas Wilson is a must-read for everyone in AI. Check out the paper's conclusion:
"1. We surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically. Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources.
2. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.
3. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.
4. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship.
5. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows.
6. To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers."
➡️ My comments:
It would be interesting to see more studies like this, especially with larger samples and focusing on different occupations, demographics, and use patterns;
The paper focuses on Generative AI; it will be interesting to see a similar study focused on advanced AI agents, especially as their popularity grows.
📚 AI Book Club: What Are You Reading?
📖 More than 2,200 people have joined our AI Book Club and receive our book recommendations.
📖 The 17th recommended book was "Chip War: The Fight for the World's Most Critical Technology," by Chris Miller.
📖 Ready to discover your next favorite read? See our previous reads and join the AI book club:
🔥 Job Opportunities in AI Governance
Looking for a job in AI governance? Check out our global job board and subscribe to receive our weekly alerts.
💡 Before you go
Thanks for reading! If you enjoyed this edition, here's what you can do next:
→ Keep the conversation going
Start a discussion on social media about this edition's topic;
Share this edition with friends, adding your critical perspective.
→ Upgrade your subscription
Stay ahead in AI: upgrade to paid and start receiving my Sunday deep dives on emerging challenges in AI governance;
Looking for an authentic gift? Surprise them with a paid subscription.
→ For organizations
Teams promoting AI literacy can purchase 3+ subscriptions at a discount here or secure 3+ seats in our AI Governance Training at a reduced rate here;
Companies offering AI governance or privacy solutions can sponsor this newsletter and reach thousands of readers. Fill out this form to get started.
See you soon!
Luiza