Dismantling the EU AI Act
Contrary to what EU officials are saying, several of the proposed amendments weaken AI regulation in the EU and go against the protection of fundamental rights | Edition #249
Since the beginning of the year, I have been writing about the EU's narrative shift, driven by the publication of the Draghi report on European competitiveness and by growing external pressure, especially from Washington.
The AI Action Summit in February made this new narrative loud and clear to the public, as EU officials abandoned fundamental rights-focused statements and announced they would “remove the red tape,” simplify, and apply the EU AI Act “in a business-friendly way.”
As I wrote in February, from a legal perspective, it is unclear to me what applying a law “in a business-friendly way” means, especially when one of its main goals is to restrict and shape corporate behavior in an ethical and socially beneficial way. Giving in to profit-oriented desires goes against the regulatory purpose, especially in a field like AI, where risks can be systemic.
A few months ago, we heard the first rumors of GDPR and EU AI Act amendments aimed at simplifying compliance (and, let us be clear, at immediately pleasing the Trump Administration and avoiding tariffs as well), which were supposed to be decided at the end of 2025.
Last week, the draft of the proposed amendments to the AI Act, the GDPR, and other EU laws was leaked.
On AI regulation, the EU seems to have given in to external pressure and decided to weaken its fundamental rights-based legal framework.
The leaked document is a draft and could, in principle, change until November 19, the scheduled day for the official publication of the ‘Digital Omnibus,’ when the EU will announce the planned amendments to some of its tech laws. Realistically, however, there are only eight days left. That is not enough time for meaningful changes.
Contrary to what EU officials have been saying since the document was leaked, several of the proposed amendments to the AI Act, especially when read together with the proposed GDPR amendments, will weaken AI regulation in the EU, which was already below what would be expected from a comprehensive legal framework focused on the protection of fundamental rights.
I want to start by highlighting two paragraphs of the draft that serve both as context and justification for the proposed amendments:
“These consultations revealed implementation challenges that could jeopardize the effective entry into application of key provisions. These include the slow designation of national competent authorities and conformity assessment bodies, as well as a lack of harmonised standards for the AI Act’s high-risk requirements, guidance, and compliance tools. Such delays risk increasing costs for businesses and slowing innovation.
To address these challenges, the Commission is proposing targeted simplification measures aimed at ensuring timely, smooth, and proportionate implementation.”
So, strangely, the justification for the proposed amendments to the EU AI Act is designation delays by the EU member states and work delays by the EU standardization organizations.
If these were really the reasons, would it not be more coherent to pressure them to hurry up, hire more people, increase the budget, or help solve the bureaucratic obstacles? Does the EU need to amend some of the AI Act’s core obligations because EU bodies are delayed? It does not make sense.
Let’s look at some of the proposed amendments and how they would weaken AI regulation and undermine the protection of fundamental rights:
A) Proposed amendment: “[Placeholder for measure still under consideration: Aligning the implementation timelines to address the uncertainty and challenges caused by the delay in availability of standards]”
It is unclear if this will happen or not, but this is a blank check for a postponement of the obligations for providers of high-risk AI systems, which should enter into force on August 2, 2026.
The EU AI Act already has a major loophole for operators of high-risk AI systems already on the EU market (see Article 111.2), as they will only be subject to the AI Act if, starting on August 2, 2026, those systems undergo significant changes in their design (with the exception of public authorities).
Now, with the possibility of postponement, two years after the law entered into force, there may be no foreseeable effective date for the obligations of providers of high-risk AI systems, which are essentially the core obligations of the AI Act.
If certain AI systems were classified as high-risk, it is because they might, directly or indirectly, have a negative impact on fundamental rights. Continuously postponing the enforcement of these rules ignores the risk these AI systems pose and the harm they might cause.
B) Proposed amendment: “Requiring the Commission and Member States to foster AI literacy instead of enforcing unspecified obligations on operators;”
To contextualize this amendment, today the AI literacy obligation is framed this way:
“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
This is a perfectly reasonable and flexible obligation, establishing minimal accountability for providers and deployers of AI systems by requiring that their staff understand AI and the risks it poses, especially considering the people who will be affected by these AI systems.
This is also one of the few obligations that are not only applicable to high-risk AI systems but also potentially global in impact, setting a higher AI literacy standard for AI companies, promoting the protection of fundamental rights, and fostering the protection of affected persons.
The proposed new wording for the article is the following:
“The Commission and Member States shall encourage providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
The obligation now falls on the European Commission and EU Member States to encourage providers and deployers to take AI literacy measures, effectively removing the direct obligation on companies.
This type of amendment seems to be driven by industry lobbyists, especially as it goes against a simple, flexible, and fundamental rights-based obligation to foster AI literacy in the AI industry.
It is unclear to me in what sense it will foster innovation or European competitiveness. It seems to actually have the opposite effect, encouraging AI companies to do the minimum possible, even when the prevention of AI harms is involved.
C) Proposed amendment: “Reducing the registration burden for AI systems which are used in high-risk areas but for which the provider has concluded that they are not high-risk as they are only used for narrow or procedural tasks;”
This is another amendment that appears to be driven by the AI industry lobby and that goes directly against the protection of fundamental rights.
To contextualize, Article 6.3 of the AI Act says that even if an AI system is covered by Annex III, and therefore falls into one of the high-risk criteria, its providers might not have to comply with the rules for high-risk AI systems if it does not pose a significant risk of harm to health, safety, or fundamental rights.
The problem is that, to verify the absence of risk, the article establishes four conditions that are extremely vague and leave room for exploitation by AI companies. One of the few mechanisms intended to prevent this exploitation was the registration obligation.
According to Article 49.2 of the AI Act:
“Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 71.”
The registration of these systems in a publicly available database was an important mechanism to ensure that companies are held accountable and that they do not try to exploit the AI Act.
If a company whose AI system poses a risk to fundamental rights attempts to bypass the obligations for providers of high-risk AI systems, the registration obligation could serve as an accountability mechanism.
According to the draft:
“To streamline compliance and reduce the associated costs, providers of AI systems should not be required to register AI systems referred to in Article 6(3)”
This minimal cost reduction will have a major impact on the protection of fundamental rights, as companies providing high-risk AI systems that attempt to bypass the EU AI Act may never be held accountable.
-
Beyond the EU AI Act, several proposed GDPR amendments will affect AI regulation and undermine the protection of fundamental rights.
I will comment on them later this week.
Next week, on November 19, we will know if these proposed amendments are final or not. I will keep you posted.
A special thanks to AIUC Global, this edition’s sponsor:
Effective AI Governance starts with personal intention, not centralized oversight. Start using nuanced language recognizing human contribution and empower your teams to openly discuss risks, innovation opportunities, and AI tool choices, sharing knowledge and building accountability from the start. Adopt the new AI Usage Classifications Framework today!