🚫 Prohibited AI Practices in the EU
Emerging AI Governance Challenges | Paid Subscriber Edition | #169
👋 Hi, Luiza Jarovsky here. Welcome to our 169th edition, read by 51,800+ subscribers in 165+ countries. Not a subscriber yet? Join us.
🌎 We are a leading AI governance publication helping to shape the future of AI policy, compliance, and regulation. This is a pivotal moment for advancing AI governance—it's great to have you here!
🚫 Prohibited AI Practices in the EU
On February 2, the first provisions of the EU AI Act took effect, including Article 5, which sets out the Act's prohibited AI practices.
Two days after the prohibitions took effect, the EU Commission published the long-awaited guidelines clarifying these prohibited AI practices. Below are my highlights from the 140-page document, including a final section with my critical comments.
Everyone in AI should be familiar with these prohibited practices, especially since non-compliance could result in the highest fine under the EU AI Act: up to €35 million or 7% of the organization's total worldwide annual turnover, whichever is higher. This overview is, of course, for educational purposes only.
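To make the "whichever is higher" rule concrete, here is a minimal sketch in Python of how the fine ceiling scales with an organization's size (the turnover figures are illustrative assumptions, not taken from the guidelines):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling for fines for prohibited AI practices under the EU AI Act:
    EUR 35 million or 7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Illustrative turnover figures (assumptions for demonstration only):
for turnover in (100_000_000, 500_000_000, 2_000_000_000):
    print(f"turnover €{turnover:,} -> fine ceiling €{max_fine_eur(turnover):,.0f}")
```

For any organization with worldwide annual turnover above €500 million, the 7% figure becomes the higher of the two and therefore sets the ceiling.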
A reminder that I cover this topic in depth during my AI governance training. If you haven't attended yet, the next cohort starts in March: register here.
➡️ General comments
→ The guidelines aim to increase legal clarity and are non-binding. The examples given in the guidelines are merely indicative. An authoritative interpretation of the AI Act may ultimately only be given by the Court of Justice of the EU.
→ Most AI systems that fall under one of the exceptions to the prohibitions in Article 5 will still qualify as high-risk AI systems. Two examples cited in the guidelines:
“Emotion recognition systems that do not fulfil the conditions for the prohibition in Article 5(1)(f) classify as high-risk AI systems according to Article 6.2. and Annex III, point (1)(c).
Certain AI-based scoring systems, such as those used for credit-scoring or assessing risk in health and life insurance, will be considered high-risk AI systems where they do not fulfil the conditions for the prohibition listed in Article 5(1)(c).”
→ Even if an AI system is not prohibited by the EU AI Act, it might still be unlawful under other EU laws, for example if it lacks a lawful basis to process personal data under the GDPR.
➡️ Article 5(1)(a)
“The following AI practices shall be prohibited: the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.”
→ Examples of subliminal techniques that may be prohibited if the other conditions listed in Article 5(1)(a) of the AI Act are fulfilled:
Visual subliminal messages;
Auditory subliminal messages;
Subvisual and subaudible cueing;
Embedded images;
Misdirection;
Temporal manipulation.
→ Regarding purposefully manipulative techniques:
“Another example is personalised manipulation where an AI system that creates and tailors highly persuasive messages based on an individual’s personal data or exploits other individual vulnerabilities influences their behaviour or choices to a point of creating significant harm.”
→ Intent is not an essential element when evaluating purposefully manipulative techniques:
“For example, regardless of whether the provider intends it, an AI system may learn manipulative techniques because the data on which it is trained contain many instances of manipulative techniques, or because reinforcement learning from human feedback can be ‘gamed’ through manipulative techniques.”
→ Regarding deceptive techniques, AI chatbots that present false or misleading information may fall under the prohibition:
“It may, for example, cover cases where a chatbot or deceptive AI-generated content presents false or misleading information in ways that aim to or have the effect of deceiving individuals and distorting their behaviour that would not have happened if they were not exposed to the interaction with the AI system or the deceptive AI generated content, in particular if this has not been visibly disclosed.”
→ Intent is also not an essential element when evaluating deceptive techniques:
“An example of deceptive techniques that may be deployed by AI is an AI chatbot that impersonates a friend of a person or a relative with synthetic voice and tries to pretend it is the person causing scams and significant harms.”
→ Regarding harm, the main types of harm relevant for Article 5(1)(a) of the AI Act include:
Physical;
Psychological;
Financial;
Economic.
→ Conditions to assess significant harm:
The severity of harm;
Context and cumulative effects;
Scale and intensity;
Affected persons' vulnerability;
Duration and reversibility.
➡️ Article 5(1)(b)
“The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;”
→ Examples of harm highlighted in the guidelines:
“AI systems [that] target the vulnerabilities of young users and use addictive reinforced schedules with the objective to keep them dependent on the service are particularly harmful for young persons and girls (…).”
“An AI system that is designed in an anthropomorphic way and simulates human-like emotional responses in its interactions with children can exploit children’s vulnerabilities in a manner that fosters unhealthy emotional attachment, manipulates engagement time and distorts children’s understanding of authentic human relationships (…).”
➡️ Article 5(1)(c)
“The following AI practices shall be prohibited: (c) the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
(ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity.”
→ The prohibition applies regardless of whether the AI system or the score is provided or used by public or private persons. For example:
“A private credit agency uses an AI system to determine the creditworthiness of people and decide whether an individual should obtain a loan for housing based on unrelated personal characteristics.”
➡️ Article 5(1)(d)
“The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.”
→ Attention to "based solely". To be prohibited, the risk assessment to assess or predict the risk of a natural person committing a crime must be based solely a) on the profiling of the person or b) on the assessment of their personality traits and characteristics.
→ Out of scope: “location-based or geospatial or place-based crime predictions is based on the place or location of crime or the likelihood that in those areas a crime would be committed. In principle, such policing does not involve an assessment of a specific individual. They therefore fall outside the scope of the prohibition.”
→ Criminal law only: the prohibition "applies only for the prediction of criminal offences, thus excluding administrative offences from its scope, the prosecution of which is, in principle, less intrusive for the fundamental rights and freedoms of people.”
➡️ Article 5(1)(e)
“The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.”
→ Targeted vs. untargeted:
“If a scraping tool is instructed to collect images or video containing human faces only of specific individuals or a pre-defined group of persons, then the scraping becomes targeted, for example to find one specific criminal or to identify a group of victims. Such scraping is not covered by the prohibition in Article 5(1)(e).”
“Where systems combine targeted searches for images or videos with untargeted searches, the untargeted scraping is prohibited.”
“Where an AI system receives a picture of a person and searches the face on the internet for matches, i.e. ‘reverse engineering image search engines’, this will be considered to be targeted scraping.”
→ Out of scope: the prohibition does not apply to "the untargeted scraping of biometric data other than facial images (such as voice samples).”
→ Out of scope: "AI systems which harvest large amounts of facial images from the internet to build AI models that generate new images about fictitious persons because such systems would not result in the recognition of real persons.”
➡️ Article 5(1)(f)
“The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.”
→ Exclusion of physical states: “Recital 18 of the AI Act clarifies that emotions or intentions do not include ‘physical states, such as pain or fatigue, including, for example, systems used in detecting the state of fatigue of professional pilots or drivers for the purpose of preventing accidents.’” Examples:
“The observation that a person is smiling is not emotion recognition;
Identifying whether a person is sick is not emotion recognition;
A TV broadcaster using a device that allows to track how many times its news presenters smile to the camera is not emotion recognition.”
→ Call center example: “Using voice recognition systems by a call center to track their customers’ emotions, such as anger or impatience, is not prohibited by Article 5(1)(f) AI Act (for example, to help the employees cope with certain angry customers).”
→ Safety reasons: “The notion of safety reasons within this exception should be understood to apply only in relation to the protection of life and health and not to protect other interests, for example property against theft or fraud.”
→ Out of scope: “AI systems inferring physical states such as pain and fatigue.”
➡️ Article 5(1)(g)
“The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorising of biometric data in the area of law enforcement.”
→ Examples of permissible labeling or filtering:
“the labelling of biometric data to avoid cases where a member of an ethnic group has a lower chance of being invited to a job interview because the algorithm was ‘trained’ based on data where that particular group performs worse, i.e. has worse outcomes than other groups.”
“the categorisation of patients using images according to their skin or eye colour may be important for medical diagnosis, for example cancer diagnoses.”
➡️ Article 5(1)(h)
“The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.”
→ Remote: “The use of biometric systems to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device, or having security access to premises is excluded from the concept of ‘remote’ (Recital 15 AI Act). This modality is used, for example, in access control.”
→ Examples of spaces that do not constitute publicly accessible spaces:
“Online spaces, since they do not constitute a physical space within the meaning of Article 3(44) AI Act.”
“Certain spaces meant to be accessed by a limited number of persons, such as factories, companies and workplaces with entry control or limitations to relevant employees or service providers.”
➡️ My comments
This is an extremely helpful document, as it discusses each legal condition that forms the prohibitions in Article 5 and provides numerous examples to illustrate what they mean in practice.
When analyzing the prohibitions closely, we see that they are actually