
Red Lines under the EU AI Act: Understanding the prohibition of biometric categorization for certain sensitive characteristics

Future of Privacy Forum · By Joana Bala · April 1, 2026


Blog 7 | Red Lines under the EU AI Act Series

This blog is the seventh of a series that explores prohibited AI practices under the EU AI Act and their interplay with existing EU law. You can find the whole series here.

The EU AI Act provides for rules on prohibited AI practices that the legislature considers incompatible with fundamental rights and European Union values. Article 5(1)(g) introduces a prohibition on biometric categorization for “certain sensitive characteristics”, focusing on systems used to categorize individuals “based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation”.

The European Commission guidelines on prohibited AI practices (hereinafter, “the Guidelines”) note that information, including sensitive data, can be extracted, deduced, or inferred from biometric data with or without the individual’s knowledge, leading to unfair or discriminatory treatment that undermines human dignity, privacy, and the principle of non-discrimination protected under the EU acquis. This provision also reflects longstanding concerns with regard to the risks associated with processing sensitive personal data, particularly where such processing may take place without the knowledge of the individual.

With this in mind, Section 1 unpacks the (limited) scope and key definitions of the prohibition, including the cumulative conditions required for the provision to apply. Section 2 takes a look at the situations that fall outside the scope of the prohibition, and, finally, Section 3 explores the interaction between the biometric categorization prohibition and the existing EU legal framework.

Several key takeaways emerge:

  • The AI Act prohibits specific biometric inference practices, not biometric categorization as such – Many forms of biometric categorization, such as categorization based on non-sensitive physical traits or for purposes that do not involve inferring the listed characteristics, do not fall within the prohibition.

  • The objective and design of the system are central to determining whether the prohibition applies – The prohibition is not triggered only by the presence of biometric analysis, but by the intended inference of protected attributes from biometric data.

  • The relationship between this prohibition and EU data protection law needs further clarification – The AI Act itself clarifies that it does not affect the application of the GDPR, and some processing of biometric data that results in biometric categorization can be lawful under Article 9(2) GDPR when its strict conditions are met. Further clarification is therefore needed on the intersection of the two laws.

  1. (Limited) Scope and key definitions

To trigger the prohibition under Article 5(1)(g) AI Act, five cumulative conditions must be met:

  • The AI system must be placed on the market, put into service, or used.

  • The AI system must be a biometric categorization system.

  • The AI system must categorize individuals.

  • The AI system must categorize individuals based on their biometric data.

  • The AI system must infer sensitive characteristics (e.g., race, political opinions, religious beliefs, and so on).

The first condition, relating to the placing on the market, putting into service or use of an AI system, applies to both providers and deployers within their respective responsibilities. The Guidelines also clarify that the prohibition does not cover the labelling or filtering of lawfully acquired biometric datasets, including for law enforcement purposes.

The requirement that all five conditions be fulfilled simultaneously is likely to be significant in practice. It may limit the scope of the prohibition and it raises questions about how it will be applied in specific cases, particularly where systems are designed to avoid explicit inference of sensitive traits while still enabling similar outcomes.
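Because the five conditions are cumulative, assessing a system against Article 5(1)(g) amounts to a conjunction test: if any single condition fails, the prohibition does not apply. The following minimal sketch (in Python, with purely illustrative names; it is an analytical aid, not a legal compliance tool) models that structure:

```python
from dataclasses import dataclass

# Hypothetical sketch: field names and structure are illustrative only.
# Characteristics listed in Article 5(1)(g) AI Act.
SENSITIVE_CHARACTERISTICS = {
    "race", "political_opinions", "trade_union_membership",
    "religious_or_philosophical_beliefs", "sex_life", "sexual_orientation",
}

@dataclass
class SystemAssessment:
    """Illustrative facts about a system, one field per cumulative condition."""
    placed_on_market_put_into_service_or_used: bool  # condition 1
    is_biometric_categorization_system: bool         # condition 2 (Art. 3(40))
    categorizes_individuals: bool                    # condition 3 (not group-level)
    based_on_biometric_data: bool                    # condition 4 (Art. 3(34))
    inferred_characteristics: set                    # condition 5

def prohibition_applies(a: SystemAssessment) -> bool:
    """True only if all five cumulative conditions are met simultaneously."""
    return (
        a.placed_on_market_put_into_service_or_used
        and a.is_biometric_categorization_system
        and a.categorizes_individuals
        and a.based_on_biometric_data
        and bool(a.inferred_characteristics & SENSITIVE_CHARACTERISTICS)
    )

# Example: group-level "attribute estimation" fails condition 3, so the
# prohibition does not apply even where a sensitive trait is estimated.
demo = SystemAssessment(True, True, False, True, {"race"})
assert not prohibition_applies(demo)
```

The conjunction structure also captures the design question raised above: a system engineered so that any one field evaluates to false, while still producing similar outcomes, would escape the test as written.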

1.1 Defining biometric categorization

Biometric categorization refers to assigning individuals to predefined groups based on their biometric data, rather than identifying or verifying their identity. Such categorization may be used, for example, to display targeted advertising or for statistical purposes, without necessarily identifying the individual.

Article 3(40) AI Act defines a biometric categorization system as an AI system that assigns natural persons to specific categories based on their biometric data, unless this function is ancillary to another commercial service and strictly necessary for objective technical reasons. Biometric data, defined in Article 3(34) AI Act, includes behavioural characteristics based on biometric features. As discussed in a previous blog, this definition is broader than the definition of biometric data in the GDPR. Categorization based on clothing, accessories, or social media activity falls outside the scope of biometric categorization under the AI Act.

The Guidelines further clarify that biometric categorization may involve categories based on physical characteristics such as facial structure or skin colour, some of which may correspond to sensitive characteristics protected under EU non-discrimination law. At the same time, the AI Act definition contains an important limitation: a system will not fall within the definition where the categorization is ancillary to another commercial service and strictly necessary for objective technical reasons. According to Recital 16 AI Act, an ancillary feature is one that is intrinsically linked to another commercial service and cannot be used independently of that service.

The Guidelines provide several examples to illustrate this distinction. For instance, filters that categorize facial or bodily features on online marketplaces, allowing consumers to preview a product on themselves, may constitute an ancillary feature because they are linked to the principal service of selling a product. Similarly, filters integrated into social media platforms that allow users to modify images or videos may also be considered ancillary features because they cannot be used independently of the platform’s content-sharing service.

The Guidelines also identify examples of systems that would fall within the prohibition. These include AI systems that analyse biometric data from photographs uploaded to social media platforms to categorize individuals by their assumed political orientation and send them targeted political messages. Another example concerns AI systems that analyse biometric data from photos to infer a person’s sexual orientation and use that information to serve targeted advertising. In both cases, the categorization would not be strictly necessary for objective technical reasons and therefore would fall within the definition of biometric categorization under the AI Act. Importantly, for the prohibition to apply, systems performing such categorization must fall under the definition of “AI system” pursuant to the AI Act.

The risks associated with biometric categorization also reflect broader concerns under EU data protection law. The EDPB has clarified that inferences about sensitive characteristics may themselves constitute special categories of personal data under Article 9 GDPR. Likewise, the Court of Justice of the European Union has held that processing which allows information falling within the Article 9(1) GDPR categories to be revealed must be regarded as processing of special categories of personal data (Meta Platforms and Others, C-252/21). However, the prohibition on processing sensitive data under the GDPR is subject to several exceptions, such as explicit consent.

The EDPB and the European Data Protection Supervisor (EDPS) have taken a similar position in their Joint Opinion 5/2021 on the Proposal for the AI Act. They called for a broader prohibition of certain biometric AI practices. In particular, they called for a general ban on the use of AI for automated recognition of human features in publicly accessible spaces, including faces, gait, fingerprints, DNA, voice, and other biometric and behavioural signals.

1.2 For the prohibition to apply, categorization must take place at the level of the individual

Another essential condition for the prohibition to apply is that the system must categorize individual natural persons based on their biometric data. Importantly, the categorization must take place at the level of the individual. If biometric analysis is performed without categorizing specific individuals, the prohibition does not apply. For example, the prohibition would not be triggered where a system analyzes biometric information only to categorize an entire group without identifying or singling out individual persons. Out-of-scope examples include AI systems that conduct “attribute estimation”, sometimes referred to as demographic analysis, by assigning characteristics such as age, gender or ethnicity based on biometric features such as facial characteristics, height, skin, eye or hair colour, or other features such as a visible scar or distinctive tattoo.

1.3 “Sensitive characteristics” under the AI Act

The prohibition under Article 5(1)(g) AI Act applies only when a biometric categorization system is used to deduce or infer specific sensitive characteristics, such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

This means that not all biometric categorization systems fall within the scope of the prohibition. Rather, the prohibition targets systems that attempt to derive particularly sensitive characteristics from biometric data.

For example, a system that claims to infer an individual’s race from their voice would fall within the scope of the prohibition. By contrast, a system that categorizes individuals according to physical traits such as skin or eye colour, or a system analysing the DNA of crime victims to determine their origin, would not be prohibited under Article 5(1)(g). Another example provided by the Guidelines is a biometric categorization system that claims to infer a person’s religious orientation from tattoos or facial characteristics; such a system would fall within the prohibition.

  2. Biometric categorization for bias detection: What falls outside the scope of the prohibition?

The prohibition in Article 5(1)(g) AI Act does not apply to all uses of biometric categorization. In particular, it does not cover AI systems used for the labelling or filtering of lawfully acquired biometric datasets, including in law enforcement contexts. As explained in Recital 30 AI Act, such uses may include sorting images by biometric characteristics, such as hair or eye colour.

The Guidelines note that labelling or filtering biometric datasets may be necessary to ensure that datasets used to train AI systems are representative across demographic groups. Where training data contains systematic differences between groups, for example, due to historical bias in data collection, algorithms may replicate those biases and potentially lead to discriminatory outcomes. In such cases, labelling data according to certain characteristics may be necessary to improve data quality and prevent discrimination. In some circumstances, the AI Act may even require such labelling operations in order to comply with the requirements applicable to high-risk AI systems (see Article 10 AI Act).

The Guidelines provide several examples of permissible uses. One example concerns the labelling of biometric data to prevent recruitment algorithms from disadvantaging individuals from certain ethnic groups, where historical training data reflects biased outcomes. Another example involves categorizing patients’ images by skin or eye colour, which may be relevant to medical diagnosis, including certain cancer diagnoses.

The exception also applies in law enforcement contexts where biometric datasets have been lawfully acquired. For example, law enforcement authorities may use AI systems to label or filter datasets suspected of containing child sexual abuse material. Such systems may help detect and redact sensitive information in images or assist investigations by labelling biometric features such as gender, age, eye or hair colour, scars, markings, or tattoos in order to identify victims or establish links between cases. Similarly, filtering and labelling features such as hand characteristics or distinctive tattoos may help identify possible suspects in law enforcement contexts.
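To make the representativeness rationale described above concrete, the following is a minimal, hypothetical sketch of the kind of check that labelling a dataset by a demographic characteristic enables: computing group shares and flagging groups that fall well below an even split. The labels, threshold, and function name are illustrative assumptions, not taken from the Guidelines:

```python
from collections import Counter

def representation_report(group_labels, tolerance=0.5):
    """Flag groups whose share is below `tolerance` times an even split.

    `group_labels` holds one demographic label per training example, as
    produced by the kind of dataset-labelling operation the exception permits.
    """
    counts = Counter(group_labels)
    total = len(group_labels)
    even_share = 1.0 / len(counts)  # share each group would hold if balanced
    return {
        group: {
            "share": count / total,
            "underrepresented": count / total < tolerance * even_share,
        }
        for group, count in counts.items()
    }

# Example: group "B" holds 1/8 of the data against an even share of 1/2,
# signalling that the training set may need rebalancing before use.
print(representation_report(["A"] * 7 + ["B"]))
```

A report of this kind is only possible once the data has been labelled by the relevant characteristic, which is precisely why the AI Act carves such labelling and filtering out of the prohibition.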

  3. Interplay with other EU laws

This prohibition must be understood in the context of the existing EU data protection framework.

It is worth noting that the Guidelines refer to an earlier explanation provided by the Article 29 Working Party (the precursor to the EDPB) when describing “biometric categorization” in its Opinion on developments in biometric technologies. Article 3(40) AI Act provides a legal definition, describing a biometric categorization system as an AI system that assigns natural persons to specific categories on the basis of their biometric data, while also specifying an exclusion where such categorization is ancillary to another commercial service and strictly necessary for objective technical reasons.

By contrast, the Article 29 Working Party explains biometric categorization as the process of determining whether the biometric data of an individual belongs to a group with predefined characteristics, emphasizing that the objective is not to identify or verify the individual but to assign them automatically to a category, for example, to display different advertisements depending on the perceived age or gender of the person. While both definitions describe categorization based on biometric data rather than identification, the AI Act establishes a regulatory definition determining the scope of the prohibition, whereas the Article 29 Working Party description provides a conceptual explanation of how biometric categorization systems operate in practice.

Furthermore, Article 9(1) GDPR establishes a general prohibition on the processing of special categories of personal data, subject to exceptions, meaning that some processing of biometric data in the context of biometric categorization can be lawful under the GDPR as long as its strict conditions are respected. The AI Act introduces an additional layer of restriction, which raises important conflict-of-law questions with the GDPR. As analyzed in the first blog of this series, the GDPR takes priority in application (the AI Act “shall not affect” the GDPR). Further guidance on the intersection of the GDPR and the AI Act in this respect is needed.

The Guidelines clarify that AI systems intended to categorize individuals based on biometric data to infer attributes protected under Article 9(1) GDPR are classified as high-risk AI systems, provided they are not already prohibited under Article 5 AI Act. At the same time, Article 5(1)(g) further limits the possibilities for lawful processing of personal data under EU data protection law, including the GDPR, the Law Enforcement Directive (LED), and Regulation (EU) 2018/1725 (EUDPR). In particular, the provision excludes the use of biometric data to categorize natural persons in order to infer sensitive characteristics such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, subject to the limited exception for the labelling or filtering of lawfully acquired biometric datasets.

The prohibition is also consistent with Article 11(3) LED, which explicitly prohibits profiling that results in discrimination on the basis of special categories of personal data, including race, ethnic origin, political opinions, religious beliefs or sexual orientation.

  4. Closing reflections and key takeaways

The AI Act prohibits specific biometric inference practices, not biometric categorization as such

Article 5(1)(g) AI Act does not prohibit biometric categorization in general. It prohibits the placing on the market, putting into service, or use of AI systems that categorize individuals based on biometric data for the purpose of inferring certain sensitive characteristics, such as race, political opinions, religious beliefs, trade union membership, sex life or sexual orientation. The prohibition applies only where all cumulative conditions of Article 5(1)(g) are met. This means that many forms of biometric categorization, such as categorization based on non-sensitive physical traits or for purposes that do not involve inferring the listed characteristics, do not fall within the prohibition.

The objective and design of the system are central to determining whether the prohibition applies

The Guidelines place significant emphasis on the purpose and functionality of the AI system, in particular, whether the system is designed to deduce or infer one of the sensitive characteristics listed in the provision. This means that the prohibition is not triggered only by the presence of biometric analysis, but by the intended inference of protected attributes from biometric data. The examples provided in the Guidelines illustrate this distinction: systems that claim to infer race from voice or religious beliefs from facial features would fall within the prohibition, whereas systems categorizing individuals based on traits such as eye or hair colour would not.

Context and use matter for determining the scope of the prohibition

The prohibition applies only where individuals are individually categorized based on their biometric data, and where the categorization results in the inference of the listed sensitive characteristics. Systems that analyse biometric data at an aggregated level without singling out individuals would not meet this condition. Similarly, the AI Act explicitly excludes certain practices from the scope of the prohibition, including the labelling or filtering of lawfully acquired biometric datasets, for example, where such operations are carried out to improve dataset quality, mitigate bias in AI training data, support medical diagnosis or assist law enforcement investigations.

The relationship between this prohibition and EU data protection law needs further clarification

Finally, the prohibition must be understood in the broader context of EU data protection and non-discrimination law. The GDPR already restricts the processing of special categories of personal data under Article 9(1), while the AI Act introduces an additional regulatory layer by prohibiting certain biometric inference practices altogether. Given that the AI Act itself establishes that it does not affect the GDPR, further guidance is needed for those cases where processing of biometric data would be lawful under Article 9(2) GDPR, but prohibited under the AI Act.
