How to Classify Your AI Systems Under the EU AI Act: A Practical Guide
Not sure if your AI system is high-risk, limited-risk, or minimal-risk? This step-by-step classification guide covers every Annex III use case with real examples.
Why Classification Is the Foundation of AI Act Compliance
The EU AI Act is a risk-proportionate regulation. The obligations that apply to your AI system depend entirely on its risk classification. Get the classification wrong, and you either over-invest in compliance for a minimal-risk system, or — more dangerously — you under-invest and face regulatory action for a high-risk system you failed to recognise.
Classification is more complex than it first appears. The Act covers AI systems that you develop, place on the market, put into service, or use, which means obligations do not fall on developers alone: organisations that simply purchase and deploy third-party AI tools (deployers, in the Act's terminology) must assess those tools as well. A company using automated CV screening software from an HR vendor must ensure that software is classified and compliant.
What Is an AI System Under the Act?
Article 3(1) defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
The key indicators are:
- Machine-based (not human decision-making)
- Produces outputs (predictions, recommendations, decisions, content)
- Those outputs influence physical or virtual environments
- Operates with some degree of autonomy
This definition captures machine learning models, neural networks, and statistical systems, but it excludes traditional software that follows purely deterministic, human-defined rules without any inference (Recital 12). A rules-based decision tree with hardcoded logic is generally not an AI system under the Act. A model trained on data to make predictions generally is.
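To make this screen concrete, here is a minimal Python sketch of an inventory intake check built on these indicators. The SystemProfile fields and is_ai_system helper are illustrative names, not drawn from the Act or any official tool, and a borderline result still needs legal review.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Intake answers for one system in your inventory (hypothetical schema)."""
    machine_based: bool            # runs as software/hardware, not a human process
    infers_outputs: bool           # derives outputs from inputs (ML, statistical, or logic-based inference)
    produces_outputs: bool         # predictions, recommendations, decisions, or content
    influences_environment: bool   # outputs affect physical or virtual environments
    some_autonomy: bool            # operates without step-by-step human control

def is_ai_system(profile: SystemProfile) -> bool:
    """Rough screen against the Article 3(1) indicators.

    A purely deterministic, hardcoded rules engine fails the
    infers_outputs test; a model trained on data passes it.
    """
    return all([
        profile.machine_based,
        profile.infers_outputs,
        profile.produces_outputs,
        profile.influences_environment,
        profile.some_autonomy,
    ])
```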
The Four-Tier Risk Structure
Tier 1: Unacceptable Risk (Prohibited)
Article 5 lists practices that are entirely prohibited and cannot be made compliant. These include:
- Cognitive behavioural manipulation (subliminal techniques, exploiting vulnerabilities)
- Social scoring (by public and private actors alike)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
- Biometric categorisation systems inferring sensitive characteristics
- Predicting the risk of a person committing a criminal offence based solely on profiling or personality traits
- Creation of facial recognition databases by untargeted scraping
If your system does any of these things, it is not a compliance question: the practice must be discontinued. The prohibitions have applied since 2 February 2025.
Tier 2: High Risk
High-risk systems are defined in two ways under Article 6: the system is a product, or the safety component of a product, covered by the EU harmonisation legislation listed in Annex I and required to undergo third-party conformity assessment; or the system falls under one of the specific use cases listed in Annex III.
Tier 3: Limited Risk
Systems with specific transparency obligations: chatbots must disclose they are AI; deepfakes and synthetic media must be labelled; AI-generated content must be marked where technically feasible.
Tier 4: Minimal Risk
All other AI systems — spam filters, recommendation algorithms, inventory optimisation tools, etc. — with no specific obligations beyond good practice.
Annex I: AI in Safety Components of Regulated Products
If your AI system is a product, or the safety component of a product, covered by the EU product safety legislation listed in Annex I, and that product must undergo third-party conformity assessment, the system is automatically high-risk. The regulated products listed in Annex I include:
- Machinery (Machinery Directive)
- Toys
- Recreational craft
- Lifts
- Equipment and protective systems in explosive atmospheres
- Radio equipment
- Pressure equipment
- Medical devices (including in vitro diagnostic medical devices)
- Automotive safety components (General Safety Regulation)
- Agricultural and forestry vehicles
- Marine equipment
- Civil aviation equipment
- Two- and three-wheel vehicles
Annex III: The Eight High-Risk Application Domains
1. Biometric Identification and Categorisation
High-risk use cases: Remote biometric identification (face, voice, or gait recognition) that identifies individuals by comparing their biometric data against a database; biometric categorisation inferring sensitive or protected characteristics; emotion recognition systems (where not already prohibited under Article 5).
Examples: Employee attendance via facial recognition matched against a staff database; identifying individuals in CCTV footage; inferring characteristics such as age or disability from facial analysis.
Not high-risk: One-to-one biometric verification whose sole purpose is to confirm that a person is who they claim to be, such as fingerprint login on a phone or selfie-to-ID matching at customer onboarding, which Annex III expressly excludes.
2. Critical Infrastructure Management
High-risk use cases: AI as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.
Examples: AI systems managing electrical grid load balancing; AI optimising water treatment operations; autonomous traffic management systems.
Not high-risk: General predictive maintenance AI for non-critical industrial equipment; energy efficiency monitoring without operational control.
3. Education and Vocational Training
High-risk use cases: AI determining access to educational institutions; AI evaluating learning outcomes; AI monitoring and detecting prohibited student behaviour; AI assessing students in exams.
Examples: Automated admissions scoring tools; exam proctoring software that detects cheating behaviour; essay grading AI for high-stakes assessments.
Not high-risk: Personalised learning recommendation systems that suggest content without making binding assessments; scheduling and administration tools.
4. Employment, Workers Management, and Self-Employment
High-risk use cases: AI used for recruitment and CV screening; AI determining terms and conditions of employment; AI for performance and behaviour monitoring; AI for promotion, dismissal, or task allocation decisions.
Examples: Applicant tracking systems (ATS) that rank candidates; video-interview analysis tools (note that inferring candidates' or employees' emotions falls under the Article 5 workplace prohibition); productivity monitoring software that informs performance reviews; algorithmic scheduling systems.
Not high-risk: HR software that tracks leave balances without AI inference; job posting tools that automate distribution without screening.
This category catches most organisations using AI in HR. If your ATS uses ML to rank CVs, if your performance management software uses AI to infer productivity, if you use a tool to analyse video interviews — these are almost certainly high-risk.
5. Essential Private and Public Services
High-risk use cases: AI evaluating creditworthiness or establishing credit scores (with an exception for detecting financial fraud); AI for risk assessment and pricing in life and health insurance; AI determining eligibility for essential public assistance benefits and services; AI for evaluating and classifying emergency calls and dispatch prioritisation.
Examples: Automated credit scoring engines; AI-driven insurance underwriting; benefits eligibility determination tools; fraud detection systems that can suspend accounts.
Not high-risk: Fraud detection that flags transactions for human review without automated blocking; customer service chatbots that do not make eligibility determinations.
6. Law Enforcement
High-risk use cases: AI assessing the risk of individuals becoming victims of criminal offences; AI used in polygraphs and similar tools; AI evaluating the reliability of evidence in criminal investigations or prosecutions; AI assessing the risk of a person offending or re-offending (where not based solely on profiling, which is prohibited); AI profiling individuals in the course of detecting, investigating, or prosecuting criminal offences.
This category primarily affects law enforcement agencies. Private organisations are not typically affected unless they are developing tools for law enforcement use.
7. Migration, Asylum, and Border Control
High-risk use cases: AI used as polygraphs or similar lie-detection tools in the migration context; AI assessing security, irregular-migration, or health risks posed by individuals; AI assisting authorities in examining applications for asylum, visas, and residence permits, including assessing the reliability of evidence; AI detecting, recognising, or identifying individuals in this context (other than verifying travel documents).
Primarily affects government agencies and their contractors.
8. Administration of Justice and Democratic Processes
High-risk use cases: AI assisting judicial authorities in researching and interpreting facts and the law and in applying the law to the facts (including when used in alternative dispute resolution); AI intended to influence the outcome of an election or referendum, or voting behaviour.
Examples: AI legal research tools used by courts to identify relevant precedents; AI systems used in legal proceedings to assess credibility or predict outcomes.
Not high-risk: Legal research tools used by private lawyers for internal research (as opposed to court proceedings); e-voting systems without AI inference components.
The Classification Decision Tree
For each AI system in your inventory, ask these questions in order (a minimal code sketch of the same logic follows the list):
- Does the system perform any of the Article 5 prohibited practices? If yes → prohibited, must stop.
- Is the system a safety component of an Annex I regulated product? If yes → high-risk.
- Is the system used in an Annex III application domain? If yes → high-risk, unless the Article 6(3) filtering rule applies (see below).
- Does the system interact directly with people or generate synthetic content, so that its AI nature must be disclosed? If yes → at minimum limited-risk transparency obligations under Article 50.
- None of the above → minimal-risk, no specific obligations.
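Here is that decision tree as a minimal Python sketch. The RiskTier enum and classify function are illustrative names rather than official tooling, and the annex_iii_filter_applies input anticipates the Article 6(3) filtering rule described in the next section. The hard legal work is producing the boolean answers, not running the function.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk: must stop"
    HIGH = "high-risk: full compliance obligations"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no specific obligations"

def classify(performs_prohibited_practice: bool,
             annex_i_safety_component: bool,
             annex_iii_use_case: bool,
             annex_iii_filter_applies: bool,
             interacts_or_generates_content: bool) -> RiskTier:
    """Walk the classification questions in order for one system."""
    if performs_prohibited_practice:
        return RiskTier.PROHIBITED       # Article 5: no compliance path
    if annex_i_safety_component:
        return RiskTier.HIGH             # Annex I product safety route
    if annex_iii_use_case and not annex_iii_filter_applies:
        return RiskTier.HIGH             # Annex III route, filter not met
    if interacts_or_generates_content:
        return RiskTier.LIMITED          # Article 50 transparency duties
    return RiskTier.MINIMAL
```

For an ML-based CV screening tool, for example, classify(False, False, True, False, True) returns RiskTier.HIGH, matching the employment analysis above.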
The Annex III Scope Filtering Rule
Even for systems in Annex III application domains, Article 6(3) provides a filtering mechanism: a system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making. The filter is available where the system only performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns or deviations from them without replacing or influencing human assessment, or performs a purely preparatory task. Two caveats apply: a system that profiles natural persons is always high-risk, and a provider relying on the filter must document its assessment before placing the system on the market (Article 6(4)). The Commission is required to publish guidelines on the practical implementation of this filter (Article 6(5)), which providers can use to narrow the classification of systems with marginal involvement in Annex III domains.
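The filter slots directly into the decision-tree sketch above as the annex_iii_filter_applies input. The parameter names below are informal paraphrases of the Article 6(3) conditions, not statutory language.

```python
def article_6_3_filter_applies(performs_profiling: bool,
                               narrow_procedural_task: bool,
                               improves_prior_human_work: bool,
                               detects_patterns_only: bool,
                               preparatory_task_only: bool) -> bool:
    """Sketch of the Article 6(3) derogation for Annex III systems.

    Returns True only if at least one derogation condition holds and the
    system does not profile natural persons. A True result still triggers
    the Article 6(4) documentation duty before market placement.
    """
    if performs_profiling:
        return False  # profiling of natural persons is always high-risk
    return any([
        narrow_procedural_task,      # performs only a narrow procedural task
        improves_prior_human_work,   # refines a result a human has already produced
        detects_patterns_only,       # flags deviations without replacing human assessment
        preparatory_task_only,       # preliminary step before a human-led assessment
    ])
```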
This is a complex legal analysis. GeraCompliance provides an automated classification tool that walks through the Annex III criteria systematically and generates a documented justification for the risk classification — providing an audit trail for regulators and a defensible basis for your compliance programme.