EU AI Act Risk Classification
The EU AI Act classifies AI systems into four risk tiers: unacceptable risk (prohibited), high risk (heavy compliance obligations), limited risk (transparency obligations), and minimal risk (no specific requirements). The tier your AI system falls into determines the compliance obligations you must meet before deploying it in EU markets.
The four risk tiers
Unacceptable risk — Prohibited
Prohibited practices include AI that manipulates persons through subliminal techniques, AI that exploits vulnerabilities of specific groups, social scoring, and (with narrow exceptions) real-time remote biometric identification in publicly accessible spaces. These prohibitions apply from February 2025.
High risk — Full compliance obligations
AI used in critical infrastructure, education, employment (CV screening, interview scoring), essential services (credit scoring, insurance), law enforcement, migration, and the administration of justice. The full Article 9–17 obligations apply from August 2026.
Limited risk — Transparency only
Chatbots and AI that generate synthetic content must disclose they are AI. Users must be told they are interacting with an AI system unless it is obvious.
Minimal risk — No specific obligations
AI for spam filters, inventory management, personalisation, and most other everyday applications. No mandatory compliance steps, though good practice guidelines are encouraged.
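The tiered scheme above is, at its core, an ordered decision procedure: check the prohibited practices first, then the high-risk categories, then the transparency cases, and default to minimal risk. The sketch below illustrates that ordering in Python. The category labels (`cv_screening`, `chatbot`, and so on) are hypothetical placeholders for illustration only; a real assessment must map a system to the Act's Article 5 practices and Annex III categories.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # full compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical, simplified label sets for illustration; not an
# exhaustive or authoritative mapping of the Act's categories.
PROHIBITED_PRACTICES = {"subliminal_manipulation", "social_scoring",
                        "realtime_remote_biometric_id"}
HIGH_RISK_USES = {"critical_infrastructure", "cv_screening",
                  "interview_scoring", "credit_scoring", "law_enforcement"}
TRANSPARENCY_USES = {"chatbot", "synthetic_content_generation"}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a simplified use-case label,
    checking tiers in order of decreasing severity."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note that the order of the checks matters: a system matching a prohibited practice is banned regardless of any other use it might also fit, so the most severe tier is tested first.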
Classify your AI and build your compliance plan
The GeraCompliance AI Act sprint covers risk classification, Article 9 risk management, and conformity documentation.
Start AI Act sprint