AI System Risk Classification Template
A structured decision tree for classifying any AI system into one of the EU AI Act's four risk tiers (unacceptable, high, limited, or minimal), which determines your compliance obligations.
Quick Answer
EU AI Act risk classification follows a four-tier hierarchy — unacceptable (banned), high-risk (Annex I/III), limited (transparency obligations), minimal (voluntary codes) — and must be reassessed when use-case or deployment context changes.
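The tier hierarchy above can be sketched as a simple ordered walk from most to least restrictive. This is an illustrative model only: the function name `classify` and its three boolean inputs are hypothetical simplifications of the Act's actual multi-factor tests, which require legal analysis per system.

```python
from enum import IntEnum


class RiskTier(IntEnum):
    """EU AI Act tiers, ordered so a higher value means a stricter tier."""
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3


def classify(prohibited_practice: bool,
             annex_i_or_iii: bool,
             transparency_trigger: bool) -> RiskTier:
    """Walk the tiers from most to least restrictive; first match wins."""
    if prohibited_practice:      # Article 5 prohibited practice -> banned
        return RiskTier.UNACCEPTABLE
    if annex_i_or_iii:           # Annex I product or Annex III use case
        return RiskTier.HIGH
    if transparency_trigger:     # e.g. chatbot or deepfake -> transparency duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL     # voluntary codes of conduct
```

Because the checks run top-down, a system matching several criteria always lands in the strictest applicable tier, which mirrors how the Act's categories interact.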
Compliance Checklist (8 items)
Penalty if not compliant
Misclassification leading to unmet high-risk obligations: up to €15 million or 3% of global annual turnover, whichever is higher (Article 99 of the final Act; the 2021 draft cited €30 million / 6%). Deploying a practice prohibited under Article 5 carries up to €35 million or 7%.
Frequently Asked Questions
What are the four risk tiers in the EU AI Act?
Unacceptable risk (prohibited, Article 5), high risk (Annex I and Annex III, extensive obligations), limited risk (transparency obligations only, e.g., chatbots and deepfakes), and minimal risk (voluntary codes of conduct apply).
What happens if my AI system spans multiple risk categories?
The highest applicable risk tier governs. If a system could be classified as high-risk under one criterion and limited-risk under another, the high-risk obligations apply in full.
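The "highest tier governs" rule amounts to taking a maximum over an ordered scale. A minimal sketch, assuming a hypothetical helper `governing_tier` and plain string labels:

```python
# Tiers ordered from least to most restrictive.
TIER_ORDER = ["minimal", "limited", "high", "unacceptable"]


def governing_tier(applicable_tiers: list[str]) -> str:
    """Return the most restrictive tier among all that apply."""
    return max(applicable_tiers, key=TIER_ORDER.index)


governing_tier(["limited", "high"])  # -> "high": high-risk obligations apply in full
```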
Do I need to reclassify if I update my AI model?
Yes. Material changes to intended purpose, training data, performance characteristics, or deployment context can change the risk classification and must trigger a reassessment.
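The reassessment trigger can be expressed as a diff over the material fields named above. The field names and the `needs_reassessment` helper below are illustrative assumptions, not terms defined by the Act:

```python
# Fields whose change is treated as material for classification purposes
# (hypothetical labels mirroring the list in the answer above).
MATERIAL_FIELDS = ("intended_purpose", "training_data",
                   "performance_characteristics", "deployment_context")


def needs_reassessment(previous: dict, current: dict) -> bool:
    """True if any material field changed between two system descriptions."""
    return any(previous.get(f) != current.get(f) for f in MATERIAL_FIELDS)
```

In practice such a check would sit in a release gate, so that shipping a retrained model or a new deployment context cannot bypass reclassification.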
Need this turned into a real document?
Our compliance sprint service delivers production-ready documents tailored to your organisation in 5–15 business days. A senior compliance specialist reviews every document before delivery.