GeraCompliance AI utility · EU AI Act

EU AI Act risk checker

The EU AI Act classifies AI systems into risk tiers that determine compliance obligations. Use this page to understand where your AI system sits, what documentation you need, and what GeraCompliance can generate for you automatically.

EU AI Act risk tiers explained

Prohibited

Real-time remote biometric identification in publicly accessible spaces, social scoring, emotion recognition in the workplace or in education, and subliminal manipulation.

Required action: Cannot deploy. No compliance path.

High-Risk

AI in recruitment, credit scoring, healthcare diagnostics, law enforcement, critical infrastructure, education assessment.

Required action: Full conformity assessment, technical documentation, CE marking, notified body involvement for some categories.

Limited-Risk

Chatbots, deepfake generators, emotion detection (non-workplace).

Required action: Transparency obligations only: disclose AI use to users.

Minimal-Risk

Most AI applications: spam filters, AI games, recommendation engines, productivity tools.

Required action: No mandatory obligations. Voluntary codes of practice recommended.

GPAI Models

General-purpose AI models (LLMs, multimodal models). Models trained with more than 10^25 FLOPs of compute are presumed to pose systemic risk.

Required action: Technical documentation and a copyright policy; systemic-risk models additionally need adversarial testing and incident reporting.
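The tier logic above can be sketched as a simple lookup. This is an illustrative assumption only, not legal advice or GeraCompliance's actual classifier: the example use cases, the `classify` helper, and the `gpai_systemic_risk` check are simplified stand-ins for a proper Annex III analysis.

```python
# Illustrative sketch: simplified mapping of example use cases to the
# EU AI Act risk tiers summarized above. Real classification requires
# legal analysis; these categories are assumptions for demonstration.

RISK_TIERS = {
    "social scoring": "prohibited",
    "recruitment screening": "high-risk",
    "credit scoring": "high-risk",
    "customer chatbot": "limited-risk",
    "spam filter": "minimal-risk",
}

# Training-compute threshold above which a GPAI model is presumed
# to pose systemic risk under the Act.
SYSTEMIC_RISK_FLOPS = 1e25

def classify(use_case: str) -> str:
    """Return the risk tier for a known example use case, else 'unclassified'."""
    return RISK_TIERS.get(use_case.lower().strip(), "unclassified")

def gpai_systemic_risk(training_flops: float) -> bool:
    """True if a GPAI model's training compute exceeds the systemic-risk threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(classify("Credit scoring"))   # high-risk
print(gpai_systemic_risk(3e25))     # True
```

In practice the boundary cases (e.g. a chatbot embedded in a recruitment workflow) are exactly where automated classification needs human review.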

What does GeraCompliance automate?

  • AI system risk classification (prohibited / high-risk / limited / minimal / GPAI)
  • Technical documentation package for high-risk systems
  • GPAI model transparency document
  • GDPR intersection gap assessment
  • Human oversight control checklist
  • Compliance sprint report delivered in under 48 hours

Get your EU AI Act classification

Automated. Fixed fee. 48-hour delivery. Built for product teams.

Start compliance sprint