EU AI Act Compliance Deadline 2026: What You Need to Do Right Now
The EU AI Act's high-risk AI provisions come into force on 2 August 2026. Here is the definitive action checklist for every organisation using AI systems in the EU.
The Clock Is Running
The EU AI Act entered into force on 1 August 2024. The full compliance framework is phasing in across multiple deadlines:
- 2 February 2025: Prohibited AI systems must be taken offline (AI social scoring, real-time biometric surveillance in public spaces, emotion recognition in workplaces and educational institutions)
- 2 August 2025: Obligations for General-Purpose AI (GPAI) models apply, including the most capable models designated as "systemic risk" models
- 2 August 2026: Obligations for high-risk AI systems come into full force — this is the critical deadline for most organisations
- 2 August 2027: Certain AI systems embedded in regulated products (medical devices, machinery, vehicles) get an additional 12-month grace period
With the August 2026 deadline now less than four months away, organisations that have not begun their compliance programmes are running out of time. This is not a soft deadline: penalties for the most serious violations can reach €35 million or 7% of global annual turnover, whichever is higher.
Step 1: Complete Your AI System Inventory
You cannot comply with rules you do not know apply to you. The first step is a complete inventory of every AI system your organisation uses, develops, or deploys. This includes:
- Custom-built AI models developed in-house
- AI features purchased as part of SaaS products (HR systems with automated CV screening, CRM systems with predictive scoring, financial tools with automated decision-making)
- AI systems integrated via API (OpenAI, Anthropic, Google, Microsoft)
- AI components in physical products (cameras with facial recognition, automated vehicles, medical devices with diagnostic AI)
For each system, document: what it does, who built it, what data it uses, what decisions it influences or makes, and what the consequences of errors would be for the people affected.
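A structured record keeps the inventory auditable and ready for later classification work. Here is a minimal sketch in Python; the schema and field names are our own suggestion, not anything mandated by the Act:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One inventory entry per AI system (illustrative schema, not mandated by the Act)."""
    name: str
    purpose: str                # what the system does
    builder: str                # in-house team or external vendor
    data_sources: list[str]     # categories of input/training data
    decisions_influenced: str   # decisions the system makes or informs
    error_consequences: str     # impact of errors on affected persons
    last_reviewed: date = field(default_factory=date.today)

registry: list[AISystemRecord] = [
    AISystemRecord(
        name="CV screening module",
        purpose="Ranks job applications before human review",
        builder="Third-party SaaS vendor",
        data_sources=["applicant CVs", "historical hiring outcomes"],
        decisions_influenced="Which candidates proceed to interview",
        error_consequences="Qualified candidates wrongly rejected",
    ),
]
```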
Step 2: Classify Each System by Risk Tier
The EU AI Act organises AI systems into four risk tiers:
Unacceptable Risk — Prohibited
These systems are illegal and must not be operated:
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
- AI that manipulates people through subliminal techniques or exploits vulnerabilities
- Social scoring of individuals by public or private actors
- Emotion recognition in workplaces and educational settings
- AI that creates or expands facial recognition databases by untargeted scraping of images
- Predictive policing systems that profile individuals based on personal characteristics
High Risk — Full Compliance Requirements
High-risk status attaches in two ways: to AI used as a safety component of products already covered by EU harmonisation legislation (Annex I), and to standalone AI systems in the application domains listed in Annex III:
- Biometric identification and categorisation (not prohibited, but heavily regulated)
- Critical infrastructure management (electricity, water, gas, transport)
- Educational and vocational training (admission, assessment, student management)
- Employment, workers management, and access to self-employment (CV screening, performance monitoring, termination decisions)
- Essential private and public services (credit scoring, social benefits, emergency services dispatch)
- Law enforcement (polygraphs, profiling, evidence reliability assessment)
- Migration, asylum, and border control
- Administration of justice and democratic processes
If your AI system falls into any of these categories, you face the full suite of high-risk obligations.
Limited Risk — Transparency Requirements
Chatbots, deepfake generators, and AI-generated content tools must disclose that users are interacting with AI. Specific labelling requirements apply to synthetic media.
Minimal Risk
Most AI systems — spam filters, recommendation engines, inventory management — face no specific obligations beyond good practice.
Step 3: Implement High-Risk Requirements
For each high-risk AI system, you must implement and document the following by 2 August 2026:
Risk Management System (Article 9)
A continuous risk management process that: identifies and analyses known and foreseeable risks, estimates and evaluates risks that may emerge in use, evaluates other risks based on post-market data, and adopts suitable risk management measures. This must be documented and updated throughout the system's lifecycle.
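Many teams operationalise this as a living risk register that is re-scored as post-market data arrives. A minimal sketch; the scoring scale and field names are illustrative assumptions, not prescribed by Article 9:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One identified risk; re-scored over the system's lifecycle (illustrative)."""
    description: str
    likelihood: int   # 1 (rare) .. 5 (frequent)
    severity: int     # 1 (negligible) .. 5 (severe harm to affected persons)
    mitigation: str
    residual_likelihood: int
    residual_severity: int

    def score(self) -> int:
        return self.likelihood * self.severity

    def residual_score(self) -> int:
        return self.residual_likelihood * self.residual_severity

risk = RiskEntry(
    description="Model under-ranks applicants with non-EU work histories",
    likelihood=3, severity=4,
    mitigation="Retrain with balanced data; human review of all rejections",
    residual_likelihood=2, residual_severity=2,
)
assert risk.residual_score() < risk.score()  # mitigation must reduce the risk
```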
Data Governance (Article 10)
Training, validation, and testing data must be subject to documented data governance practices addressing: data collection methodology, data preparation operations, formulation of assumptions, assessment of the availability, quantity, and suitability of the data sets, examination of potential biases, appropriate statistical properties, and special category data handling. Data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose.
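One simple starting signal for the bias examination is checking how subgroups of a protected attribute are represented in the training data. A sketch (representation ratios are only one of many checks Article 10 implies):

```python
from collections import Counter

def subgroup_balance(records: list[dict], attribute: str) -> dict[str, float]:
    """Share of training records per subgroup of a protected attribute.

    Illustrative check only: a skewed ratio is a flag for review,
    not proof of bias on its own."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

training_data = [
    {"gender": "female", "label": 1},
    {"gender": "male", "label": 0},
    {"gender": "male", "label": 1},
    {"gender": "male", "label": 0},
]
print(subgroup_balance(training_data, "gender"))
# {'female': 0.25, 'male': 0.75} -- flag for review
```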
Technical Documentation (Article 11 + Annex IV)
You must maintain comprehensive technical documentation covering: a general description of the system, its elements and the process used to develop it, the data and procedures used for training, validation, and testing, how the system is monitored, functions, and is controlled, the computational resources required, and the logging capabilities and logs the system generates.
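A common practice is to keep the technical file as version-controlled documents generated from a fixed outline. A sketch; the section titles below paraphrase Annex IV rather than quote it, and the helper function is our own convention:

```python
# Section titles paraphrase Annex IV; consult the Act's text for the
# authoritative list. The generator itself is an illustrative convention.
ANNEX_IV_OUTLINE = [
    "General description of the AI system",
    "Elements of the system and its development process",
    "Training, validation, and testing data and procedures",
    "Monitoring, functioning, and control of the system",
    "Computational resources used",
    "Logging capabilities and logs generated",
]

def technical_file_skeleton(system_name: str) -> str:
    lines = [f"# Technical documentation: {system_name}", ""]
    for i, section in enumerate(ANNEX_IV_OUTLINE, start=1):
        lines += [f"## {i}. {section}", "", "TODO", ""]
    return "\n".join(lines)

print(technical_file_skeleton("CV screening module"))
```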
Record-Keeping and Logs (Articles 12 and 19)
High-risk AI systems must have automatic logging capabilities that ensure traceability of the system's operation throughout its lifetime. Providers must keep the logs under their control for a period appropriate to the system's intended purpose, and for at least six months, unless EU or national law (data protection law in particular) provides otherwise.
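In practice this means structured, timestamped, machine-readable decision records rather than free-text application logs. A minimal sketch; field names are our own, and the retention period would be enforced by your log storage rather than this code:

```python
import json, logging, sys
from datetime import datetime, timezone

# Illustrative structured event log: each automated decision becomes a
# timestamped, machine-readable record so it can be traced afterwards.
logger = logging.getLogger("ai_decision_log")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

def log_decision(system_id: str, input_ref: str, output: str, operator: str) -> None:
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
        "operator": operator,     # who was overseeing at the time
    }))

log_decision("cv-screen-v2", "application#8841", "shortlisted", "hr.reviewer.12")
```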
Transparency and Information Provision (Article 13)
Instructions for use must be provided to deployers, covering: the identity of the provider, the system's intended purpose, any residual risks, the level of accuracy, robustness and cybersecurity against which the system has been tested, the circumstances under which it may fail, the interpretability of its outputs, any expected lifetime of the system, and maintenance requirements.
Human Oversight (Article 14)
High-risk systems must be designed to allow appropriate human oversight. The natural persons overseeing the system must be able to: fully understand the system's capabilities and limitations, monitor its operation and detect anomalies, disregard, override, or intervene when appropriate, and ensure the system does not create risks when they are not in full control. "Human in the loop" provisions must be technically implemented, not merely procedurally claimed.
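One way to make the oversight requirement technical rather than procedural is a decision gate that routes high-impact or low-confidence outputs to a human, whose choice is always final. A sketch under our own assumptions about thresholds and field names:

```python
# Illustrative "human in the loop" gate: the system may only act on
# high-impact outputs after an explicit human decision, and the human
# can always override. Names and the 0.8 threshold are assumptions.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    recommendation: str
    confidence: float
    high_impact: bool  # e.g. rejection, termination, benefit denial

def apply_decision(output: ModelOutput, human_choice: str | None = None) -> str:
    if output.high_impact or output.confidence < 0.8:
        if human_choice is None:
            return "PENDING: routed to human reviewer"
        return human_choice           # the human override is final
    return output.recommendation      # low-impact, high-confidence only

print(apply_decision(ModelOutput("reject", 0.95, high_impact=True)))
# -> PENDING: routed to human reviewer
```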
Accuracy, Robustness, and Cybersecurity (Article 15)
Systems must achieve appropriate levels of accuracy, be robust against errors and inconsistencies, and be resilient to adversarial attacks. Technical measures must address data poisoning, model poisoning, adversarial examples, confidentiality attacks, and model evasion.
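Robustness against errors and inconsistencies can be probed by measuring how far accuracy degrades under small input perturbations. A toy sketch; a real Article 15 programme would also cover adversarial examples, data poisoning, and the other attack classes listed above:

```python
import random

def accuracy(model, inputs, labels) -> float:
    return sum(model(x) == y for x, y in zip(inputs, labels)) / len(inputs)

def perturbed(inputs, noise=0.05, seed=0):
    """Add small uniform noise to every feature (one simple perturbation model)."""
    rng = random.Random(seed)
    return [[v + rng.uniform(-noise, noise) for v in x] for x in inputs]

def robustness_gap(model, inputs, labels) -> float:
    """Accuracy drop under perturbation; larger means more fragile."""
    return accuracy(model, inputs, labels) - accuracy(model, perturbed(inputs), labels)

# Toy model: threshold on the first feature
model = lambda x: int(x[0] > 0.5)
inputs = [[0.2], [0.9], [0.51], [0.49]]
labels = [0, 1, 1, 0]
print(f"accuracy drop under noise: {robustness_gap(model, inputs, labels):.2f}")
```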
Conformity Assessment (Article 43)
Before placing a high-risk AI system on the market, providers must conduct a conformity assessment. For most Annex III systems this is a self-assessment based on internal control, but it must produce a technical file demonstrating compliance with all of the requirements above. For certain biometric systems (where harmonised standards have not been fully applied) and for products covered by Annex I sectoral legislation, a third-party assessment by a notified body is required.
CE Marking and EU Declaration of Conformity (Articles 47–48)
After completing the conformity assessment, providers must draw up an EU Declaration of Conformity and affix the CE marking before placing the system on the market.
Registration (Article 49)
Providers must register their high-risk AI systems in the EU database before placing them on the market. Deployers of certain systems must also register their use. The database is set up and maintained by the European Commission.
Step 4: Deployer Obligations
If you are deploying (using, rather than developing) a high-risk AI system, your obligations include:
- Only use systems with CE marking that are covered by a valid EU Declaration of Conformity
- Follow the provider's instructions for use
- Implement human oversight measures as specified
- Monitor performance using post-market monitoring data
- Conduct a fundamental rights impact assessment (Article 27) if you are a body governed by public law, a private entity providing public services, or a deployer of credit-scoring or insurance risk-assessment systems
- Inform employees and their representatives before deploying AI that monitors or manages them
The Cost of Non-Compliance
Penalties under the EU AI Act are tiered:
- Prohibited AI practices: up to €35 million or 7% of global annual turnover
- Violations of high-risk requirements: up to €15 million or 3% of global annual turnover
- Provision of incorrect information: up to €7.5 million or 1.5% of global annual turnover
For SMEs and startups, the lower of the fixed amount and the turnover percentage applies. But even 1–3% of turnover represents a potentially company-defining liability for a small organisation.
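The "whichever is higher" rule (inverted to "whichever is lower" for SMEs) is easy to get wrong, so here is a worked sketch of the cap arithmetic:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float,
             turnover_pct: float, is_sme: bool = False) -> float:
    """Maximum fine under the AI Act's tiered caps.

    Standard rule: the higher of the fixed cap and the turnover share.
    For SMEs and startups: the lower of the two."""
    turnover_share = turnover_eur * turnover_pct
    if is_sme:
        return min(fixed_cap_eur, turnover_share)
    return max(fixed_cap_eur, turnover_share)

# High-risk violation tier: up to €15M or 3% of global annual turnover
print(max_fine(2_000_000_000, 15_000_000, 0.03))            # large firm: €60M
print(max_fine(10_000_000, 15_000_000, 0.03, is_sme=True))  # SME: €300k
```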
The European AI Office and national market surveillance authorities are operationally active. Investigations can be triggered by competitors, regulators, affected individuals, or civil society organisations. Proactive compliance is far cheaper than reactive remediation.
Getting Started with GeraCompliance
GeraCompliance automates the EU AI Act compliance workflow: AI system registry, automated risk tier classification, technical documentation generation, conformity assessment checklists, fundamental rights impact assessment templates, and continuous compliance monitoring with regulatory change alerts.
The 2 August 2026 deadline is non-negotiable. Start your compliance programme now.