5 Mistakes Companies Make With EU AI Act Compliance
Five recurring mistakes we see in EU AI Act programmes — and how to fix each before the 2 August 2026 high-risk deadline bites.
Quick answer
The five mistakes we see most often across 2026 AI Act readiness engagements: (1) treating AI Act compliance as a legal-only project; (2) under-scoping the AI inventory; (3) confusing provider vs deployer obligations; (4) skipping the FRIA or doing it too late; (5) not documenting GPAI vendor arrangements. All five are fixable in weeks, not months — if you start now.
Mistake 1: Treating it as a legal-only project
Legal writes the policy. Product, engineering, and data science design the system. Operations run the human oversight. If your AI Act programme lives only in legal, you will write sensible policy that nobody can operationalise.
Fix: a cross-functional steering group covering, at minimum, legal, product, engineering, data, HR, and procurement. Weekly 30-minute stand-up until the August 2026 deadline, with a named owner in each function for their share of the obligations.
Mistake 2: Under-scoping the AI inventory
The inventory a company declares on its first pass almost always captures only 40-60% of reality. Typically missing: embedded AI in SaaS tools (HR platforms, sales tools, IDE assistants), shadow AI (tools individuals adopt without procurement), research notebooks whose models have quietly reached production, and third-party models wrapped behind APIs.
Fix: three-method discovery: (1) a vendor questionnaire to every SaaS contract over €5,000/year; (2) an employee survey asking which AI tools people use in their roles; (3) a network/SaaS-visibility tool (Productiv, Torii) for the blind spots. Consolidate the findings into a single inventory and re-inventory quarterly.
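What a consolidated inventory record might hold is easier to show than describe. A minimal sketch in Python, assuming an in-house register; the `AISystem` type, its field names, and the example entry are all illustrative, not anything the Act prescribes:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Source(Enum):
    VENDOR_QUESTIONNAIRE = "vendor_questionnaire"  # method 1
    EMPLOYEE_SURVEY = "employee_survey"            # method 2
    SAAS_VISIBILITY = "saas_visibility_tool"       # method 3

@dataclass
class AISystem:
    """One row of the AI inventory; field names are illustrative."""
    name: str
    owner: str                         # accountable person, not just the buying team
    vendor: str | None                 # None for in-house systems
    use_case: str                      # e.g. "CV screening", "IDE code completion"
    discovered_via: list[Source] = field(default_factory=list)
    embedded_in_saas: bool = False     # AI feature inside a broader SaaS product
    last_reviewed: date | None = None  # drives the quarterly re-inventory

# A shadow-AI tool surfaced by the employee survey:
copilot = AISystem(
    name="GitHub Copilot",
    owner="jane.doe",
    vendor="GitHub",
    use_case="IDE code completion",
    discovered_via=[Source.EMPLOYEE_SURVEY],
    embedded_in_saas=True,
)
```

Recording which discovery method surfaced each system also tells you which of the three channels is pulling its weight when you re-inventory.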
Mistake 3: Confusing provider vs deployer
Your obligations differ dramatically depending on whether you are a provider (you develop the system or place it on the market under your name) or a deployer (you use somebody else's AI). Many organisations are both, for different systems. Most small businesses are purely deployers. Many large enterprises misclassify themselves as deployers when, by fine-tuning or substantially modifying a system, they have become providers.
Fix: classify each system with counsel; document the reasoning. Revisit when you fine-tune a model or rebrand it.
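For a first pass before counsel gets involved, the triage can be reduced to a few questions. A deliberately rough sketch; the function name and inputs are ours, and "substantial modification" is a legal judgment no boolean can settle:

```python
def classify_role(develops_in_house: bool,
                  markets_under_own_name: bool,
                  finetunes_or_substantially_modifies: bool) -> str:
    """First-pass role classification; counsel makes the final call.

    This only flags the self-declared deployers who probably are not.
    """
    if develops_in_house or markets_under_own_name:
        return "provider"
    if finetunes_or_substantially_modifies:
        # Fine-tuning or rebranding can turn a deployer into a provider.
        return "likely provider: escalate to counsel"
    return "deployer"
```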
Mistake 4: Skipping the FRIA or running it late
The Fundamental Rights Impact Assessment (Article 27) is required for deployers of certain high-risk systems before first use. Many organisations plan to do it "when we deploy" — by which time the system is already in use. Article 27 says before.
Fix: FRIA before every high-risk deployment, with concrete mitigation actions, not just risk identification. Repeat for material changes. See our FRIA guide.
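One way to make "before, with mitigations" enforceable is to encode it as a release gate. A minimal sketch, assuming a hypothetical internal review workflow; the types and the gate logic are ours, not Article 27's wording:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Mitigation:
    action: str          # concrete step, e.g. "add human review of auto-rejects"
    owner: str
    due: date

@dataclass
class FriaRisk:
    description: str     # e.g. "indirect discrimination in candidate scoring"
    affected_group: str
    mitigations: list[Mitigation]  # concrete measures, not just named risks

def fria_gate(risks: list[FriaRisk],
              fria_completed: date,
              first_use: date) -> bool:
    """Release gate: block deployment unless the FRIA is done before
    first use and every identified risk has at least one mitigation."""
    if fria_completed >= first_use:
        return False  # "when we deploy" is too late
    return all(risk.mitigations for risk in risks)
```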
Mistake 5: No documented GPAI vendor arrangement
General-Purpose AI (GPAI) models (OpenAI GPT, Anthropic Claude, Mistral, Llama) impose obligations on their own providers, and separate downstream obligations fall on anyone who puts them into a high-risk use. A company that integrates GPT-4 into an HR screening product owns the high-risk obligations, not OpenAI.
Fix: for every GPAI vendor, document: the vendor's AI Act Article 53/55 conformity statement, the contract's AI provisions, your fine-tuning (if any), the guardrails you add, the evaluation evidence, and the human-oversight controls. This is the core of your provider documentation if you place the system on market under your name.
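Kept as a structured record per vendor, that evidence pack is easy to audit and keep current. A sketch under the same caveat: `GpaiVendorFile`, its field names, and the document paths are hypothetical placeholders, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class GpaiVendorFile:
    """Evidence pack for one GPAI vendor; all fields are illustrative."""
    vendor: str
    model: str
    conformity_statement_ref: str        # vendor's Article 53/55 documentation
    contract_ai_clauses_ref: str         # where the contract's AI provisions live
    fine_tuning_description: str | None  # None if the model is used as-is
    guardrails: list[str] = field(default_factory=list)
    evaluation_evidence_refs: list[str] = field(default_factory=list)
    human_oversight_controls: list[str] = field(default_factory=list)

# Hypothetical example; the document paths are placeholders.
gpt4_file = GpaiVendorFile(
    vendor="OpenAI",
    model="GPT-4",
    conformity_statement_ref="vendor-docs/openai-gpai-statement.pdf",
    contract_ai_clauses_ref="contracts/openai-msa.pdf",
    fine_tuning_description=None,
    guardrails=["PII filter on inputs", "blocklist for protected attributes"],
    evaluation_evidence_refs=["evals/hr-screening-bias-review.md"],
    human_oversight_controls=["recruiter reviews every automated rejection"],
)
```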
Bonus: treating "compliance" as static
The AI Act is a living framework — Commission guidelines, harmonised standards, notified body approaches, and national regulator priorities will all shift through 2026 and beyond. A compliance programme designed as a one-off project rather than an ongoing capability will degrade within 12 months.
Related reading
Buyer's guide to AI compliance platforms · Conformity assessment · GeraJobs — compliance hiring