Compliance Guides · 8 min read

Fundamental Rights Impact Assessments: What They Are and How to Complete One

The EU AI Act requires deployers of high-risk AI in public services to conduct a Fundamental Rights Impact Assessment. Here is exactly how to complete one — with a practical template.

#FRIA · #fundamental rights · #EU AI Act · #impact assessment · #AI deployers · #public services

What Is a Fundamental Rights Impact Assessment?

Article 27 of the EU AI Act introduces a new obligation for certain deployers of high-risk AI systems: before putting a high-risk AI system into service, they must carry out a Fundamental Rights Impact Assessment (FRIA). This is a document that analyses the potential impact of the AI system on the fundamental rights of affected individuals.

The FRIA obligation applies to:

  • Deployers that are bodies governed by public law (government agencies, local authorities, public hospitals, public universities)
  • Deployers that are private entities providing public services (for example in education, healthcare, social services, or housing)
  • Deployers of the high-risk AI systems listed in Annex III, points 5(b) and 5(c): creditworthiness evaluation and credit scoring, and risk assessment and pricing in life and health insurance

The FRIA must be conducted before the system is put into service, not after deployment and not retrospectively. Its results must then be notified to the relevant national market surveillance authority.

How Does a FRIA Differ from a DPIA?

GDPR requires Data Protection Impact Assessments for high-risk data processing. The EU AI Act FRIA covers different ground. A DPIA focuses specifically on risks to personal data and privacy. A FRIA has a broader scope — it considers impacts on all fundamental rights protected under the EU Charter of Fundamental Rights, including:

  • Human dignity (Article 1)
  • Right to life (Article 2)
  • Prohibition of torture and inhuman treatment (Article 4)
  • Right to liberty and security (Article 6)
  • Respect for private and family life (Article 7)
  • Protection of personal data (Article 8) — this overlaps with DPIA
  • Freedom of thought, conscience, and religion (Article 10)
  • Freedom of expression and information (Article 11)
  • Non-discrimination (Article 21)
  • Equality between women and men (Article 23)
  • Rights of the child (Article 24)
  • Rights of the elderly (Article 25)
  • Integration of persons with disabilities (Article 26)
  • Right to good administration (Article 41)
  • Right to an effective remedy and fair trial (Article 47)

Where processing also involves personal data and triggers DPIA requirements under GDPR, the two assessments should be conducted jointly — the FRIA covering the broader rights analysis, the DPIA covering the specific data protection elements.

What Must a FRIA Cover?

Article 27(2) specifies the minimum content of a FRIA (see the sketch after this list):

  1. A description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose
  2. A description of the period of time within which, and the frequency with which, the system is intended to be used
  3. The categories of natural persons and groups likely to be affected in the specific context of use
  4. The specific risks of harm likely to have an impact on those categories of persons or groups, taking into account the information given by the provider
  5. A description of the implementation of human oversight measures
  6. The measures to be taken in case of materialisation of risks, including the arrangements for internal governance and complaint mechanisms
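Some teams find it easier to keep these six elements together as a single structured record before drafting the narrative document. Below is a minimal sketch in Python; the class and field names are illustrative assumptions, not something Article 27 or any particular tool prescribes.

```python
from dataclasses import dataclass

@dataclass
class HumanOversight:
    overseers: list[str]        # roles responsible for reviewing the system's outputs
    training: str               # how overseers learn the system's capabilities and limits
    override_mechanism: str     # when and how an output can be overridden

@dataclass
class FriaRecord:
    intended_processes: str             # point 1: deployer processes using the system
    period_of_use: str                  # point 2: how long the deployment will run
    frequency_of_use: str               # point 2: how often the system will be used
    affected_groups: list[str]          # point 3: categories of persons and groups affected
    risks_of_harm: list[str]            # point 4: specific risks to those groups
    human_oversight: HumanOversight     # point 5: oversight measures in this deployment
    mitigation_measures: list[str]      # point 6: measures if risks materialise
    complaint_mechanism: str            # point 6: how affected individuals raise concerns
```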

Step-by-Step: Completing Your FRIA

Step 1: Define the Scope of Use

Document precisely how you intend to use the AI system. Be specific — vague descriptions of intended purpose do not satisfy the requirement. Include:

  • The specific decisions the AI system will inform or make
  • The processes it will be integrated into
  • The data it will receive and produce
  • Who will use the outputs (a human decision-maker, an automated system, etc.)
  • The deployment environment

Example (HR AI screening tool): "The system will be used in the initial screening stage of recruitment for all roles. It will receive candidate CVs and produce a suitability score and brief rationale. HR recruiters will use this score as one input in determining which candidates to advance to telephone screening. The system will not make final hiring decisions."
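One way to keep the description this specific is to record each element as a named field rather than free prose. Here is a sketch of the HR example above; the keys and the environment value are hypothetical.

```python
# Scope-of-use record for the HR screening example above (field names are illustrative).
scope_of_use = {
    "decisions_informed": "which candidates advance to telephone screening",
    "integrated_process": "initial screening stage of recruitment, all roles",
    "inputs": ["candidate CVs (free text)"],
    "outputs": ["suitability score", "brief rationale"],
    "output_consumers": ["HR recruiters (human decision-makers)"],
    "makes_final_decision": False,   # the system does not make final hiring decisions
    "deployment_environment": "internal applicant-tracking workflow",
}
```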

Step 2: Define Frequency and Duration

How often will the system be used? How long will the deployment last? What is the volume of decisions influenced?

Example: "The system will be used continuously for all new job applications from date of deployment. We expect approximately 500 applications per month across 50 open roles. The deployment is intended for an indefinite period subject to annual review."

Step 3: Identify Affected Groups

Who will be affected by the AI system's outputs? Consider direct effects (people the AI makes decisions about) and indirect effects (family members, people affected by derivative decisions).

For employment AI, affected groups include: all job applicants, with particular attention to protected groups — women, people from racial and ethnic minorities, people with disabilities, older candidates, people with non-traditional career trajectories. The FRIA must specifically identify groups that may be more vulnerable to harm from the AI system.

For a public benefits eligibility AI, affected groups might include: benefit claimants, carers, people in financial hardship, people with disabilities, migrants.

Step 4: Identify Specific Risks of Harm

For each affected group and each fundamental right identified as potentially impacted, assess the likelihood and severity of harm. Be specific about the mechanism of harm.

This is the substantive core of the FRIA. Generic statements ("the system may discriminate") do not satisfy the requirement. You need to analyse:

  • How could the AI system produce outputs that harm this right for this group?
  • What training data biases could perpetuate historical discrimination?
  • What features does the system use, and could those features serve as proxies for protected characteristics?
  • What are the consequences of errors — false positives and false negatives?
  • Are the consequences reversible?
  • Is there a meaningful appeals mechanism?

Example risk analysis for HR AI:

"Right to non-discrimination (Article 21): The system was trained on historical hiring data. If historical hiring decisions were biased against women in technical roles, the model may have learned to associate male-coded language with suitability. The system uses free-text CV analysis, and gendered language patterns in CVs may serve as proxies for protected characteristics. Risk: Medium-High. Severity: significant — incorrect scoring reduces employment opportunities for qualified candidates from protected groups. Frequency: approximately 50 decisions per month per affected group."

Step 5: Describe Human Oversight Measures

Article 14 of the EU AI Act requires that high-risk AI systems allow for appropriate human oversight. Your FRIA must document specifically how this oversight is implemented in your deployment context:

  • Who are the human overseers? What are their roles?
  • What training do they receive to understand the AI system's capabilities and limitations?
  • What information do they receive alongside the AI's output to enable meaningful oversight?
  • When and how can they override the AI system's output?
  • Are they under pressure (time, volume) that would make meaningful oversight impractical?

A rubber-stamp human review where overseers lack time, information, or authority to override the AI is not meaningful human oversight. Regulators are aware of this pattern. The FRIA must describe genuine oversight capacity.
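One way to test whether the documented arrangement is more than a rubber stamp is to check it against the questions above. A minimal sketch follows; the field names and the five-minute threshold are assumptions, not regulatory criteria.

```python
from dataclasses import dataclass

@dataclass
class OversightArrangement:
    overseer_roles: list[str]        # who reviews the AI's outputs
    trained_on_system_limits: bool   # trained on the system's capabilities and limitations?
    sees_input_and_rationale: bool   # enough context to judge each output?
    can_override: bool               # authority to overrule the system?
    minutes_per_case: float          # time actually available per decision

def oversight_gaps(o: OversightArrangement, min_minutes: float = 5.0) -> list[str]:
    """Return the checklist questions above that this arrangement fails."""
    gaps = []
    if not o.overseer_roles:
        gaps.append("no named human overseers")
    if not o.trained_on_system_limits:
        gaps.append("overseers not trained on the system's capabilities and limitations")
    if not o.sees_input_and_rationale:
        gaps.append("overseers lack the information needed for meaningful review")
    if not o.can_override:
        gaps.append("overseers cannot override the system's output")
    if o.minutes_per_case < min_minutes:
        gaps.append("review time per case is too short for meaningful oversight")
    return gaps
```

Any gap returned here is a sign that the oversight described in the FRIA may not hold up under the volume and time pressure of real deployment.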

Step 6: Mitigation Measures and Governance

For each identified risk, document the measures you are taking to mitigate it. Also document:

  • Internal governance structure: who is accountable for AI system performance?
  • Monitoring processes: how will you detect if harms are materialising in practice?
  • Complaint mechanisms: how can affected individuals raise concerns?
  • Escalation path: what happens if the system is found to be causing harm?
  • Review schedule: when and how will you update this FRIA? (A sketch of explicit review triggers follows this list.)
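The monitoring and review points above are easier to evidence when the triggers are written down explicitly. The sketch below uses assumed thresholds and placeholder values; none of them come from the Act.

```python
from datetime import date, timedelta

# Hypothetical governance record for the deployment (all values are placeholders).
governance = {
    "accountable_owner": "Head of HR Operations",
    "complaint_channel": "ai-concerns@example.org",
    "last_fria_review": date(2025, 1, 15),
    "review_interval": timedelta(days=365),   # assumed annual review cycle
}

def fria_review_due(record: dict, today: date,
                    system_changed: bool, open_complaints: int) -> bool:
    """Decide whether the FRIA should be revisited, using illustrative triggers."""
    overdue = today - record["last_fria_review"] >= record["review_interval"]
    # A substantial change to the system or unresolved complaints also triggers review.
    return overdue or system_changed or open_complaints > 0
```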

Step 7: Notify the Market Surveillance Authority

Article 27(3) requires deployers to notify the relevant national market surveillance authority of the results of the assessment, submitting a filled-out template questionnaire that the EU AI Office is due to publish. Check with your national authority for the current notification procedure, which varies by country.

FRIA vs. Proportionality: Scaling the Assessment

The depth of the FRIA should be proportionate to the risks involved. A FRIA for an AI system that makes binding decisions about individuals' access to public benefits (high consequence, broad deployment, vulnerable population) should be substantially more detailed than a FRIA for an AI system used in a limited pilot with low-consequence outputs.

The Recitals to the Act acknowledge this: "The obligation to carry out a fundamental rights impact assessment should not apply to all deployers, but only to those cases where there is a high potential for harm."

Using GeraCompliance for FRIA

GeraCompliance provides a guided FRIA workflow that:

  • Walks through each required section with structured templates
  • Identifies applicable fundamental rights based on your AI system's use case
  • Pre-populates risk factors from the provider's technical documentation
  • Generates a formatted FRIA document suitable for regulatory submission
  • Links the FRIA to your AI system registry for continuous tracking
  • Alerts you when the FRIA should be reviewed based on system changes

The FRIA is a legal document that must withstand regulatory scrutiny. GeraCompliance helps you get it right the first time, before a regulator asks to see it.