What Is the EU AI Act’s “High-Risk” Classification?


Under the European Union’s Artificial Intelligence Act, the “High-Risk” classification is a regulatory designation for AI systems that have the potential to significantly impact the health, safety, or fundamental rights of individuals. Unlike prohibited AI practices (which are banned outright) or minimal-risk systems (which face few requirements), high-risk systems are permitted but must comply with the most stringent technical and legal standards under the regulation.

This classification sits at the center of the Act’s risk-based approach, ensuring that powerful tools deployed in sensitive sectors operate with a high degree of transparency and accountability.

Criteria for High-Risk Classification

An AI system is classified as high-risk if it meets one of two primary criteria defined by the regulation:

  • Safety Components of Regulated Products: This includes AI used as a safety component in products already subject to third-party conformity assessments under existing EU laws. Examples include medical devices, civil aviation systems, toys, and industrial machinery.
  • Specific Use Cases (Annex III): The Act lists several stand-alone areas where AI use is always considered high-risk due to its potential impact on life outcomes.

Annex III: High-Risk Use Case Areas

The primary domains designated as high-risk under Annex III include:

  • Biometrics: Systems used for remote biometric identification, emotion recognition, or biometric categorization, with specific exceptions for standard identity verification.
  • Critical Infrastructure: AI used in the management and operation of essential services like road traffic, water supply, gas, heating, and electricity.
  • Education and Training: Systems that determine access to educational institutions or evaluate learning outcomes and student behavior during assessments.
  • Employment and HR: AI used for recruitment, resume sorting, promotion or termination decisions, and monitoring worker performance.
  • Essential Public and Private Services: Systems assessing creditworthiness for loans, evaluating life or health insurance pricing, and triaging emergency calls for first responders.
  • Law Enforcement: Tools used for evidence evaluation, polygraph-style assessments, or determining an individual’s risk of reoffending.
  • Migration and Border Control: AI for automated visa examination, asylum application processing, or detecting irregular migration patterns.
  • Administration of Justice: Systems intended to assist judicial authorities in researching and interpreting facts or applicable law.

Core Obligations for High-Risk Providers

Organizations that develop or place high-risk AI on the market (referred to as Providers) must meet a rigorous set of requirements both before and after deployment.

Risk Management System

Providers must establish a documented, ongoing risk management process. This means identifying potential risks to health, safety, and fundamental rights throughout the system’s entire lifecycle and putting mitigation measures in place.

Data Governance

Training, validation, and testing datasets must meet high quality standards. They are required to be relevant, representative, and as free from errors as possible to prevent discriminatory outcomes or embedded biases.

Technical Documentation and Logging

A detailed technical documentation file must be maintained to demonstrate compliance. The AI system must also be designed to automatically log events, ensuring traceability and ongoing performance monitoring.
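The automatic-logging requirement can be pictured as structured, append-only event records that later support traceability and audits. A minimal illustrative sketch in Python follows; the field names and schema here are assumptions for illustration, since the Act mandates logging capability but does not prescribe a record format:

```python
import json
import time
import uuid

def log_event(log_file, event_type, payload):
    """Append one traceability record for an AI system event.
    The schema is illustrative, not prescribed by the AI Act."""
    record = {
        "event_id": str(uuid.uuid4()),   # unique ID for this event
        "timestamp": time.time(),        # when the event occurred
        "event_type": event_type,        # e.g. "inference", "override"
        "payload": payload,              # inputs/outputs being traced
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")  # one JSON record per line
    return record

record = log_event("audit.jsonl", "inference",
                   {"input_id": "req-001", "score": 0.87})
```

Appending one JSON object per line (JSON Lines) keeps each event immutable and easy to replay during post-market monitoring.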

Transparency and Human Oversight

Users must receive clear instructions for use. High-risk systems must also be designed to support effective human oversight, meaning a person must be able to understand, monitor, and, when necessary, override the AI’s output.
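One common way to realize the override requirement in software is a human-in-the-loop gate: the AI’s recommendation is only provisional, and a reviewer’s decision, when given, takes precedence. The sketch below is a hypothetical pattern, not a mechanism defined by the Act; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject: str        # what the decision is about
    ai_decision: str    # what the model proposes
    confidence: float   # model confidence score

def apply_with_oversight(rec, reviewer_decision=None):
    """Return the final decision: a human reviewer's choice
    overrides the AI output whenever one is supplied."""
    if reviewer_decision is not None:
        return {"final": reviewer_decision, "overridden": True}
    return {"final": rec.ai_decision, "overridden": False}

rec = Recommendation("loan-42", "deny", 0.91)
print(apply_with_oversight(rec, reviewer_decision="approve"))
# -> {'final': 'approve', 'overridden': True}
```

Recording whether an output was overridden (as the `overridden` flag does) also feeds the logging obligation described above.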

Accuracy and Cybersecurity

The AI must achieve and maintain appropriate levels of accuracy, robustness, and cybersecurity. This includes resilience against adversarial attacks, where third parties attempt to manipulate the model’s behavior or outputs.

Compliance Timeline

The high-risk requirements phase in over time rather than all at once. The primary application date for high-risk AI obligations under Annex III is August 2, 2026, with obligations for AI used as a safety component of regulated products (Annex I) following on August 2, 2027. Organizations should treat August 2, 2026 as the key benchmark for internal readiness, documentation, and data auditing efforts.

Consequences of Non-Compliance

Failing to meet high-risk standards carries significant financial exposure. Violations related to prohibited AI practices carry the steepest penalties, with fines of up to €35 million or 7% of a company’s global annual turnover, whichever is higher. Non-compliance with high-risk system obligations specifically can result in fines of up to €15 million or 3% of global annual turnover, again whichever is higher.
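The “whichever is higher” rule means the applicable ceiling is simply the maximum of the fixed amount and the turnover percentage. A quick illustration (the company turnover figure is made up):

```python
def max_fine(fixed_cap_eur, turnover_pct, global_turnover_eur):
    """EU AI Act fine ceilings take the higher of a fixed cap
    and a percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# High-risk obligation breach: up to EUR 15M or 3% of turnover.
# For a hypothetical company with EUR 2 billion in turnover:
print(max_fine(15_000_000, 0.03, 2_000_000_000))  # -> 60000000.0
```

For large companies the percentage-based ceiling usually dominates; the fixed amount matters mainly for smaller firms whose 3% share would fall below €15 million.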
