EU AI Act compliance in 2025: what organizations need to know
A practical guide to EU AI Act compliance requirements taking effect in 2025, with implementation timelines.
The European Union's Artificial Intelligence Act entered into force on August 1, 2024, representing "the world's first comprehensive legal framework for AI." This regulation applies to any organization developing, deploying, or using AI systems in the EU market, regardless of location.
Key implementation dates
- February 2, 2025: Prohibitions and AI literacy obligations effective
- August 2, 2025: Governance rules and General Purpose AI model obligations effective
- August 2, 2026: Full regulation applies (24 months after entry into force)
Risk-based framework
Unacceptable risk (prohibited)
The following systems are banned as of February 2, 2025:
Social scoring
- Government or private sector systems evaluating people based on social behavior or personal characteristics
- Workplace behavior scoring affecting access to services
Biometric categorization
- Inferring sensitive characteristics (race, political opinions, sexual orientation) from biometric data
- Exception: Labeling biometric datasets for bias detection
Real-time remote biometric identification in public spaces
- Live facial recognition in public by law enforcement
- Limited exceptions for serious crimes (terrorism, kidnapping)
- Requires judicial authorization
Predictive policing
- Systems predicting the risk that an individual will commit a criminal offense based solely on profiling or assessment of personality traits
- Exception: Systems supporting human assessment grounded in objective, verifiable facts directly linked to criminal activity
Emotion recognition in workplaces and educational institutions
- AI detecting emotions in employment or education settings
- Exception: Medical or safety reasons
Untargeted scraping of facial images
- Indiscriminate collection of facial images from internet or CCTV
Exploitation systems
- AI exploiting vulnerabilities related to age, disability, or a person's social or economic situation
- Manipulative or deceptive AI systems
Penalties: Up to 35 million EUR or 7% of global annual turnover, whichever is higher.
High-risk AI systems
These systems are subject to stringent compliance obligations:
Employment and HR
- Recruitment and selection systems
- Promotion and termination decision systems
- Task allocation and performance monitoring
Education and vocational training
- Admission and enrollment systems
- Assessment and evaluation tools
- Exam proctoring systems
Essential services
- Credit scoring and creditworthiness assessment
- Insurance risk assessment and pricing
- Emergency service dispatching
Law enforcement
- Individual risk assessment for offense prediction
- Polygraph and similar tools
- Evidence reliability assessment
Migration and border control
- Asylum and visa application assessment
- Lie detection systems
- Risk assessment for security
Administration of justice
- Legal research and case outcome prediction affecting court decisions
Critical infrastructure
- Safety component management in road traffic, water, gas, electricity
Healthcare
- Medical device AI for diagnosis or treatment decisions
Limited risk (transparency requirements)
Chatbots
- Users must be informed they're interacting with AI
- Exception: Obvious from context
Emotion recognition systems
- Users must be notified when AI detects or infers emotions
Biometric categorization
- Users must be informed of biometric categorization
Generated content (deepfakes)
- AI-generated or manipulated images, audio, video must be labeled
- Particularly synthetic media resembling real persons, places, events
Recommendation systems
- Platforms covered by the EU Digital Services Act must disclose when AI-driven recommender systems influence content a user sees
- Users must have access to an alternative not based on profiling
Penalties for limited risk violations: Up to 15 million EUR or 3% of global annual turnover, whichever is higher.
Minimal risk
The vast majority of AI applications fall into the minimal risk category and face no mandatory requirements under the Act, though voluntary adherence to codes of conduct is encouraged.
Examples include:
- AI-enabled spam filters
- Inventory management systems
- Manufacturing optimization tools
- AI-assisted drafting tools not deployed in high-risk contexts
- Video games with adaptive AI
Organizations deploying minimal risk AI are still subject to general EU law -- GDPR, product safety directives, consumer protection regulations -- but face no AI Act-specific obligations beyond basic AI literacy for staff who work with these systems.
General Purpose AI models
The EU AI Act introduced a distinct regulatory tier for General Purpose AI (GPAI) models -- large foundation models capable of performing a wide range of tasks, including large language models. These obligations became effective August 2, 2025.
Standard GPAI obligations
All GPAI model providers must:
- Maintain and publish technical documentation covering training methodologies, data sources, compute used, capabilities, and known limitations
- Provide information and documentation to downstream providers who integrate the model into their systems
- Comply with EU copyright law, including obligations related to training data
- Publish a summary of training data that is comprehensive enough to allow assessment of copyright compliance
Systemic risk GPAI models
GPAI models exceeding 10^25 FLOPs in training compute are classified as posing systemic risk and face additional requirements:
- Conduct adversarial testing, red-teaming, and model evaluation before and after deployment
- Assess and mitigate systemic risks including critical infrastructure disruption, serious cyberattacks, and large-scale societal harm
- Incident reporting: notify the EU AI Office of serious incidents and corrective measures taken within defined timeframes
- Maintain cybersecurity protections commensurate with the risk profile
- Report energy consumption of training runs
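The 10^25 FLOPs threshold can be sanity-checked with the widely used approximation that dense transformer training costs roughly 6 FLOPs per parameter per training token. This heuristic is not part of the Act, and the model size and token count below are purely illustrative:

```python
# Back-of-envelope training-compute estimate using the common
# FLOPs ~ 6 * N_parameters * N_tokens approximation for dense transformers.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs threshold set by the Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate; real accounting must cover all training runs."""
    return 6.0 * n_params * n_tokens

# Illustrative example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(n_params=70e9, n_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
if flops >= SYSTEMIC_RISK_THRESHOLD:
    print("Presumed systemic risk under Article 51")
else:
    print("Below the systemic risk presumption threshold")
```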
In August 2025, the European AI Office published an initial list of models likely meeting the systemic risk threshold. Organizations operating GPAI APIs or embedding foundation models into customer-facing products should evaluate whether their upstream model providers have complied with these obligations, since downstream use can create liability exposure.
Compliance timelines and key dates
Understanding the phased rollout is essential for building a realistic compliance roadmap.
| Date | Obligation | Who It Affects |
|---|---|---|
| August 1, 2024 | Act enters into force | All |
| February 2, 2025 | Prohibited practices banned; AI literacy requirements apply | All EU AI deployers |
| August 2, 2025 | GPAI model obligations; EU AI Office governance rules; notified body designations begin | GPAI providers; high-risk AI developers |
| February 2, 2026 | Commission guidelines on practical high-risk classification due | Organizations classifying their AI systems |
| August 2, 2026 | Full regulation applies to all high-risk AI (Annex III); limited risk transparency rules enforceable | All organizations deploying AI in the EU |
| August 2, 2027 | High-risk AI in Annex I (AI safety components of regulated products) must comply; GPAI models already on the market before August 2025 must be brought into compliance | Manufacturers of AI-enabled regulated products; pre-existing GPAI providers |
The February 2, 2025 deadline has already passed. Organizations that have not yet taken inventory of their AI systems and assessed whether any fall into prohibited or high-risk categories are already behind the compliance curve on the literacy and prohibition requirements.
The August 2026 full enforcement date is the critical horizon for most enterprise organizations. The two-year gap between entry into force and full application sounds like a long runway, but conformity assessments, technical documentation, and human oversight implementations for high-risk systems typically take 12--18 months from initiation to completion.
Technical requirements for high-risk AI systems
Organizations deploying high-risk AI -- including HR screening tools, credit scoring systems, medical AI, and education assessment platforms -- must implement specific technical capabilities before deployment and maintain them throughout the system's operational life.
Risk management system
A risk management system must be established, implemented, documented, and maintained throughout the entire lifecycle of the AI system. This is not a one-time assessment -- it requires:
- Identification and analysis of known and reasonably foreseeable risks
- Estimation and evaluation of risks that may emerge when the system is used as intended and in conditions of reasonably foreseeable misuse
- Evaluation of risks in light of data gathered from post-market monitoring
- Adoption of appropriate risk management measures
The risk management system must be subject to regular systematic updates.
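As a sketch of what "lifecycle" means in practice, the loop below re-scores each identified risk whenever post-market findings arrive and records the review date. The structure, field names, and scoring logic are illustrative assumptions, not anything the Act prescribes:

```python
from datetime import date

def risk_review_cycle(risks: list[dict], post_market_findings: dict,
                      mitigate, today: date) -> list[dict]:
    """One systematic update pass over a risk register (illustrative)."""
    for risk in risks:
        # Fold new evidence from post-market monitoring into the estimate.
        observed = post_market_findings.get(risk["id"], 0.0)
        risk["likelihood"] = max(risk["likelihood"], observed)
        # Adopt appropriate measures when the risk exceeds tolerance.
        if risk["likelihood"] * risk["severity"] > risk["tolerance"]:
            mitigate(risk)
        risk["last_reviewed"] = today  # evidence of regular updates
    return risks
```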
Data and data governance
High-risk AI systems must use training, validation, and testing data that meets quality criteria. Specifically:
- Data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete
- Data governance practices must cover data collection, annotation, storage, filtering, cleaning, and enrichment
- Examination must be performed for possible biases that could affect health, safety, or fundamental rights
- Where data involves personal data, GDPR compliance requirements apply in addition to AI Act requirements
Many organizations underestimate the data governance burden. Demonstrating that training data was collected lawfully, was examined for bias, and remains traceable requires documentation infrastructure that most ML teams have not historically maintained.
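A minimal example of what a documented bias examination can look like: compare positive-label rates across a protected attribute in the training data and flag large disparities for review. The 0.8 ratio used here is an internal policy choice borrowed from disparate-impact practice, not a threshold the Act specifies:

```python
from collections import defaultdict

def positive_rate_by_group(rows):
    """rows: iterable of (group, label) pairs with label in {0, 1}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in rows:
        counts[group][0] += int(label)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

rows = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rate_by_group(rows)
if min(rates.values()) / max(rates.values()) < 0.8:  # illustrative threshold
    print(f"Disparity flagged for documented review: {rates}")
```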
Technical documentation
Before placing a high-risk AI system on the EU market or putting it into service, providers must compile technical documentation demonstrating compliance. This documentation must remain current throughout the system's life. Required content includes:
- General description of the AI system including its intended purpose, geographic scope, and categories of persons affected
- Description of the system's components, including hardware, software, and training data
- Description of development methodology and third-party tools and data used
- Validation and testing procedures, including the metrics used, testing datasets, and results
- Post-market monitoring plan
- Systems and measures for human oversight
This documentation is not internal-only. It must be made available to national supervisory authorities on request, and certain elements must be provided to downstream deployers who integrate the system.
Transparency and user information
Providers of high-risk AI systems must ensure deployers receive information covering:
- The identity of the provider and their contact details
- The AI system's capabilities and limitations
- Performance metrics on relevant groups of persons
- Known or foreseeable risks and data requirements
- Human oversight measures, including technical means for interpretation of outputs
- Expected lifespan and maintenance/update requirements
Human oversight
High-risk AI systems must be designed and developed so that they can be effectively overseen by natural persons during the period of use. Oversight measures must enable the individuals responsible to:
- Fully understand the system's capabilities and limitations
- Remain aware of potential tendencies to over-rely on AI output (automation bias)
- Correctly interpret the system's output
- Intervene or interrupt the system through a stop function
This requirement is not satisfied by a nominal override button. Regulators have signaled in guidance that meaningful human oversight requires that the person reviewing AI output has sufficient information to exercise genuine judgment -- not merely rubber-stamp AI decisions.
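One way to make that concrete is to gate consequential outputs behind a reviewer who receives the model's rationale and documented limitations alongside the answer. Everything below, from the field names to the confidence threshold, is a hedged sketch rather than a prescribed mechanism:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    output: str
    confidence: float
    rationale: str          # reasoning trace or feature attribution
    known_limitations: str  # drawn from the technical documentation

def route(decision: Decision, consequential: bool) -> str:
    """Send consequential or low-confidence outputs to a human reviewer."""
    if consequential or decision.confidence < 0.9:  # illustrative threshold
        # The reviewer sees the why and the limits, not just the answer,
        # so review is genuine judgment rather than a rubber stamp.
        return ("HUMAN REVIEW\n"
                f"Output: {decision.output}\n"
                f"Rationale: {decision.rationale}\n"
                f"Limitations: {decision.known_limitations}")
    return decision.output
```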
Accuracy, robustness, and cybersecurity
High-risk AI systems must achieve appropriate levels of accuracy throughout their lifecycle. Providers must:
- Declare the level of accuracy in technical documentation
- Specify performance metrics including accuracy metrics and their relevance for the intended purpose
- Include testing against reasonably foreseeable adversarial misuse
- Implement resilience against attempts by unauthorized third parties to alter system behavior
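As a sketch of routine robustness testing along the lines of the last two items above, the probe below measures how often predictions stay stable under small, plausible input perturbations. The perturbation function and any pass threshold are internal choices the Act leaves to the provider:

```python
import random

def robustness_rate(model, inputs, perturb, n_variants=5):
    """Fraction of inputs whose prediction survives perturbation."""
    stable = 0
    for x in inputs:
        baseline = model(x)
        if all(model(perturb(x)) == baseline for _ in range(n_variants)):
            stable += 1
    return stable / len(inputs)

# Illustrative usage: a toy threshold classifier under additive noise.
model = lambda x: x > 0.5
perturb = lambda x: x + random.uniform(-0.05, 0.05)
print(robustness_rate(model, [0.1, 0.52, 0.9, 0.3], perturb))
```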
How RAIL Score maps to EU AI Act requirements
The EU AI Act does not mandate any specific evaluation tool or methodology. What it mandates are outcomes: demonstrable safety, documented performance, bias assessment, and ongoing monitoring. RAIL Score addresses these requirements across its 8 dimensions in ways that directly support compliance evidence generation.
Reliability → Accuracy and robustness requirements
The Act requires providers to document AI system accuracy metrics and test for robustness under adversarial conditions. RAIL Score's Reliability dimension provides a continuous, quantified signal for factual accuracy and calibration. Running RAIL evaluations on a representative test set generates machine-readable accuracy evidence that can be included in technical documentation. Reliability scores below 7.0 on sensitive outputs indicate a robustness gap requiring remediation before deployment.
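A deployment gate built on that signal might look like the sketch below. The `evaluate` callable stands in for whatever RAIL Score client interface your stack exposes; its name and return shape here are assumptions for illustration:

```python
RELIABILITY_THRESHOLD = 7.0  # internal remediation threshold noted above

def reliability_gate(outputs, evaluate) -> bool:
    """Pass only if no test-set output scores below the threshold."""
    scores = [evaluate(text)["reliability"] for text in outputs]
    failing = sum(1 for s in scores if s < RELIABILITY_THRESHOLD)
    print(f"{failing}/{len(scores)} outputs below {RELIABILITY_THRESHOLD}")
    return failing == 0  # gate deployment; evidence feeds the tech docs
```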
Fairness → Data bias assessment and non-discrimination
High-risk AI data governance requirements include examination for biases that could affect fundamental rights. RAIL Score's Fairness dimension evaluates outputs for differential treatment across demographic groups. Systematic fairness scoring across a stratified test dataset provides documented bias assessment evidence. For HR and credit scoring systems -- where fairness failures carry legal liability beyond the AI Act itself -- ongoing production fairness monitoring generates the audit trail required by both regulators and internal governance boards.
Safety → Risk management system
The Act's risk management requirement demands identification and mitigation of risks associated with the system's use. RAIL Score's Safety dimension flags harmful, toxic, or dangerous outputs in production. Integrating Safety scoring into the deployment pipeline provides continuous evidence that the risk management system is operational, not just documented.
Transparency → User information and transparency obligations
Limited risk transparency requirements and high-risk user information requirements both demand that AI systems communicate their nature, limitations, and reasoning appropriately. RAIL Score's Transparency dimension evaluates whether outputs honestly represent uncertainty and limitations. Low transparency scores on outputs involving consequential recommendations (credit, medical, legal) signal compliance gaps in the user-facing layer.
Privacy → GDPR alignment and data governance
High-risk AI data requirements overlap significantly with GDPR. RAIL Score's Privacy dimension identifies when outputs contain or solicit unnecessary personal data, which is a compliance signal for both regimes. For systems processing health, financial, or other sensitive data categories, Privacy scoring provides documented evidence of ongoing data minimization monitoring.
Accountability → Technical documentation and traceability
The Act's documentation requirements demand traceable reasoning and auditable decision paths. RAIL Score's Accountability dimension measures whether outputs provide traceable reasoning or leave users in a decisional black box. For systems subject to human oversight requirements, accountability scores below 6.0 on consequential outputs indicate that the oversight mechanism lacks the information it needs to function.
Inclusivity and User Impact → Fundamental rights and intended purpose
The Act requires high-risk AI to perform consistently across different groups and to actually deliver its intended purpose. RAIL Score's Inclusivity dimension monitors for differential output quality across user groups. User Impact dimension scores measure whether the system is delivering value aligned with its stated purpose -- a direct input to conformity assessment.
Enforcement and penalties
The EU AI Act establishes a three-tier penalty structure:
Tier 1 -- Prohibited practices: Up to 35 million EUR or 7% of worldwide annual turnover (whichever is higher). This tier applies to violations of the prohibitions that took effect February 2, 2025. Deploying a prohibited social scoring system or real-time biometric identification system in public spaces falls here.
Tier 2 -- Other violations: Up to 15 million EUR or 3% of worldwide annual turnover. This covers failures to comply with high-risk AI requirements, GPAI obligations, and limited risk transparency requirements.
Tier 3 -- Incorrect or misleading information: Up to 7.5 million EUR or 1% of worldwide annual turnover. This applies to providing incorrect, incomplete, or misleading information to notified bodies or national competent authorities in the context of conformity assessment.
For SMEs and startups, penalty caps are calculated at the lower percentage tier of global turnover or the absolute amount, whichever is lower.
Enforcement structure
Each EU member state must designate one or more national competent authorities (NCAs) responsible for enforcement. NCAs have powers to:
- Access AI systems, their underlying models, and training data
- Request documentation and explanations
- Conduct audits
- Issue binding orders to modify, suspend, or withdraw AI systems
- Impose administrative fines
The European AI Office, established within the European Commission, has direct enforcement authority over GPAI providers and coordinates cross-border cases involving high-risk AI.
Extraterritorial reach
The AI Act applies to providers placing AI systems on the EU market regardless of where the provider is established. It also applies to deployers located in the EU. This means:
- A US company offering an HR screening tool to EU customers must comply with high-risk requirements
- A UK company using a high-risk AI system to make decisions about EU residents must comply with deployer obligations
- A non-EU GPAI model provider whose model is used in the EU must comply with GPAI obligations
The territorial scope mirrors GDPR's approach: where the data subject (or AI-affected person) is located determines applicability, not where the provider is incorporated.
Practical compliance roadmap
Months 1--3: Inventory and triage
The first priority is knowing what you have. Organizations routinely discover AI systems they did not formally document as "AI" -- rule-based decision engines embedded in HR software, scoring models in credit workflows, automated content moderation systems.
Actions:
- Conduct a comprehensive AI system inventory across all business units
- For each system, assess which risk category applies using the EU AI Office's self-assessment guidance
- Identify any systems that may meet the prohibited practices definition -- these require immediate review with legal counsel
- Flag all high-risk systems for Phase 2 work
- Document limited risk systems requiring transparency disclosures
- Ensure AI literacy training is underway for all staff working with AI systems (February 2025 obligation)
Deliverable: AI system register with risk classification, responsible owner, and compliance gap assessment for each system.
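The register itself can be as simple as one typed record per system. The schema below mirrors the deliverable's fields but is an illustrative assumption, not a mandated format:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    business_unit: str
    risk_category: RiskCategory
    owner: str                                     # responsible owner
    gaps: list[str] = field(default_factory=list)  # open compliance gaps

register = [
    AISystemRecord("CV screening model", "HR", RiskCategory.HIGH,
                   owner="jane.doe", gaps=["bias examination docs missing"]),
]
```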
Months 4--6: High-risk compliance foundations
For each high-risk AI system identified, begin building the required compliance infrastructure in parallel.
Actions:
- Commission technical documentation for each high-risk system; assign a product or engineering owner responsible for keeping it current
- Assess training data governance: identify whether bias examination documentation exists; if not, commission a retrospective assessment
- Map human oversight mechanisms: document who reviews AI outputs for each high-risk system and whether they have sufficient information to exercise genuine judgment
- Engage a notified body early if conformity assessment will require third-party involvement (Annex I regulated products)
- Begin implementing RAIL Score or equivalent continuous evaluation for production systems -- this generates ongoing monitoring evidence required by the risk management system obligation
- Assess GPAI dependencies: identify which foundation models underlie your systems and verify their GPAI compliance status
Deliverable: Technical documentation drafts, data governance gap analysis, human oversight assessment, continuous monitoring implementation.
Months 7--12: Conformity assessment and operational readiness
Actions:
- Complete conformity assessments for high-risk systems (self-assessment for most Annex III systems; third-party assessment for certain Annex I systems)
- Register high-risk AI systems in the EU-wide AI database once it opens (expected late 2025 for providers, mid-2026 for deployers)
- Implement post-market monitoring plans with defined metrics, review cadences, and escalation paths
- Establish incident reporting procedures: know the timeline and content requirements for reporting serious incidents to national authorities
- Complete transparency obligation implementations for limited risk systems -- disclosure mechanisms, labeling for AI-generated content
- Document the AI governance structure: roles, accountability, escalation paths, and board-level oversight
Deliverable: Completed conformity assessments, EU AI database registrations, operational post-market monitoring, incident reporting procedures in place.
Ongoing: Maintenance and monitoring
Compliance is not a point-in-time certification. The Act's risk management system requirement is explicitly ongoing. Practical sustainability requires:
- Quarterly reviews of RAIL Score distributions across high-risk systems to detect drift in safety, fairness, or reliability (a minimal drift-check sketch follows this list)
- Annual technical documentation updates reflecting any significant changes to models, training data, or deployment context
- Regulatory tracking -- the Act delegates significant detail to implementing acts, standards, and guidance that the European AI Office continues to publish; what constitutes compliance in specific domains will evolve
- Supplier monitoring -- if your high-risk AI uses third-party components or GPAI models, your suppliers' compliance affects your own; build contractual requirements and monitoring into vendor relationships
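For those quarterly drift reviews, a two-sample distribution test over score samples is one reasonable mechanism. The Kolmogorov-Smirnov test and the alpha level here are choices, not anything the Act or RAIL Score mandates, and the sample scores are invented for illustration:

```python
from scipy.stats import ks_2samp

def drifted(baseline_scores, current_scores, alpha=0.01) -> bool:
    """True when this quarter's score distribution departs from baseline."""
    _statistic, p_value = ks_2samp(baseline_scores, current_scores)
    return p_value < alpha

baseline = [8.1, 7.9, 8.4, 8.0, 7.8, 8.2, 8.3, 7.7]  # last accepted quarter
current = [7.1, 6.9, 7.3, 6.8, 7.0, 7.2, 6.7, 7.4]   # this quarter
if drifted(baseline, current):
    print("Drift detected in this dimension -- open a documented review")
```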
Conclusion
The EU AI Act is the most consequential AI regulation currently in force anywhere in the world. For most organizations, the prohibitions are straightforward to comply with -- avoid building social scoring systems, biometric surveillance, and predictive policing tools. The real compliance work is in the high-risk category and GPAI obligations, where the requirements are specific, technical, and ongoing.
The organizations best positioned for August 2026 full enforcement are those that started their inventory and gap assessment work in 2024 and 2025. For organizations that have not yet begun, the window for orderly compliance is narrowing but not closed.
RAIL Score provides continuous, multi-dimensional evaluation across all 8 RAIL dimensions that maps directly to EU AI Act evidence requirements: documented accuracy assessment, bias monitoring, safety evaluation, transparency checking, and accountability tracing. Organizations can use the RAIL Evaluator to assess their AI outputs against these dimensions today -- generating the kind of structured, quantified compliance evidence that technical documentation and post-market monitoring obligations require. For a walkthrough of how RAIL Score fits into your specific compliance architecture, request a demo.