Skip to main content
EU AI ACT — REGULATION (EU) 2024/1689

EU AI Act Compliance

The EU AI Act is the world's first comprehensive AI regulation. High-risk AI systems face strict technical requirements for robustness, human oversight, logging, and transparency — effective August 2026.

Scope note: The EU AI Act has 113 articles and 13 annexes. HAIEC assesses the technical AI security requirements primarily in Chapter III (high-risk AI obligations) — specifically Art. 9, 10, 12, 13, 14, and 15. Conformity assessment, registration in the EU database, and CE marking require notified body involvement for some categories.

6 key articles assessed4-tier risk classificationHigh-risk AI deadlines tracked

EU AI Act Risk Classification

The Act uses a risk-based approach. Your compliance obligations depend entirely on which tier your AI system falls into.

Unacceptable Risk

In effect from Feb 2025

Banned outright. No compliance path.

  • Social credit scoring by public authorities
  • Real-time remote biometric surveillance in public spaces
  • Subliminal manipulation causing harm
  • Exploitation of vulnerabilities (age, disability)

High Risk

Full enforcement Aug 2026

Strict obligations: conformity assessment, documentation, registration.

  • HR & recruitment AI systems
  • Credit and insurance scoring
  • Education assessment tools
  • Medical device AI, critical infrastructure

Limited Risk

Aug 2025 for GPAI

Transparency obligations only — users must know they're interacting with AI.

  • Customer-facing chatbots
  • Emotion recognition systems
  • Deepfake content generators
  • AI-generated content labelling

Minimal Risk

No mandatory deadline

No specific EU AI Act obligations. Voluntary codes of conduct encouraged.

  • Spam filters
  • AI-powered search
  • Inventory management
  • Content recommendations

EU AI Act Articles We Assess

We focus on Chapter III obligations — the technical requirements for high-risk AI systems — mapped to specific HAIEC scan rules.

Art. 9

Risk Management System

High-risk AI systems must implement a continuous risk management process. We identify AI security risks — prompt injection, data exfiltration, model extraction — and generate evidence for your risk register.

Rules: R1–R12 (all rules)
Art. 10

Data Governance

Training, validation, and test data must be managed with appropriate practices. We scan for raw PII or PHI in AI training pipelines, inadequate data minimisation, and unfiltered data ingestion.

Rules: R3, R5
Art. 12

Record-Keeping & Logging

High-risk AI must automatically log events including output generation, inputs, and human oversight events. We test for missing audit logging in AI inference calls, agent tool executions, and automated decisions.

Rules: R9.8, R-RT09
Art. 13

Transparency & User Information

High-risk AI systems must be transparent about capabilities and limitations. We check for missing disclaimers, undisclosed automation in decision workflows, and chatbots that don't disclose their AI nature.

Rules: R9.7
Art. 14

Human Oversight

High-risk AI must allow human monitoring and intervention. We detect agentic loops, autonomous tool execution chains, and missing human-in-the-loop checkpoints in AI pipelines making high-stakes decisions.

Rules: R2, R9, R9.1–R9.6
Art. 15

Accuracy, Robustness & Cybersecurity

High-risk AI must resist adversarial attacks and remain accurate. Our 91-rule scanner and 22-category runtime tester directly map to this article — prompt injection, adversarial inputs, model robustness testing.

Rules: R1–R10, R11.1–R12.4, R-RT01–R-RT14

EU AI Act Enforcement Timeline

Aug 2024

EU AI Act enters into force

In effect
Feb 2025

Prohibited AI practices (Art. 5) apply

In effect
Aug 2025

GPAI model rules & governance apply

Aug 2026

Full high-risk AI enforcement begins

Aug 2027

High-risk AI in existing products apply

Full enforcement for high-risk AI begins August 2026 — start gap assessment now to meet technical documentation requirements.

What HAIEC Does NOT Cover (EU AI Act)

The EU AI Act has significant legal and procedural requirements beyond our technical scan scope:

Notified body conformity assessment (required for some high-risk categories)
EU database registration (Article 49)
Post-market monitoring obligations
Incident reporting to national authorities
GPAI model technical documentation (Annex XI)
CE marking process and declaration of conformity

For full EU AI Act compliance, pair HAIEC's technical security and robustness assessment with legal counsel for classification, conformity assessment guidance, and regulatory filing requirements.

Who This Assessment Is For

EU-based AI companies with high-risk use cases
Non-EU companies with AI products sold in the EU
HR tech, fintech, healthcare AI — high-risk categories
AI companies preparing for Aug 2026 enforcement
Teams needing Art. 15 robustness evidence for auditors
Companies aligning with ISO 42001 and EU AI Act simultaneously

Assess Your EU AI Act Readiness

High-risk AI enforcement begins August 2026. Assess your Art. 9, 12, 14, and 15 technical gaps now.

No signup required • Free • Regulation (EU) 2024/1689 • Not legal advice