The EU AI Act applies in phases. Prohibitions on unacceptable-risk AI practices have applied since 2 February 2025, and the rules for general-purpose AI models since 2 August 2025. The requirements governing high-risk AI systems apply from 2 August 2026.
That deadline is now months away, and many organisations developing or deploying AI systems in the EU are not yet prepared.
The August 2026 deadline is not a certification date. It is the point at which non-compliance becomes enforceable. Preparation needs to begin now.
What high-risk means
High-risk AI systems are defined in Article 6 of the Act and fall into two groups: AI used as a safety component of products covered by EU product legislation (Annex I), and AI used in the areas listed in Annex III, including employment, education, critical infrastructure, law enforcement, migration, and access to essential services. If your organisation develops or deploys AI in any of these contexts, you are likely subject to high-risk requirements regardless of size or sector.
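Purely as an illustration of the triage step, the sketch below shows how an organisation might record a first-pass check against the Annex III areas. The area list is abbreviated and the helper is hypothetical; classification under Article 6 ultimately requires legal assessment, not a lookup.

```python
# Hypothetical first-pass triage against Annex III areas (abbreviated list).
# This is an illustrative sketch, not a legal classification tool.

ANNEX_III_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def first_pass_triage(use_case_area: str) -> str:
    """Return a provisional flag indicating whether a full high-risk assessment is needed."""
    if use_case_area.lower() in ANNEX_III_AREAS:
        return "potentially high-risk: full Article 6 assessment required"
    return "not in Annex III areas: document the rationale and re-check if the use changes"

print(first_pass_triage("employment and worker management"))
```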
What the requirements involve
For providers of high-risk AI systems, the Act requires a quality management system, a risk management system, data governance for training, validation, and testing data, technical documentation demonstrating conformity, logging and record-keeping throughout the AI lifecycle, transparency and information for deployers, human oversight measures, and appropriate levels of accuracy, robustness, and cybersecurity.
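As a concrete illustration of the record-keeping point, lifecycle logging is often implemented as structured, append-only event logs. The sketch below shows one minimal way this might look in Python; the event names, fields, and log_event helper are assumptions for illustration, not a format prescribed by the Act.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical append-only lifecycle log for an AI system.
# The schema is illustrative only; the EU AI Act does not prescribe a log format.

LOG_PATH = "ai_lifecycle_log.jsonl"

def log_event(system_id: str, stage: str, details: dict) -> dict:
    """Append one timestamped lifecycle event to a JSON Lines log file."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "stage": stage,  # e.g. "training", "validation", "deployment", "inference"
        "details": details,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Example usage: record a deployment event and a single inference decision.
log_event("cv-screening-v2", "deployment", {"model_version": "2.3.1", "approver": "ops-team"})
log_event("cv-screening-v2", "inference", {"input_ref": "application-8841", "decision": "shortlist", "score": 0.87})
```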
ISO 42001 as a governance foundation
ISO/IEC 42001, the international standard for AI management systems, provides a structured governance framework that maps onto many EU AI Act requirements. It is not a substitute for legal compliance, but it establishes a repeatable process for managing AI risk that aligns with what the Act requires.
AuditVantage supports organisations with EU AI Act risk classification, documentation, governance alignment, and conformity preparation. Get in touch to discuss your requirements.