ISO 27001 + ISO 42001 AI SECURITY

ISO 27001 & ISO 42001 for AI Systems

ISO 27001 covers information security broadly. ISO 42001 — published in 2023 — adds AI-specific management requirements. We assess the controls that apply when you build or deploy AI.

Scope note: ISO 27001:2022 defines 93 Annex A controls across 4 themes (Organisational, People, Physical, Technological); ISO/IEC 42001:2023 defines 38 Annex A controls. We assess the AI-relevant subset of each, plus Clause 6 and Annex A of ISO 42001. Certification itself requires an accredited third-party auditor.

6 ISO 27001 AI controls • 6 ISO 42001 AI controls • ~15 minutes

ISO 27001 vs ISO 42001 — What's the Difference?

ISO 27001 (2022)

The global benchmark for information security management systems (ISMS), widely recognised as evidence of enterprise security posture.

  • 93 controls across 4 themes (2022 edition)
  • Commonly required by EU enterprises and government buyers
  • Roughly 60–70% control overlap with SOC 2
  • 6–12 months to certification readiness

ISO 42001 (2023) — New

The first international standard for AI Management Systems (AIMS). Specifically addresses risks, governance, and controls for organisations developing or using AI.

  • AI-specific risk assessment and governance
  • Covers AI transparency, human oversight, and bias
  • Aligns with EU AI Act requirements
  • 4-8 months to initial readiness

ISO 27001 Controls We Test for AI

We focus on Annex A controls where AI systems create unique compliance exposure not covered by traditional tools.

A.8.2 (2013: A.9.2.3)

Privileged Access Rights

AI agents with privileged tool access (shell, file system, DB) must be controlled. We detect dangerous tools exposed to LLM agents without access restrictions.

Rules: R2, R9
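As an illustrative sketch (not our scanner's implementation), the control amounts to gating privileged tools behind an explicit grant before an LLM agent may invoke them. The tool names and function below are assumptions for the example:

```python
# Privileged tools an agent should never reach without an explicit grant.
PRIVILEGED_TOOLS = {"shell", "file_write", "db_query"}

def invoke_tool(tool_name: str, allowed: set, args: dict) -> dict:
    """Refuse privileged tools the agent has not been granted."""
    if tool_name in PRIVILEGED_TOOLS and tool_name not in allowed:
        raise PermissionError(f"agent lacks grant for privileged tool: {tool_name}")
    # Actual tool dispatch would happen here; we just echo the request.
    return {"tool": tool_name, "args": args}
```

A scan failure under this control corresponds to the missing-allowlist case: the agent can call `shell` without any grant check.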
A.8.3 (2013: A.9.4.1)

Information Access Restriction

AI systems must not access data beyond their intended scope. We test for missing authentication on AI endpoints and unguarded admin routes.

Rules: R9.4
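A minimal sketch of the failure mode we test for, framework-free and with hypothetical names (`require_auth`, `infer`): an inference endpoint that rejects unauthenticated callers instead of serving everyone.

```python
def require_auth(handler):
    """Wrap an endpoint so unauthenticated requests are refused."""
    def wrapped(request: dict) -> dict:
        if not request.get("user"):
            return {"status": 401, "body": "authentication required"}
        return handler(request)
    return wrapped

@require_auth
def infer(request: dict) -> dict:
    # The guarded AI endpoint; only reached for authenticated callers.
    return {"status": 200, "body": f"completion for {request['prompt']}"}
```

An endpoint shipped without the decorator is exactly the unguarded route the scan flags.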
A.8.8 (2013: A.12.6.1)

Management of Technical Vulnerabilities

AI systems have unique vulnerabilities (prompt injection, RAG poisoning) not covered by traditional CVE scanners. We check for 91 AI-specific vulnerability patterns.

Rules: R1–R10, R11.1–R12.4
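To make one class of AI-specific pattern concrete, here is a simplified sketch of prompt-injection phrase detection in untrusted text. The patterns are illustrative examples, not our actual rule set:

```python
import re

# Example injection phrasings; real scanners use far richer rule sets.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
]

def flag_injection(text: str) -> bool:
    """Return True if any known injection phrasing appears in the text."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```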
A.8.25 (2013: A.14.2.1)

Secure Development Life Cycle

AI development must follow secure coding practices. We scan for hardcoded secrets, unvalidated inputs to LLMs, and insecure AI framework configurations.

Rules: R5, R9.6
A.8.27 (2013: A.14.2.5)

Secure System Architecture and Engineering

AI pipeline architecture must prevent injection and data exfiltration. We evaluate prompt construction patterns, output handling, and RAG retrieval security.

Rules: R1, R7, R8
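One prompt-construction pattern we look for can be sketched in a few lines: untrusted user input kept in its own chat message rather than concatenated into the system prompt, which is what lets injected text override instructions. The function name is illustrative:

```python
def build_messages(system_prompt: str, user_input: str) -> list:
    """Keep untrusted input out of the system message entirely."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```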
A.5.33 (2013: A.18.1.3)

Protection of Records

AI systems that log conversation data or inference outputs must protect that data. We flag unsanitised logging that captures PII or sensitive system info.

Rules: R9.8
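A hedged sketch of log sanitisation, redacting two obvious PII shapes (emails and US-style phone numbers) before a conversation line is written; real record-protection controls go considerably further:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def sanitise(line: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    line = EMAIL.sub("[EMAIL]", line)
    return PHONE.sub("[PHONE]", line)
```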

ISO 42001 Controls We Assess

ISO 42001 is the new AI management system standard. These are the key controls for AI developers and deployers.

Clause 6.1

AI Risk Assessment

Organisations must identify and assess AI-specific risks. Our scan provides evidence for the risk register: identified attack patterns, severity, and affected components.

Rules: All rules
A.6.1

AI System Transparency

AI systems must be transparent about their capabilities and limitations. We check for missing disclaimers, hallucination risks, and undisclosed automated decision logic.

Rules: R9.7
A.6.2

Human Oversight of AI

High-risk AI decisions must allow for human intervention. We detect agent loops, autonomous tool execution chains, and missing human-in-the-loop checkpoints.

Rules: R2, R9, R9.1–R9.6
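The checkpoint this control demands can be sketched as an approval gate: high-risk tool calls pause for a human callback instead of executing autonomously. The tool names and `approve` callback are assumptions for the example:

```python
HIGH_RISK = {"shell", "send_email", "db_write"}

def execute(tool: str, args: dict, approve) -> str:
    """Run a tool call, routing high-risk tools through a human approver."""
    if tool in HIGH_RISK and not approve(tool, args):
        return "blocked: human approval denied"
    return f"executed {tool}"
```

An agent loop that calls tools with no equivalent of `approve` is the missing human-in-the-loop checkpoint we detect.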
A.6.5

Bias & Fairness Monitoring

AI systems affecting individuals must be monitored for bias. We flag skewed output distributions and surface bias risk areas; detailed demographic parity auditing is covered by our NYC LL144 bias audit product.

Rules: NYC LL144 product
A.9.3

AI System Security

AI-specific security controls are required. Maps directly to HAIEC's 91-rule scan covering prompt injection, tool abuse, RAG poisoning, and model extraction.

Rules: R1–R12
A.10.1

Third-Party AI Providers

AI vendor risk must be managed. We detect direct, unproxied calls to OpenAI, Anthropic, and Google AI APIs that bypass vendor risk controls.

Rules: R2.1–R2.8
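At its simplest, this detection reduces to flagging source files that reference a provider's public API host directly rather than an approved internal gateway. The hostnames below are the providers' real public endpoints; the check itself is an illustrative sketch:

```python
DIRECT_HOSTS = (
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
)

def flags_direct_call(source: str) -> bool:
    """Return True if the source references a provider API host directly."""
    return any(host in source for host in DIRECT_HOSTS)
```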

What the Assessment Produces

Gap Analysis: ISO 27001 AI Controls

Heatmap of which AI-relevant controls are met, partially met, or missing. Prioritised by risk and effort to remediate.

ISO 42001 AI Readiness Score

Score against the six ISO 42001 clauses and Annex A controls most relevant to AI developers. Includes risk register evidence.

Certification Timeline Estimate

Realistic estimate of time to certification readiness based on your current gaps and company size.

Audit-Ready Evidence Package

Scan results, evidence artifacts, and control mapping document for your ISO 27001 auditor.

Who This Assessment Is For

  • Companies pursuing ISO 27001 certification
  • AI companies needing ISO 42001 alignment
  • EU enterprises required to demonstrate an ISMS
  • AI vendors selling to government or finance
  • Teams wanting to benchmark against ISO standards
  • Organisations preparing for EU AI Act compliance

Assess Your ISO 27001 & 42001 AI Readiness

~15 minutes. Covers the AI-relevant controls from both standards with audit-ready evidence generation.

No signup required • Free • Certification requires independent auditor